A proposed settlement between the Federal Trade Commission and Rite Aid will bar the national pharmacy retailer from using facial recognition technology in its stores for the next five years. The settlement would close out a case rooted in conduct stretching back more than a decade.
According to the FTC, between 2012 and 2020, Rite Aid used facial recognition software in stores across several major cities, without notifying customers, to scan people entering and predict who might be likely to shoplift. The FTC says Rite Aid not only falsely identified thousands of people as potential criminals, but that Black and Latino customers were disproportionately targeted, followed around the store by employees or security guards, and in some cases even reported to the police. CNBC adds that Rite Aid's technology was more likely to generate false-positive shoplifter matches in predominantly Black and Asian neighborhoods than in predominantly white ones.
“This is a groundbreaking case, a major stride for privacy and civil rights, and hopefully just the beginning of a trend,” John Davisson of the Electronic Privacy Information Center told The Guardian. “But it’s important to note that Rite Aid isn’t alone. Businesses routinely use unproven algorithms and snake oil surveillance tools to screen consumers, often in secret. The FTC is right to crack down on these practices, and businesses would be wise to take note. Algorithmic lawlessness is not an option any more.”
For its part, Rite Aid issued a statement on its website saying the face-spotting tech was a "pilot program the Company deployed in a limited number of stores. Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began." The retailer added that "we fundamentally disagree with the facial recognition allegations in the [FTC's] complaint."
Clearview
Beyond Rite Aid, facial recognition technology is increasingly used by law enforcement to identify and locate potential criminals. The BBC has found that police across the US have used software from the firm Clearview AI to conduct over 1 million searches; the company maintains a database of "30 billion images scraped from platforms such as Facebook taken without users' permissions". The Miami Police Department said that it uses the software for "every type of crime". Civil liberties groups have put intense pressure on law enforcement agencies to ban the software, both for its violation of privacy and for its poor track record of accurate matches.
THE VERDICT:
It's interesting to contrast the furor over the IP infringement claims that AI training models have sparked with the relatively muted response to facial recognition models scraping our photos from the internet without our consent. Civil and privacy rights groups continue to fight for protections and regulations, but widespread awareness of the issue remains elusive.