Rite Aid’s Facial Recognition Debacle
In December 2023, the Federal Trade Commission (FTC) banned Rite Aid, one of the largest U.S.-based drugstore chains, from using facial recognition technology in its retail stores for five years. The decision followed multiple allegations that the company misused the technology and harmed consumers through that use.
In this article, we’ll look at what went wrong and explore how buyers can be a force for positive development as AI evolves.
“Shrinkage” is a top topic in the retail world. Brick-and-mortar stores have an obligation to curtail as much theft as possible in order to maintain a healthy bottom line, and surveillance cameras have, for decades, been a reliable tool in that endeavor. What’s more recent, however, is the use of AI-based facial recognition technology in retail stores. In theory, when the technology is top-tier and the humans using it are knowledgeable and honest, it can be an effective approach. But when those elements are missing, trouble sets in. And that’s apparently what happened at Rite Aid.
In the Rite Aid case, the FTC “charges that the retailer failed to implement reasonable procedures and prevent harm to consumers in its use of facial recognition in hundreds of stores.” But reading more deeply into the charges, it’s clear that this is more than a case of mere negligence. Not only did someone, or some team, at Rite Aid corporate decide to buy and deploy facial recognition software in some of its stores; according to the FTC, they also:
Chose a problematic vendor;
Did not test for or request audit information about the accuracy of the vendor’s system before implementation;
Did not test or monitor the system for false positives during the deployment period;
Allowed inadequately trained employees to make decisions about potential shoplifters based on unchecked data;
Failed to inform shoppers that facial recognition technology was in use;
Discouraged employees from revealing information to shoppers about the program; and
Allowed low-quality images from store CCTV cameras, mobile phone cameras, and mass media articles (so, pictures of pictures…) to be used as data inputs.
Regarding the first bullet, choosing a problematic vendor: a Reuters story from 2020 detailed how Rite Aid selected an AI-based facial recognition vendor whose product was known to produce high false positive rates, particularly when used to identify people of color. Further, irrespective of the quality of the tool (or perhaps because of it), the vendor reportedly included language in its customer contracts denying liability for inaccuracies in data processed by the tool, something that should have, at the very least, signaled caution to buyers.
Thus, risky decision-making on the part of Rite Aid execs is evident from the get-go. Continuing down the list, any one of these bullets on its own would be the mark of a very poor IT or cybersecurity program. None of these things, in isolation, should happen when IT and security teams are skilled, trained, abiding by industry standards and frameworks, and held accountable for their actions. In totality, Rite Aid’s actions should be abhorrent to any security or IT practitioner with a modicum of integrity. Oh, and I’d be remiss if I didn’t mention that the December 2023 FTC ruling came more than a decade after the agency charged Rite Aid with “failure to protect the sensitive financial and medical information of its customers and employees.” In other words, Rite Aid has a history of disturbing data security practices (or maybe non-practices).
AI improvement, but still far from perfect
As interest in and use of artificial intelligence (AI) grows, builders and buyers of AI are going to have to take a larger governance role, ensuring that its outputs are used for good rather than harm. Rite Aid tried to sidestep the issue of wrongdoing by stating that it had stopped using the facial recognition technology three years before the FTC’s investigation, but the investigation found that Rite Aid failed to properly vet the technology before deploying it and failed to allocate the resources needed to safely manage the surveillance program. The three-year lag is not terribly relevant.
Still, we have to account for history. Facial recognition software was not terribly accurate or reliable in 2012, when Rite Aid first deployed it. What’s more, guidelines and established processes for using it were extremely limited back then. So was Rite Aid simply remiss in buying and using a spotty technology when better options were not yet commercially available? Perhaps. The vendor probably shouldn’t have been selling software with so many flaws. However, the vendor did cover its a$$ets in its contracts, which should have been a huge warning flag to buyers that the onus would be on them to use the tech properly, to analyze the results carefully, and to take ownership of any decisions and actions resulting from the technology’s use.
Then there’s the next big question: Should Rite Aid have upgraded its facial recognition program as more accurate systems became available?
There is undeniable proof that facial recognition technology has improved a lot in the last twenty years, including the eight years Rite Aid was using it. One NIST study reports that “high-performing” algorithms have improved roughly 20-fold since 2013. This means that Rite Aid had plenty of opportunities to upgrade the technology and to correct any misuse or faulty practices.
What does it mean?
The recent charges from the FTC are notable because this is the first instance in the U.S. in which a government body has banned a business from using facial recognition technology over the way it was used. That should not be too surprising to anyone watching what’s happening with AI in the workplace. Though many people are bullish on AI, there are plenty of concerns about its unchecked power. And, to be sure, many companies are forging ahead with AI-based products and services without putting enough time and attention into their algorithms and the data used to train them. You can’t have one without the other when it comes to AI: a good algorithm with bad data inputs will produce inaccurate outputs. On the flip side, a badly written algorithm, even if the training data is as clean and accurate as can be, won’t produce reliable results. Rite Aid was using bad technology and bad data. They hit all the “bads.”
But back to the ruling and its general impact — what does this mean for facial recognition technology builders and buyers today?
NIST and facial recognition testing
To encourage innovation and improvement, the National Institute of Standards and Technology (NIST) provides guidance, data, and testing for many areas, including facial recognition. The NIST Face Recognition Vendor Test (FRVT) program, initiated in 2000 and since split into two tracks (face recognition and face analysis), tests vendor offerings for accuracy. As noble as these efforts are, the FRVT is limited by its own data inputs.
To start, all testing is voluntary, which means that only a small subset of existing vendors is tested. Substandard vendors are not required to participate, which makes it extremely difficult to assess how well facial recognition tools, as a whole, perform.
Second, NIST recognizes that the industry has not yet standardized, which means that vendor offerings can’t always be compared. It’s challenging to determine the accuracy of a category of technologies when each vendor’s components vary widely. Further, many vendors don’t want to reveal their “secret sauce,” meaning they won’t open up all components to testing.
As long as vendors are not required to certify their offerings, abide by regulations, or be held liable when their technology is misused or abused, low-quality products will exist. If low-quality vendors offer low-cost options, some buyers will take their shot at using them and gamble on the outcomes.
Present-day status
What NIST is able to assess, however, is general trends in facial recognition technology accuracy. According to one recent report, accuracy “varies notably across algorithms” and also varies across different types of images. The data in this report is somewhat skewed, so leave it to The Reformed Analyst to take a contrarian point of view.
The report says that “Forty-five of the 105 identified algorithms were >99% accurate when comparing probe templates from high-quality images to a gallery of 1.6 million templates from high-quality images.” Yay! What reliable technology we have!
Hold up. Let’s read that a little more closely: 45 out of 105 is less than 43%, which means quite a bit more than half of the algorithms were less than 99% accurate, and the report doesn’t detail by how much. It could be 98% or it could be 22%.
That same positive statement turns a little less impressive when the number of templates is increased to 3 million. In that test, only three algorithms maintained the same level of accuracy.
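Part of the explanation is simple arithmetic: every additional gallery template is another chance for a false match, so the very same algorithm looks worse as the gallery grows. Here is a rough back-of-envelope (my own simplification with hypothetical false match rates, not the report’s methodology) that illustrates the effect:

```python
# Back-of-envelope: chance that a probe with no true mate in the gallery still
# triggers at least one false match, assuming (simplistically) that a one-to-many
# search behaves like independent one-to-one comparisons at a fixed false match
# rate (FMR). Illustrative only; not NIST's methodology.

def prob_any_false_match(fmr: float, gallery_size: int) -> float:
    return 1 - (1 - fmr) ** gallery_size

for fmr in (1e-6, 1e-7):                    # hypothetical per-comparison FMRs
    for gallery in (1_600_000, 3_000_000):  # gallery sizes cited in the report
        p = prob_any_false_match(fmr, gallery)
        print(f"FMR={fmr:.0e}, gallery={gallery:,}: P(at least one false match) ~ {p:.2f}")
```

Under those toy assumptions, nearly doubling the gallery meaningfully raises the odds of at least one stray false match even though the algorithm itself hasn’t changed, which is one reason headline accuracy numbers are so sensitive to gallery size.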
When it comes to assessment across race and sex, the results grow dimmer. If you have time, scroll through this 82-page report published by NIST. In short, false negative and false positive rates vary tremendously by algorithm, but both are notably higher for Black, Latino, and Asian individuals, and for women, than they are for white men. Further, accuracy is lower for young and elderly individuals than for middle-aged individuals.
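Buyers don’t have to take any of this on faith; with a labeled evaluation set, the per-group breakdown is straightforward to compute. Here is a minimal sketch (the groups, scores, and threshold below are all made-up) of the bookkeeping behind a false positive / false negative audit by demographic group:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, is_same_person, match_score).
# A real audit would use far more data; this only shows the bookkeeping.
results = [
    ("group_a", True, 0.91), ("group_a", False, 0.42), ("group_a", False, 0.67),
    ("group_b", True, 0.58), ("group_b", False, 0.71), ("group_b", True, 0.88),
]
THRESHOLD = 0.60  # hypothetical decision threshold

tallies = defaultdict(lambda: {"fp": 0, "fn": 0, "impostors": 0, "mates": 0})
for group, same_person, score in results:
    predicted_match = score >= THRESHOLD
    if same_person:
        tallies[group]["mates"] += 1
        tallies[group]["fn"] += int(not predicted_match)  # missed a true match
    else:
        tallies[group]["impostors"] += 1
        tallies[group]["fp"] += int(predicted_match)      # flagged the wrong person

for group, t in sorted(tallies.items()):
    fpr = t["fp"] / t["impostors"] if t["impostors"] else 0.0
    fnr = t["fn"] / t["mates"] if t["mates"] else 0.0
    print(f"{group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

If one group’s false positive rate comes back several times higher than another’s, that is exactly the kind of result that should stop a rollout, not end up as a footnote in a vendor’s marketing deck.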
Looking forward
What this means is that the current state of facial recognition technology is inconsistent. For some use cases, it’s an excellent solution. For others, it isn’t there yet. It also means that users will have to be diligent about how they use the technology, irrespective of the efficacy (demonstrated or self-stated) of the vendor offering. A company that buys a great technology — be it facial recognition or otherwise — must:
Properly vet the vendor before signing a contract
Monitor and audit data outputs during use
Establish best practices for operating the tool
Maintain governance over actions resulting from the technology’s use (see the sketch below)
Failure to do so will result in poor decision making and possibly worse. In the case of facial recognition technology, we’re talking about data privacy issues, bias and discrimination, personal harm, and the intentional or unintentional spread of disinformation and misinformation. That’s just to start.
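What might that look like in practice? Here is a minimal sketch of the “monitor, audit, and govern” bullets above; every name, threshold, and field in it is hypothetical, and it is a shape to aim for rather than anyone’s actual implementation:

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("fr_audit")

MATCH_THRESHOLD = 0.90    # hypothetical: set during vetting, revisited at every audit
MIN_IMAGE_QUALITY = 0.70  # hypothetical: refuse to act on blurry CCTV frames

@dataclass
class MatchCandidate:
    store_id: str
    score: float          # similarity score returned by the (hypothetical) vendor
    image_quality: float  # 0..1 estimate of probe image quality

def handle_candidate(c: MatchCandidate) -> str:
    """Route a facial recognition hit to logging and human review, never to automatic action."""
    ts = datetime.now(timezone.utc).isoformat()
    if c.image_quality < MIN_IMAGE_QUALITY:
        audit_log.info("%s store=%s DISCARDED low-quality probe (q=%.2f)", ts, c.store_id, c.image_quality)
        return "discarded"
    if c.score < MATCH_THRESHOLD:
        audit_log.info("%s store=%s below threshold (score=%.2f)", ts, c.store_id, c.score)
        return "no_match"
    # Above threshold: queue for a trained reviewer; the score alone never triggers a confrontation.
    audit_log.info("%s store=%s QUEUED for human review (score=%.2f)", ts, c.store_id, c.score)
    return "queued_for_review"

print(handle_candidate(MatchCandidate("store_042", score=0.95, image_quality=0.85)))
```

The details will differ from deployment to deployment, but the shape matters: quality gates on inputs, thresholds that get revisited, an audit trail for every decision, and a trained human between the algorithm’s output and any action taken against a person. That is, more or less, the inverse of what the FTC found at Rite Aid.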
All of the above being said, this post is not meant to be a warning. It is meant to level-set on the state of AI-based products and facial recognition technology. It is also meant to be a reminder that no technology is plug-and-play. The “best” products in the world aren’t fruitful if they are mismanaged (or unmanaged). And good decisions can’t be made using flawed data. As a security community, we all have a responsibility to continuously improve processes and people skills. Technology is just one tool in the proverbial toolbox.