Face Value: ABA Article on Facial Recognition Technology

Michael Naughton authored "Considering Face Value: The Complex Legal Implications of Facial Recognition Technology," published in the Winter 2025 edition of the American Bar Association's Criminal Justice magazine. A summary of the article is below:

Considering Face Value: Unpacking the Complex Legal Implications of Facial Recognition Technology

Have you ever wondered how facial recognition technology (FRT) works, and more importantly, how it's impacting our lives and legal systems? Recent headlines have highlighted some disturbing instances of wrongful arrests and misuse of this powerful technology. In today’s post, we’re diving deep into the complex legal and ethical issues surrounding FRT in law enforcement.

A Case of Mistaken Identity: The Story of Robert Williams

In January 2020, Robert Williams, a Black man living in Detroit, Michigan, experienced a harrowing ordeal. He was wrongfully arrested in front of his family and detained for 30 hours, all because a facial recognition system misidentified him as a shoplifter. The investigator relied almost entirely on this faulty match, leading to a traumatic experience for Williams. This story isn't an isolated incident, and it raises serious questions about the reliability and biases of FRT.

What Exactly is Facial Recognition Technology?

FRT is a type of artificial intelligence (AI) that analyzes facial features to verify a person’s identity. It works through four main components:

  1. Detection: Locating a face within an image or video.

  2. Analysis: Creating a geometric map of the face, measuring distances between features.

  3. Recognition: Verifying a claimed identity by comparing the face to a single stored image (a one-to-one match).

  4. Identification: Searching a database of images for the closest match to an unknown face (a one-to-many match).
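The matching logic behind the last two stages can be sketched in miniature. This is an illustrative toy, not any vendor's pipeline: in a real system a deep neural network converts a detected, aligned face into an embedding vector (a "faceprint"), and the names, vectors, and threshold below are all made up for demonstration. The core idea is that verification is a single distance check, while identification is a nearest-neighbor search over a database:

```python
import numpy as np

# Hypothetical embeddings: a real system would derive these vectors
# from a neural network applied to a detected face image.
database = {
    "alice": np.array([0.1, 0.9, 0.3]),
    "bob":   np.array([0.8, 0.2, 0.5]),
}

def verify(probe, claimed_id, threshold=0.4):
    """Recognition (one-to-one): is the probe the person they claim to be?"""
    dist = np.linalg.norm(probe - database[claimed_id])
    return bool(dist < threshold)

def identify(probe):
    """Identification (one-to-many): who in the database is closest?"""
    name, dist = min(
        ((n, np.linalg.norm(probe - e)) for n, e in database.items()),
        key=lambda t: t[1],
    )
    return name, dist

# Embedding of a newly captured face, close to alice's stored entry.
probe = np.array([0.12, 0.88, 0.31])
print(verify(probe, "alice"))  # True: distance is well under the threshold
print(identify(probe))         # ("alice", small distance)
```

Note that both operations report a match whenever a distance falls under a tunable threshold, which is exactly where false positives enter: an overly permissive threshold, or embeddings that cluster poorly for certain demographic groups, produces confident-looking but wrong matches.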

This technology has advanced rapidly, transforming how both private and public law enforcement investigations are conducted. However, its increasing use brings concerns about privacy violations and racial biases.

A Brief History of FRT

FRT's development spans over 60 years. It began in 1964 with researchers studying facial recognition computer programming. Key milestones include:

  • 1991: The development of "Eigenfaces" at MIT, using statistical analysis to identify facial patterns.

  • 1993: The Defense Advanced Research Projects Agency (DARPA) sponsored the Face Recognition Technology (FERET) program, creating a large database of facial images.

  • 2006: The advent of deep learning, which significantly improved FRT accuracy.

  • 2014: Facebook unveiled DeepFace, an algorithm that recognizes faces with near-human accuracy.

These advancements led to widespread deployment of FRT in law enforcement, airports, and even entertainment venues.

Widespread Use and Growing Concerns

By 2016, an estimated 50% of American adults were in law enforcement facial recognition databases. FRT is now used at borders, in health care to identify patients and diagnose conditions, and even in schools for security. However, this widespread use is accompanied by serious concerns.

The Problem of Bias in FRT Systems

Studies have shown that FRT systems have higher rates of false positives for certain demographic groups, particularly Asian, African American, and Native American people. This bias can lead to wrongful accusations and arrests. In response to such concerns, New York State has banned FRT use in schools.

State Laws and Legal Battles

Several states have enacted laws to regulate FRT. Illinois’ Biometric Information Privacy Act (BIPA) limits private firms' collection of biometric data without consent. Massachusetts, Virginia, and Washington have laws outlining authorized uses of FRT by law enforcement and restrictions on its use. Some cities, like San Francisco and Portland, have even banned FRT use by government agencies.

A recent New Jersey court case, State v. Arteaga, highlighted a defendant's right to discovery related to FRT used in their case, emphasizing the need for transparency and the ability to challenge the technology's reliability.

The Robert Williams Case and Its Aftermath

Robert Williams’ case led to a settlement with the City of Detroit, resulting in new guidelines for FRT use. Now, FRT can only be used in serious crime investigations, and any leads must be corroborated by independent evidence. This case underscored the importance of human oversight and proper procedure in using FRT.

The FTC Steps In: Rite Aid’s FRT Practices Prohibited

The Federal Trade Commission prohibited Rite Aid from using FRT for surveillance for five years after alleging the company used it without reasonable safeguards, disproportionately affecting low-income and nonwhite neighborhoods. This action highlighted the need for accountability and proper oversight in commercial FRT use.

Future Considerations and the Need for Transparency

As FRT continues to evolve, federal legislation is needed to establish clear guidelines. Transparency is crucial, both in the technology itself and in how it’s used. Attorneys must educate themselves about FRT to challenge its misuse in court and advocate for their clients' rights.

Conclusion

FRT has the potential to enhance security and aid investigations, but its widespread use requires careful consideration of privacy rights, racial biases, and the potential for misuse. We must strive to find a balance between technological advancement and the protection of civil liberties. As AI and FRT evolve, so too must our laws and practices to ensure a just and equitable future.
