Arkansas Public Defenders 2024
Michael Naughton presented at the Arkansas Public Defender Summit in November 2024 on "Learning how to Utilize Justice Optimized to Power AI in Practice". The presentation focused on using AI tools, specifically large language models (LLMs), in a legal context, covering various aspects of AI, prompting techniques, and considerations for ethical and effective usage.
Key Points:
Introduction to AI: The presentation introduces different types of AI, focusing on chat-based models such as OpenAI's ChatGPT (including GPT-4o and o1-preview), Google's Gemini, Meta's Llama 3.2, Perplexity, and Anthropic's Claude 3.5. It also briefly touches on the use of 3D models in court cases, specifically for visualizing locations.
Prompt Engineering: A significant portion of the presentation is dedicated to AI prompting. It emphasizes that AI output is only as good as the prompts it receives. Effective prompts should be communicative, strategic, and, on average, around 21 words long. It outlines four main areas to consider when writing prompts (a brief sketch follows the list):
Persona: Defining the role the AI should assume (e.g., criminal defense attorney, probation agent).
Task: Clearly stating what the AI should do (e.g., conduct research, draft a memo).
Context: Providing background information related to the task (e.g., details about a client's history).
Format: Specifying how the output should be presented (e.g., bullet points, long-form paragraphs).
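To make the four-part structure concrete, here is a minimal sketch in Python that assembles a persona/task/context/format prompt and sends it to a chat model through the OpenAI SDK. The model name, client details, and prompt wording are illustrative placeholders, not taken from the presentation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The four building blocks from the presentation; all details here are hypothetical.
persona = "You are an experienced criminal defense attorney."
task = "Draft an outline for a sentencing memorandum."
context = (
    "The client is a first-time offender convicted of a nonviolent property "
    "offense, with steady employment and strong community ties."
)
output_format = "Present the outline as short bullet points with brief headings."

prompt = f"{persona} {task} {context} {output_format}"

# Never include real, identifying client information in a third-party AI service.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Each variable maps to one of the four areas above; refining any one of them and re-running the request is the "continue the conversation" pattern the tips below describe.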
Examples and Tips: The presentation includes numerous examples of prompts and follow-up prompts, covering scenarios such as summarizing letters, brainstorming arguments, and researching specific topics. It also offers tips for prompt writing:
Use natural language.
Be specific and provide context.
Specify the desired format.
Continue the conversation and refine prompts as needed.
Don't push the AI into hallucinations (confident but fabricated or incorrect responses).
Consider the voice and persona, and maintain personal advocacy.
AI Limitations and Ethical Considerations: The presentation highlights limitations and ethical concerns related to using AI in legal settings. It quotes the terms of use from OpenAI and Meta Llama 3, which restrict relying on AI output for decisions that have a legal or material impact on individuals. It also discusses the data collection policies of Google Gemini.
AI Model Comparison: The presentation briefly compares different AI models:
ChatGPT: Described as the gold standard, with fine-tuning capabilities.
Google Gemini: Noted for its integration with Google Workspaces.
Llama 3.2: Focuses on local hosting for increased security (see the sketch after this comparison).
Claude 3.5: Mentioned for its robust LLM and its "computer use" capability, which lets the model operate a computer directly.
Each model has its own data usage and policy considerations.
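As a hedged illustration of the local-hosting point above: an open-source runtime such as Ollama can serve Llama models entirely on a local machine, so case details never leave the office. This is one possible setup, not the one demonstrated at the summit; the model tag and prompt are assumptions.

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# Because the model runs locally, the prompt never leaves your machine --
# the confidentiality advantage noted in the comparison above.
response = ollama.chat(
    model="llama3.2",  # assumes `ollama pull llama3.2` has been run
    messages=[{
        "role": "user",
        "content": "Summarize the elements of burglary under a generic "
                   "state statute, in bullet points.",
    }],
)
print(response["message"]["content"])
```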
Application in Legal Practice: The core message is that AI should be used as a tool to enhance legal work, not replace it. Attorneys should use AI to:
Construct compelling narratives for their clients.
Research areas relevant to their cases.
Brainstorm ideas and perspectives.
Craft richer sentencing memoranda and oral allocutions.
Avoid simply copying and pasting AI output.
Conclusion: The presentation concludes by emphasizing that the act of crafting prompts is crucial. This process helps attorneys actively analyze cases, investigate variables, and brainstorm persuasive arguments. The real value of AI lies in the way it can assist attorneys in thinking more deeply and comprehensively about their clients and their cases.
Link to the presentation here.
Face Value: ABA Article on Facial Recognition technology
Michael Naughton authored Considering Face Value: The Complex Legal Implications of Facial Recognition Technology, published in the Winter 2025 edition of the American Bar Association's Criminal Justice magazine, available here. A summary of the article is below:
Considering Face Value: Unpacking the Complex Legal Implications of Facial Recognition Technology
Have you ever wondered how facial recognition technology (FRT) works, and more importantly, how it's impacting our lives and legal systems? Recent headlines have highlighted some disturbing instances of wrongful arrests and misuse of this powerful technology. In today’s post, we’re diving deep into the complex legal and ethical issues surrounding FRT in law enforcement.
A Case of Mistaken Identity: The Story of Robert Williams
In January 2020, Robert Williams, a Black man living in Detroit, Michigan, experienced a harrowing ordeal. He was wrongfully arrested in front of his family and detained for 30 hours, all because a facial recognition system misidentified him as a shoplifter. The investigator relied almost entirely on this faulty match, leading to a traumatic experience for Williams. This story isn't an isolated incident, and it raises serious questions about the reliability and biases of FRT.
What Exactly is Facial Recognition Technology?
FRT is a type of artificial intelligence (AI) that analyzes facial features to verify a person’s identity. It works through four main components, illustrated in the code sketch after this list:
Detection: Locating a face within an image or video.
Analysis: Creating a geometric map of the face, measuring distances between features.
Recognition: Confirming a person’s identity from an image.
Identification: Comparing the face to a database of images to identify the person.
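For readers who want to see these stages in code, here is a minimal sketch using the open-source face_recognition Python library. It illustrates the general pipeline, not the proprietary systems discussed in the article; the image filenames are placeholders, and the library's "analysis" step produces a learned 128-number embedding rather than a literal geometric map of facial distances.

```python
import face_recognition  # pip install face_recognition

# Detection: locate faces in the probe image.
probe = face_recognition.load_image_file("probe.jpg")  # placeholder file
locations = face_recognition.face_locations(probe)

# Analysis: encode each detected face as a 128-dimensional embedding.
probe_encodings = face_recognition.face_encodings(probe, known_face_locations=locations)

# A one-image "database" standing in for a gallery of known faces.
known = face_recognition.load_image_file("database_entry.jpg")  # placeholder file
known_encoding = face_recognition.face_encodings(known)[0]

# Recognition/Identification: compare the probe against the database.
for encoding in probe_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match: {match[0]}, distance: {distance:.3f}")
```

Note that the tolerance threshold is exactly where the bias problems discussed below can enter: a fixed cutoff that works well for one demographic group can produce elevated false positives for another.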
This technology has advanced rapidly, transforming how both private and public law enforcement investigations are conducted. However, its increasing use brings concerns about privacy violations and racial biases.
A Brief History of FRT
FRT's development spans more than 60 years. It began in 1964, when researchers first studied computer programs for recognizing faces. Key milestones include:
1991: The development of "Eigenfaces" at MIT, using statistical analysis to identify facial patterns.
1998: The Defense Advanced Research Projects Agency launched the Face Recognition Technology (FERET) program, creating a large database of facial images.
2006: The advent of deep learning, which significantly improved FRT accuracy.
2014: Facebook unveiled DeepFace, an algorithm with near-human accuracy in recognizing faces.
These advancements led to widespread deployment of FRT in law enforcement, airports, and even entertainment venues.
Widespread Use and Growing Concerns
By 2016, an estimated 50% of American adults were in law enforcement facial recognition databases. FRT is now used at borders, in health care to identify patients and diagnose conditions, and even in schools for security. However, this widespread use is accompanied by serious concerns.
The Problem of Bias in FRT Systems
Studies have shown that FRT systems have higher rates of false positives for certain demographic groups, particularly Asian, African American, and Native American groups. This bias can lead to wrongful accusations and arrests. For example, New York State has banned FRT use in schools due to concerns about these biases.
State Laws and Legal Battles
Several states have enacted laws to regulate FRT. Illinois’ Biometric Information Privacy Act (BIPA) limits private firms' collection of biometric data without consent. Massachusetts, Virginia, and Washington have laws outlining authorized uses of FRT by law enforcement and restrictions on its use. Some cities, like San Francisco and Portland, have even banned FRT use by government agencies.
A recent New Jersey court case, State v. Arteaga, highlighted a defendant's right to discovery related to FRT used in their case, emphasizing the need for transparency and the ability to challenge the technology's reliability.
The Robert Williams Case and Its Aftermath
Robert Williams’ case led to a settlement with the City of Detroit, resulting in new guidelines for FRT use. Now, FRT can only be used in serious crime investigations, and any leads must be corroborated by independent evidence. This case underscored the importance of human oversight and proper procedure in using FRT.
The FTC Steps In: Rite Aid’s FRT Practices Prohibited
The Federal Trade Commission prohibited Rite Aid from using FRT for surveillance for five years after alleging the company used it without reasonable safeguards, disproportionately affecting low-income and nonwhite neighborhoods. This action highlighted the need for accountability and proper oversight in commercial FRT use.
Future Considerations and the Need for Transparency
As FRT continues to evolve, federal legislation is needed to establish clear guidelines. Transparency is crucial, both in the technology itself and in how it’s used. Attorneys must educate themselves about FRT to challenge its misuse in court and advocate for their clients' rights.
Conclusion
FRT has the potential to enhance security and aid investigations, but its widespread use requires careful consideration of privacy rights, racial biases, and the potential for misuse. We must strive to find a balance between technological advancement and the protection of civil liberties. As AI and FRT evolve, so too must our laws and practices to ensure a just and equitable future.