
A recent incident involving Michael Cohen, former attorney for Donald Trump, and the use of artificial intelligence in court proceedings has sparked important discussions about AI’s ethical and practical implications in the legal sphere, particularly in court filings.
Citing Non-Existent Cases
In November 2023, David M. Schwartz, an attorney for Michael Cohen, filed a motion seeking early termination of Cohen’s supervised release. In his submission, Schwartz cited several District Court cases, notably “United States v. Figueroa-Florez,” “United States v. Ortiz,” and “United States v. Amato.” These opinions, complete with detailed case numbers, summaries, and decision dates, appeared to be on-point precedents for granting defendants early release from supervision; they even included narratives about two supposed cocaine distributors and a tax evader. On closer examination, however, all three cases turned out to be completely fabricated.
Recognizing the discrepancies, US District Judge Jesse Furman ordered Schwartz to furnish copies of the cited decisions or face sanctions under Rule 11(b)(2) and (c) of the Federal Rules of Civil Procedure. In response, Schwartz filed an explanation, regretfully admitting that he had failed to personally verify the citations before submitting them.
This incident is not isolated. In a similar case earlier in 2023, a lawsuit was dismissed and a $5,000 fine imposed on attorneys who had used an AI tool to cite non-existent cases. Additionally, a federal judge has recently introduced a rule barring AI-written submissions unless they are verified by a human, underscoring how AI is reshaping the legal landscape.
Pleading Naïveté
In Cohen’s case, Google Bard, a generative text service, was cited as the source of the fictitious references. Cohen, who was disbarred five years ago and lost his license to practice law, admitted to a misconception about what Google Bard is:
As a nonlawyer, I have not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not. Instead, I understood it to be a super‑charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online. I did not know that Google Bard could generate non-existent cases, nor did I have access to Westlaw or other standard resources for confirming the details of cases. Instead, I trusted Mr. Schwartz and his team to vet my suggested additions before incorporating them.
Cohen subsequently shifted some of the responsibility onto Schwartz, remarking:
“It did not occur to me then and remains surprising to me now—that Mr. Schwartz would drop the cases into his submission wholesale without even confirming that they existed. I deeply regret any problems Mr. Schwartz’s filing may have caused.”
Lessons for the Legal Community
The Cohen case highlights critical concerns about the ethical implications of using AI in legal work. AI offers substantial benefits, such as efficiency and access to data-driven insights, but its use in generating legal references is fraught with risk. The incident demonstrates the dangers of relying on AI for legal tasks without proper verification, which can lead to significant ethical and legal consequences.
AI systems are advanced but not infallible. They can process vast amounts of data, yet they do not truly grasp the nuances required for legal research. Generative models are prone to producing plausible-sounding but incorrect or fabricated information, a phenomenon commonly called “hallucination,” especially when they lack constraints on their output. This flaw was evident in Cohen’s case, where AI-generated citations appeared factual but were not.
Consequently, it’s essential for law firms and legal professionals to establish strong protocols for AI usage. Without careful management, lawyers risk submitting false court decisions or inaccurate legal analyses, potentially harming their clients, wasting valuable time and resources of the court and opposing parties, and damaging the reputations of judges and courts cited in these fictitious rulings. Maintaining the legal system’s integrity and trustworthiness necessitates a careful and responsible approach to AI integration in legal practices.
Conclusion
The incident involving Michael Cohen and AI-generated legal citations marks a significant turning point at the intersection of legal ethics and technology. It underscores both the advantages and the challenges of integrating AI into legal practice, highlighting the importance of its ethical and conscientious use. As the legal profession navigates this emerging landscape, it must proceed with a heightened sense of responsibility, ensuring that AI is used to augment justice rather than to substitute for human judgment.
This event serves as a crucial call to action for the legal community to evolve in response to technological advancements, while steadfastly maintaining the highest standards of ethics. It presents a unique chance to reshape legal practices in the era of AI. This transformation should aim to ensure that technology contributes positively to the noble quest for justice, doing so in a way that is fair, transparent, and respectful of the rights and dignity of everyone involved.




