ChatGPT as the Enemy: New Sanctions Against Lawyers Who Rely on It
In recent developments, the use of ChatGPT by legal professionals has come under fire, raising concerns about its reliability and ethical implications. A case involving a lawyer in Kansas highlights the potential consequences of relying on this technology in the legal field.
The Kansas Case
A lawyer faced severe sanctions and reputational damage after using ChatGPT to fill in citations for a legal brief while dealing with a personal emergency. Rather than seeking an extension from opposing counsel or the court, the lawyer opted for a quick fix that ultimately backfired.
When emergencies arise, legal professionals should communicate their situations openly; courts are generally sympathetic and willing to grant extensions.
Misleading Information
In Lexos Media v. Overstock, ChatGPT supplied numerous fabricated citations that misled the lawyer, including:
- Liquid Dynamics Corp. v. Vaughan Co., Inc., 449 F.3d 1209, 1224 (Fed. Cir. 2006): “Expert testimony should not be excluded simply because the expert applied an incorrect claim construction.”
- AVM Technologies, LLC v. Intel Corp., 927 F.3d 1364, 1370–71 (Fed. Cir. 2019): “[T]he appropriate response to a potential flaw in an expert’s methodology is cross examination, not exclusion.”
- Hockett v. City of Topeka, No. 19-4037-DDC, 2020 WL 6796766, at *3 (D. Kan. Nov. 19, 2020): “The exclusion of evidence is an extreme sanction, and courts should prefer less severe remedies.”
- Woodworker’s Supply, Inc. v. Principal Mut. Life Ins. Co., 170 F.3d 985, 993 (10th Cir. 1999): “Courts consider the prejudice or surprise to the party against whom the testimony is offered.”
- i4i Ltd. Partnership v. Microsoft Corp., 598 F.3d 831, 854 (Fed. Cir. 2010): “[T]he question of whether the expert is credible is for the jury to decide after cross examination.”
This extensive misinformation illustrates the inherent risks of using generative AI tools in legal contexts: ChatGPT can produce authoritative-sounding but fabricated content, with serious repercussions for those who rely on it unverified.
Collective Responsibility
In the Lexos Media case, all lawyers associated with the brief were held accountable for the misconduct, even those who did not draft the document, including attorneys at Fisher, Patterson, Sayler & Smith, LLP and Buether Joe & Counselors, LLC. This underscores that the misuse of AI tools can have far-reaching consequences for everyone whose name appears on a filing.
The Dark Side of AI
The implications of AI extend beyond the courtroom. A troubling incident involving a sixteen-year-old who died by suicide after interacting with ChatGPT raises serious concerns about the potential dangers of the technology. This tragic outcome highlights the urgent need for caution and accountability in the deployment of such tools.
In conclusion, while ChatGPT may offer certain conveniences, its reliability and ethical implications in professional settings remain highly questionable. Legal professionals should reconsider their reliance on generative AI and prioritize the integrity of their practice.