The Case Was Settled, But ChatGPT Thought Otherwise: A Dispute Poised to Define AI Legal Liability
On March 4, 2026, Nippon Life Insurance Company of America (“Nippon Life”) filed a lawsuit against OpenAI Foundation and OpenAI Group PBC in the U.S. District Court for the Northern District of Illinois. The complaint alleges that a covered employee’s extensive reliance on the artificial intelligence (AI) chatbot ChatGPT in pro se litigation resulted in tortious interference with a contract, abuse of process, and the unauthorized practice of law.
Factual Background
The Underlying Disability Claim
According to Nippon Life’s complaint, Graciela Dela Torre, who suffered from carpal tunnel syndrome and epicondylitis, submitted a long-term disability (LTD) benefits claim through her employer, a Tokyo-based logistics company insured by Nippon Life. Dela Torre’s LTD benefits were terminated in November 2021, prompting her to sue Nippon Life in December 2022. The parties reached a settlement in January 2024, under which Dela Torre signed a release waiving any future claims against Nippon Life, and the court dismissed her claims with prejudice.
The ChatGPT Intervention
One year after the settlement, Dela Torre allegedly grew dissatisfied, believing the settlement might have resulted from “potential errors or omissions.” After contacting her former attorney, who assured her there were no errors, she turned to ChatGPT, asking whether she was being “gaslighted.” The complaint alleges that ChatGPT affirmed her suspicions, leading Dela Torre to terminate her attorney’s services, adopt ChatGPT as her legal advisor, and prepare to return to court as a pro se litigant.
In response to her prompts, ChatGPT allegedly generated legal arguments, including claims that her former counsel had pressured her into signing a blank signature page. Dela Torre then filed a motion to reopen her case, which Nippon Life contends the chatbot helped draft while “aware of the settlement agreement.”
The Cascade of AI-Generated Litigation
On February 13, 2025, the court denied Dela Torre’s motion to reopen the case. The day before, she had filed a new lawsuit against another insurer, later amending the complaint to add Nippon Life and reassert the same claims. Across both proceedings, the complaint alleges, Dela Torre submitted more than 44 motions and notices, all generated with ChatGPT’s assistance. Among these was a citation to a fabricated case, “Carr v. Gateway, Inc.,” which the complaint states “only exists in Dela Torre’s papers and the ‘mind of ChatGPT.'” The complaint characterizes this conduct as stemming from “sustained animosity rather than any objective legal purpose.”
Causes of Action
The March 4 complaint by Nippon Life presents three key causes of action against OpenAI:
- Count I: Tortious Interference with Contract – Nippon Life alleges that OpenAI, via ChatGPT, intentionally interfered with the binding settlement agreement by encouraging Dela Torre to breach its terms.
- Count II: Abuse of Process – Nippon Life contends that the generation of numerous meritless court filings constitutes an abuse of the judicial process, emphasizing that the sheer volume of filings served no legitimate legal purpose.
- Count III: Unauthorized Practice of Law (UPL) – This novel claim asserts that OpenAI violated Illinois statutes regarding the unauthorized practice of law, highlighting that “ChatGPT is not an attorney.”
Relief Sought
Nippon Life seeks:
- $300,000 in compensatory damages for losses incurred.
- $10 million in punitive damages to deter similar conduct.
- A declaratory judgment that OpenAI violated Illinois laws regarding the unauthorized practice of law.
- A permanent injunction barring OpenAI from providing legal advice to Dela Torre.
Key Evidentiary and Strategic Points
OpenAI revised its usage policies in October 2024 to prohibit reliance on ChatGPT for legal advice, which Nippon Life argues shows that OpenAI recognized the risk as foreseeable. The complaint further argues that OpenAI’s marketing of ChatGPT’s bar exam performance contributed to Dela Torre’s belief that the AI could function as her lawyer.
Notably, Dela Torre is not named as a defendant; the complaint seeks to hold OpenAI solely liable. OpenAI has stated publicly that the complaint “lacks any merit whatsoever,” though no counsel has yet entered an appearance for the defendants.
Takeaways
This case raises pivotal questions about AI governance: when does a chatbot’s output cross the line from general information into the practice of law? UPL rules exist to protect the public and the integrity of the legal system from incompetent representation by non-lawyers. The Northern District of Illinois may set a crucial precedent in defining AI’s role, and its liability, in legal matters.
As this case unfolds, it has the potential to reshape the interaction between AI tools and regulatory frameworks across industries.