The Case Was Settled, But ChatGPT Thought Otherwise: A Dispute Poised to Define AI Legal Liability

On March 4, 2026, Nippon Life Insurance Company of America (“Nippon Life”) filed a lawsuit against OpenAI Foundation and OpenAI Group PBC in the U.S. District Court for the Northern District of Illinois. The lawsuit claims that a covered employee’s extensive use of the artificial intelligence (AI) chatbot ChatGPT for pro se litigation led the chatbot to engage in tortious interference with a contract, abuse of process, and the unauthorized practice of law.

Factual Background

The Underlying Disability Claim

According to Nippon Life’s complaint, Graciela Dela Torre, who suffered from carpal tunnel syndrome and epicondylitis, submitted a long-term disability (LTD) benefits claim through her employer, a Tokyo-based logistics company insured by Nippon Life. Dela Torre’s LTD benefits were terminated in November 2021, prompting her to sue Nippon Life in December 2022. The parties reached a settlement in January 2024, under which Dela Torre signed a release waiving any future claims against Nippon Life. The court then dismissed her claims with prejudice.

The ChatGPT Intervention

One year after the settlement, Dela Torre allegedly grew dissatisfied, believing the settlement might have resulted from “potential errors or omissions.” After contacting her former attorney, who assured her there were no errors, she turned to ChatGPT, asking if she was being “gaslighted.” The complaint alleges that ChatGPT affirmed her suspicions, leading Dela Torre to terminate her attorney’s services and rely on ChatGPT as her legal advisor, ultimately preparing to reenter the court system as a pro se litigant.

In response to her prompts, ChatGPT allegedly generated legal arguments, including claims that her former counsel had pressured her into signing a blank signature page. Dela Torre then filed a motion to reopen her case, even though, Nippon Life contends, the chatbot was “aware of the settlement agreement.”

The Cascade of AI-Generated Litigation

On February 13, 2025, the court denied Dela Torre’s motion to reopen the case. The day before, she had filed a new lawsuit against another insurer, later amending the complaint to add Nippon Life and reassert the same claims. Across both proceedings, the complaint alleges that Dela Torre filed more than 44 motions and notices, all generated with ChatGPT’s assistance. Among these was a citation to a fabricated case, “Carr v. Gateway, Inc.,” which the complaint states “only exists in Dela Torre’s papers and the ‘mind of ChatGPT.'” The complaint characterizes this conduct as stemming from “sustained animosity rather than any objective legal purpose.”

Causes of Action

The March 4 complaint by Nippon Life presents three key causes of action against OpenAI:

  • Count I: Tortious Interference with Contract – Nippon Life alleges that OpenAI, via ChatGPT, intentionally interfered with the binding settlement agreement by encouraging Dela Torre to breach its terms.
  • Count II: Abuse of Process – Nippon Life contends that the creation of numerous meritless court filings constitutes an abuse of the judicial process, emphasizing that the volume of filings served no legitimate legal purpose.
  • Count III: Unauthorized Practice of Law (UPL) – This novel claim asserts that OpenAI violated Illinois statutes regarding the unauthorized practice of law, highlighting that “ChatGPT is not an attorney.”

Relief Sought

Nippon Life seeks:

  • $300,000 in compensatory damages for losses incurred.
  • $10 million in punitive damages to deter similar conduct.
  • A declaratory judgment that OpenAI violated Illinois laws regarding the unauthorized practice of law.
  • A permanent injunction barring OpenAI from providing legal advice to Dela Torre.

Key Evidentiary and Strategic Points

OpenAI revised its usage policies in October 2024 to prohibit reliance on ChatGPT for legal advice, which Nippon Life argues shows OpenAI recognized the foreseeable risks. The complaint further argues that OpenAI’s marketing of ChatGPT’s bar exam performance contributed to Dela Torre’s belief that the AI could function as her lawyer.

Notably, Dela Torre is not named as a defendant in the complaint, which focuses liability solely on OpenAI. In response, OpenAI has stated that the complaint “lacks any merit whatsoever,” and no formal legal representation has been established for the defendants.

Takeaways

This case raises pivotal questions about AI governance: when does a chatbot’s output cross the line from general information into the practice of law? Unauthorized-practice-of-law rules exist to protect the public and the integrity of the legal system from incompetent non-lawyer practice. The Northern District of Illinois may set a crucial precedent in determining AI’s role and liability in legal matters.

As this case unfolds, it has the potential to reshape the interaction between AI tools and regulatory frameworks across industries.
