AI Hallucinations, Sanctions, and Context: What a Florida Disciplinary Case Really Teaches
There is no doubt that artificial intelligence now offers a powerful advantage for high-level legal work. With generative AI widely available, legal scholars, technologists, and product developers have demonstrated how these tools, when used properly, can enhance human performance. AI excels at accessing, organizing, and processing vast volumes of information; attorneys and judges contribute judgment, experience, empathy, and ethical responsibility. The combination can be extremely effective.
However, many dedicated professionals have overlooked a clear and relentless truth: AI can be wrong; the human user remains fully responsible; and verification is not optional. Responsible lawyers must check every citation and statement in a legal filing against original, authoritative sources. There is no shortcut for this obligation.
Failure to adhere to this basic discipline has produced a growing number of cases involving AI “hallucinations,” i.e., fabricated or inaccurate citations and statements appearing in court filings. Despite widespread publicity, judicial warnings, ethical guidelines, and sanctions, the problem has not decreased. A publicly maintained database tracking judicial decisions related to AI-generated hallucinations reports over 500 cases in U.S. courts at the time of writing. Faced with repeated violations, courts have increased sanctions to deter this behavior.
The Case in Context: What the Neusom Case Is and Is Not
The Florida Bar case against the attorney in Neusom is not a clean example of a single AI error leading to suspension. The Florida Supreme Court’s order is brief and largely procedural; the substantive findings appear in the Bar’s complaint and the referee’s report. Those materials describe an attorney who repeatedly ignored court orders, reasserted arguments the court had already rejected, misrepresented legal authority, and failed to correct fabricated citations even after opposing counsel identified the deficiencies.
The hallucinated citations in Neusom corroborate the broader behavior; they do not define it. The record reflects a sustained pattern of misconduct. Beyond inaccurate and fabricated citations, the referee found repeated violations of local rules, improper labeling of pleadings, improper attempts to relitigate jurisdictional issues, and the filing of a bankruptcy petition in bad faith to avoid eviction. The case also involved prior federal sanctions for misconduct and misrepresentation. In that context, the AI-generated hallucinations were not the trigger for discipline; they served as corroborating evidence of professional failings already established by the broader conduct.
Unintended Consequences: AI Literacy, Deterrence, and the Risk of Overcorrection
Bar associations publish summaries of disciplinary cases to remind lawyers of their professional obligations and to promote education and deterrence. When those decisions are cited without context, however, the risk is not underenforcement but overcorrection.
Neusom illustrates that courts and disciplinary authorities will scrutinize negligent or improper use of generative AI. Such scrutiny is both appropriate and necessary, but mischaracterizing the case risks a wider negative effect: growing distrust or avoidance of AI tools. That reaction would be counterproductive. Attorneys and judges who take the time to understand how generative AI systems work can use these tools safely and effectively.
When Disciplinary Cases Are Remembered as Headlines Instead of Full Readings
This observation is not a criticism of bar publications; it points to a subtler concern. When sanctions for AI misconduct are cited without context, lawyers may conclude that any hallucination threatens their license. That perception discourages engagement, learning, and transparency precisely when the profession most needs informed and responsible adoption of AI tools.
Lessons for Judges
The central lesson of Neusom is not that using generative AI in legal drafting justifies suspension when errors occur. It is that courts and disciplinary authorities will closely examine an attorney’s conduct when AI-generated errors appear as part of a broader pattern of disregard for professional obligations.
Sanctions for AI-related errors must therefore depend not only on the error itself but also on context, intent, repetition, and corrective behavior. Treating Neusom as a standalone rule risks erasing those distinctions and converting negligence into presumed bad faith.
A Fault-Based Framework for Sanctioning AI Hallucinations
To avoid an outcome-based escalation and inconsistent discipline, courts need a principled and repeatable way to assess AI-related misconduct. A fault-based framework allows firm responses where necessary while preserving proportionality and fairness in cases of isolated AI-related errors.
Sanctions for AI-related errors must be based on fault, not just on result. Fabricated citations and inaccurate statements of law impose unnecessary costs and can threaten just outcomes. The ethical obligations of competence, candor, and diligence are clear, and verifying citations before filing is a requirement every attorney must meet.
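To make the distinction concrete, the sketch below encodes the factors discussed here (intent, repetition, corrective behavior, and the broader pattern of conduct) as a rough ordering of fault tiers. It is a purely illustrative model in Python, not a legal standard; every field name and threshold is a hypothetical stand-in for judgment that, in practice, cannot be mechanized.

from dataclasses import dataclass

@dataclass
class AIErrorIncident:
    # Hypothetical fields standing in for the fault factors discussed
    # above; real disciplinary judgment cannot be reduced to these flags.
    filed_without_verifying: bool          # recklessness about accuracy
    prior_warnings_or_sanctions: int       # repetition
    corrected_promptly_when_flagged: bool  # corrective behavior
    broader_misconduct: bool               # e.g., ignoring court orders

def fault_tier(incident: AIErrorIncident) -> str:
    """Map an AI-citation incident to a rough fault tier.

    The point of a fault-based framework: the same hallucinated
    citation lands in different tiers depending on intent, history,
    and the attorney's response once the error surfaces.
    """
    if incident.broader_misconduct or incident.prior_warnings_or_sanctions >= 2:
        # The Neusom posture: hallucinations corroborate a wider pattern.
        return "pattern of misconduct: formal discipline"
    if incident.filed_without_verifying and not incident.corrected_promptly_when_flagged:
        return "reckless and uncorrected: sanctions warranted"
    if not incident.corrected_promptly_when_flagged:
        return "negligent: monetary sanction or mandatory education"
    return "isolated, promptly corrected error: correction and warning"

An isolated, promptly corrected error falls to the lowest tier; an outcome-based rule would collapse all four branches into one, which is precisely the escalation this framework is meant to avoid.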
Education, Competence, and the Future
Any discussion of sanctions for AI hallucinations would be incomplete without acknowledging a deeper issue. The legal profession is still struggling with AI education. Generative AI is not a passing phenomenon, and lawyers and judges will continue to encounter it in practice. Institutions have a responsibility to ensure that legal professionals understand how these tools work and how to use them responsibly.
The solution is not only tougher sanctions but better education accompanied by clear expectations. Avoiding hallucinations requires understanding the limitations of AI systems and keeping a human firmly engaged, verifying every citation and factual statement against original sources before filing. These obligations apply regardless of whether the work is produced by a junior associate, a paralegal, or an AI tool.
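As a concrete illustration of what “verify before filing” can look like as a workflow step, the Python sketch below extracts citation-like strings from a draft so that each one can be checked by a human against the original authority. The citation pattern is deliberately simplified and the function names are hypothetical; nothing here substitutes for reading the cited source. It only produces a checklist.

import re

# Deliberately simplified pattern for common U.S. reporter citations,
# e.g., "598 U.S. 617", "34 F.4th 1196", "278 So. 3d 671"; real
# citation formats are far more varied than this.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?: Supp\.)? ?\d?[a-z]{0,2}|So\. ?\d?[a-z]?)\s+\d{1,5}\b"
)

def citations_in(draft_text: str) -> list[str]:
    """Collect unique citation-like strings from a draft filing.

    These are candidates only: a human must confirm each one against
    the original, authoritative source before signing and filing.
    """
    return sorted(set(CITATION_RE.findall(draft_text)))

def unverified(draft_text: str, verified: set[str]) -> list[str]:
    """Return the citations not yet checked off by a human reviewer."""
    return [c for c in citations_in(draft_text) if c not in verified]

A lawyer could run unverified(draft, checked) before signing and then confirm each remaining citation in an authoritative database; the safeguard is the human reading, not the script.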
Conclusion
The rise of AI-generated hallucinations in court filings presents a real and serious challenge to the legal profession. Courts and disciplinary authorities are right to respond firmly to conduct that undermines candor, competence, and the integrity of the judicial process. Sanctions remain an essential tool for deterrence and correction.
However, discipline works best when grounded in context and fault. Neusom, properly understood, reinforces that principle rather than undermining it, illustrating how AI-related errors can corroborate broader misconduct. Treating it differently risks an outcome-based escalation disconnected from intent, proportionality, and established disciplinary norms.
With generative AI increasingly integrated into legal practice, the profession’s task is twofold: to enforce existing ethical obligations with clarity and consistency, and to ensure that lawyers and judges are equipped to meet those obligations through meaningful education and oversight.