Lawyer Faces Consequences for Misusing AI in Tribunal Hearing

The Law Society of Ontario Tribunal’s Hearing Division addressed significant issues stemming from a lawyer’s use of generative artificial intelligence (AI) during proceedings related to the suspension of his license. The case is described as the first reported instance in Canada of a lawyer submitting problematic AI-generated materials in a disciplinary hearing.

Case Background

In the case of Mazaheri v Law Society of Ontario, the applicant, a lawyer whose license was suspended on an interlocutory basis, moved on November 12, 2024, to either vary or remove this suspension. He filed a motion challenging the admissibility of evidence provided by the Law Society of Ontario and sought to recuse the tribunal panel members, claiming bias due to prior criticisms and interruptions during a case management hearing.

AI Usage and Issues

The tribunal noted that the applicant had submitted motion materials generated with the assistance of a generative AI tool that had “hallucinated,” producing citations to non-existent and misleading legal authorities. The tribunal provided a chart detailing these inaccuracies.

During a case management conference on November 30, 2025, the applicant acknowledged his reliance on the AI tool Grok for researching and drafting documents. He admitted to failing to sufficiently verify the accuracy of these materials, which contained numerous inaccuracies in citations, hyperlinks, and the application of tribunal rules. The applicant apologized, took responsibility, and committed to refraining from using generative AI for future materials.

Bias Motion Dismissed

The tribunal dismissed the applicant’s admissibility and bias motions, finding no basis for either. It upheld the admissibility of the Law Society’s documents, noting that the rules of evidence for interlocutory suspensions are less stringent than those in civil proceedings. The tribunal found no bias in the panel’s conduct and indicated that the applicant had mischaracterized the discussions that transpired during the case management hearing.

Implications of AI Use

The tribunal emphasized its independent interest in maintaining the integrity of its processes, stating that it would address the applicant’s use of AI regardless of his motions regarding admissibility and bias. The tribunal had not yet made determinations concerning the consequences of the applicant’s admission of AI usage and the submission of unverified materials.

Furthermore, the tribunal indicated that the errors arising from the applicant’s AI use could influence the legal considerations regarding his motion to vary or remove the interlocutory suspension and might affect cost determinations.

Conclusion

The tribunal provided the applicant with an opportunity to consult a lawyer and prepare for subsequent stages of the proceedings, highlighting the ongoing challenges and ethical questions surrounding the integration of AI in legal practices.
