Lawyer Admits to Using AI, Insufficiently Verifying Output in Law Society Tribunal Hearing
The Law Society of Ontario Tribunal’s Hearing Division addressed significant issues stemming from a lawyer’s use of generative artificial intelligence (AI) in proceedings related to the suspension of his license. The case is described as the first reported instance in Canada of a lawyer submitting problematic AI-generated materials in a disciplinary hearing.
Case Background
In Mazaheri v Law Society of Ontario, the applicant, a lawyer whose license had been suspended on an interlocutory basis, moved on November 12, 2024, to vary or remove the suspension. He also filed a motion challenging the admissibility of evidence filed by the Law Society of Ontario and sought the recusal of the tribunal panel members, alleging bias based on prior criticisms and interruptions during a case management hearing.
AI Usage and Issues
The tribunal noted that the applicant had submitted motion materials prepared with the assistance of a generative AI tool that had "hallucinated," producing non-existent and misleading legal authorities. The tribunal compiled a chart detailing these inaccuracies.
During a case management conference on November 30, 2025, the applicant acknowledged that he had relied on the AI tool Grok to research and draft the documents, and admitted that he had failed to sufficiently verify their accuracy. The materials contained numerous errors in citations, hyperlinks, and the application of tribunal rules. The applicant apologized, took responsibility, and committed to refraining from using generative AI in future materials.
Bias Motion Dismissed
The tribunal dismissed the applicant’s admissibility and bias motions, finding no basis for either. It upheld the admissibility of the Law Society’s documents, noting that the rules of evidence in interlocutory suspension proceedings are less stringent than those in civil proceedings. It found no bias on the panel’s part and concluded that the applicant had mischaracterized the discussions that took place during the case management hearing.
Implications of AI Use
The tribunal emphasized its independent interest in maintaining the integrity of its processes, stating that it would address the applicant’s use of AI regardless of the outcome of his admissibility and bias motions. It had not yet determined the consequences of the applicant’s admitted AI use and his submission of unverified materials.
Furthermore, the tribunal indicated that the errors arising from the applicant’s use of AI could bear on the legal considerations relevant to his motion to vary or remove the interlocutory suspension and might affect any determination of costs.
Conclusion
The tribunal provided the applicant with an opportunity to consult a lawyer and prepare for subsequent stages of the proceedings, highlighting the ongoing challenges and ethical questions surrounding the integration of AI in legal practices.