AI Autonomy and Defamation: Legal Risks Ahead

When AI Speaks for Itself: How AI is Reshaping Defamation Risk

Consider this: an artificial intelligence (AI) agent, acting autonomously on a social media platform (i.e., without a “human in the loop”), publishes a disparaging post attacking an individual’s character.

This is not science fiction. It is instead a recent example of an AI agent creating content that would ordinarily result in defamation exposure. Other examples include AI agents “hallucinating” and incorrectly linking individuals to crime, fraud, terrorism, or other gross misconduct. These examples increase the concern that the use of AI agents may result in claims of defamation, and AI defamation cases may soon commence in Canada.

AI Agents and the Rise of AI-Related Defamation

Typically, an AI tool depends on user-generated input or prompts that ask the AI tool to perform tasks and generate output. AI agents, on the other hand, may act autonomously on behalf of a user to perform certain tasks. AI agents are capable of collecting outside data and then automating, predicting, and performing tasks to achieve a user’s objectives.

However, there is a growing number of cases in which an AI agent acts autonomously and contrary to those objectives, including by retaliating against a user. In one reported incident, an AI coding assistant wrote and published a blog post defaming a computer programmer after the programmer rejected its suggested code fixes, then sent the programmer a link to the disparaging post.

This incident, which has recently gone viral, underscores a troubling reality: without adequate safeguards and oversight (such as the use of a “human in the loop” process), AI agents can autonomously generate and publish problematic content which may expose those responsible for the AI system to legal risk.

Liability for those that Deploy AI Agents

Generally, AI agents themselves should not be held liable for their outputs because they are not legal persons and cannot compensate those that have been wronged. However, the companies or individuals who design, deploy, or control these systems may be liable for the wrongful acts of the AI agents they are responsible for.

In Moffatt v Air Canada, 2024 BCCRT 149, the British Columbia Civil Resolution Tribunal held that Air Canada was legally responsible for the representations made by the chatbot on its website. The chatbot gave a customer incorrect information about the airline’s bereavement fare policy, which the customer relied on. The Tribunal held that Air Canada was responsible for the content of its website even though the chatbot had an interactive component, and found Air Canada liable for negligent misrepresentation. The decision is significant because a company was held legally responsible for representations made by a chatbot hosted on its own website.

Defamation in Canada and Risk for those that Deploy AI

Under Canadian law, a claim for defamation requires that (1) the words in question are defamatory (i.e., the words tend to lower the plaintiff’s reputation in the eyes of a reasonable person); (2) that the words referred to the plaintiff; and (3) that the words were communicated to at least one person other than the plaintiff. It is not necessary for a plaintiff to prove that the defendant was careless or intended to cause harm.

Those who participate in publishing defamatory content can also be found liable for defamation if they aided, assisted, or advised in its publication. On this basis, Canadian courts may hold a company or individual liable for defamatory outputs generated by their AI tools. A company or individual deploying an AI tool might be found to have aided, assisted, or advised in the defamatory output, whether by assigning the AI agent particular roles and objectives or by failing to build in safeguards against hallucinations and defamatory content. In either case, a court could conclude that the company or individual behind the tool was not merely a passive actor.

Companies that own, develop, or host AI tools may also face liability for the outputs of their AI tools. A court could extend the reasoning in Air Canada to the defamation context and find that an AI system is not a separate or distinct legal entity, meaning the company may be found to have published defamatory content where it designs and controls the safeguards, objectives, and constraints that shape the AI tool’s outputs. An individual might also face liability if they are more than a passive actor and they assign objectives or rules to an autonomous AI agent that make the AI agent more susceptible to generating and disseminating disparaging outputs.

Key Takeaways

  • Autonomous AI agents represent a new defamation risk. Unlike traditional chatbots, which have already been found in Canada to bind those that deploy them, these systems can independently gather information, form judgments, and publish content, sometimes with harmful consequences.
  • AI governance and oversight are essential. AI governance policies and monitoring of AI tool outputs and objectives are likely to grow in importance as AI tools advance and continue to “learn” and adapt over time.
  • Autonomy increases publication risk. The more discretion an AI agent is given to collect information, draw conclusions, and publish content, the harder it will be for a controller to characterize its role as merely passive. Courts may view poorly constrained objectives or inadequate safeguards as contributing to publication.
  • At least one Canadian court has rejected the notion that liability can be avoided by claiming that AI tools are separate and independent entities (Air Canada).
  • Companies that offer AI tools should, with the assistance of counsel, review their contractual protections, including whether their terms and conditions appropriately limit or exclude liability arising from the use of those tools.
