AI Accountability: Who Bears the Burden of Algorithmic Errors?

In a matter of months, generative artificial intelligence (AI) has shifted from an emerging technology to a mainstream business tool, with applications such as ChatGPT driving widespread adoption across industries. As organizations increasingly experiment with AI to enhance efficiency and decision-making, regulators around the world are working to balance innovation with accountability.

The rapid pace of deployment raises complex questions about governance, oversight, and, critically, liability. When an algorithm makes a mistake, who is responsible—the developer or the deployer? This article examines how existing and proposed legal frameworks address that question and how Canadian law is adapting to the realities of autonomous decision-making.

A. (Lack of) AI Regulation

Canada’s proposed Artificial Intelligence and Data Act (AIDA), once expected to become the country’s first comprehensive AI law, died on the Order Paper when Parliament was prorogued ahead of the 2025 federal election. As of December 2025, Canada has no single, overarching statute dedicated to the regulation of AI. In the absence of comprehensive AI legislation, oversight of AI technologies has fallen to existing laws, particularly privacy statutes, which now serve as an imperfect proxy for AI regulation.

The Honourable Evan Solomon, Canada’s first Minister of Artificial Intelligence and Digital Innovation, recently emphasized the federal government’s continued commitment to fostering innovation, noting that “AI is a powerful driver of growth, productivity, and innovation. It’s key to making sure Canada not only competes—but leads—on the global stage.”

While the creation of a dedicated ministry signals Canada’s recognition of AI’s transformative potential, the absence of a clear legislative framework leaves legal professionals and policymakers to rely on a patchwork of existing statutes to address emerging risks. Until comprehensive legislation is enacted, questions of accountability, privacy, and ethical use will continue to be governed by laws not designed for the age of AI.

B. Liability from the Use of AI

As AI technology transforms industries (from finance and healthcare to legal services and beyond), organizations are increasingly grappling with how traditional doctrines of negligence, product liability, and fiduciary duty apply to the deployment and use of AI. Organizations deploying AI can generally expect to bear responsibility for the outcomes of its use, underscoring the importance of careful governance, risk management, and contractual safeguards.

In Moffatt v. Air Canada, a customer seeking a bereavement fare relied on an AI chatbot embedded in Air Canada’s website, which incorrectly advised that he could purchase tickets at full price and apply for the bereavement discount retroactively. Acting on this misinformation, the customer bought full-fare tickets, only for Air Canada to refuse the retroactive discount. Before the BC Civil Resolution Tribunal, Air Canada disclaimed responsibility, contending that the chatbot operated independently and that the airline could not be held liable for its “agent’s” representations. The Tribunal rejected that defense and found Air Canada liable for negligent misrepresentation.

This decision underscores that an organization cannot escape liability by characterizing an AI tool as an autonomous entity detached from its own operations. Where AI makes incorrect predictions or decisions, legal liability may ultimately rest with the deployer of the technology.

In another case, the Information and Privacy Commissioner of Ontario found that McMaster University’s use of an AI-enabled exam proctoring tool contravened Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA), in part because the university failed to give students adequate notice of the personal information being captured during exams. The ruling emphasizes that organizations deploying AI must rigorously assess their data collection practices and ensure that end-users receive clear, informed notice.

C. Contracting for AI

To mitigate some of the risks discussed above, organizations should thoroughly examine AI-vendor contracts to ensure that critical issues such as privacy, data governance, regulatory compliance, and liability allocation are addressed in clear, enforceable terms.

When contracting for AI solutions, purchasers must ensure agreements clearly address data ownership and privacy obligations, delineate intellectual property rights, and appropriately allocate liability for algorithmic errors or data breaches. Governance provisions, such as audit rights and change management protocols, help safeguard operational continuity and legal compliance.

Key contract terms for organizations to consider include:

  • Intellectual property ownership
  • Data privacy and security obligations
  • Regulatory compliance and certification
  • Liability, indemnity, and insurance
  • Performance and service level agreements
  • Transparency and ethical safeguards
  • Training and change management
  • Termination and transition rights

D. Conclusion

As AI technology reshapes the business landscape, the need for organizations to establish a resilient governance framework for its use has never been greater. With the regulation of AI still a patchwork of legacy statutes and non-binding policy principles, organizations adopting the technology must take a cautious and informed approach.

As the cases above illustrate, liability for an algorithm’s mistakes will often fall on the organization that deploys the technology. By building a principles- and risk-based AI compliance framework and conducting privacy and ethics impact assessments, organizations can capture the benefits of AI while mitigating its inherent risks.
