Experts Call for Urgent Action on AI Regulation in Canada

‘The Trust Gap is a Real One’: Experts Advise Feds to Course Correct on AI

Federal Members of Parliament (MPs) are grappling with the complexities of regulating artificial intelligence (AI), weighing the threats the technology poses to Canadian jobs, cybersecurity, and data sovereignty against its economic opportunities.

Study Launched by the Standing Committee on Industry and Technology

At the behest of the Bloc Québécois, the Standing Committee on Industry and Technology has launched a study into how AI should be regulated. The issues before the committee include:

  • The scientific challenge of ensuring AI models are honest.
  • The legal question surrounding data sovereignty.
  • The political question of AI multilateralism.
  • The immediate debate regarding the establishment of a dedicated AI committee to monitor the fast-evolving technology.

Call for Public Consultation

Experts have responded positively to the idea of creating a government AI committee, yet a recurring theme has been the federal government’s failure to engage in adequate public consultation on AI issues. Colin Bennett, a professor emeritus at the University of Victoria, stated, “The trust gap is a real one, and it needs to be closed.”

A recent KPMG study revealed that while 50% of Canadians surveyed approve of or accept the use of AI, nearly 80% express concerns regarding potential negative outcomes. Furthermore, 75% believe that AI regulation is necessary.

Critique of Recent AI Consultations

The “what we heard” report produced by Canada’s AI ministry, which itself used AI to analyze responses, was criticized for lacking genuine insight. Michael Geist, Canada Research Chair in Internet and E-Commerce Law, noted that the consultations felt more like “what we wanted you to think we heard.” He emphasized that “if we’re going to have confidence in these consultations,” they must extend beyond a mere 30-day effort.

Recommendations for Improved Engagement

Bennett and Université de Montréal professor Yoshua Bengio advocated for citizens’ assemblies to gather feedback from the Canadians who will be affected by AI technologies, arguing that AI’s pervasive nature demands a fundamentally different consultation process.

Data Sovereignty Concerns

Discussions also centered around ensuring Canadian sovereignty over its data. Geist suggested that robust privacy laws should be prioritized, rather than merely focusing on Canadian ownership of companies. He pointed out that smaller, Canadian-owned firms often lack the capacity to provide necessary services at scale, leading to potential vulnerabilities.

Geist elaborated, “As long as a company has connections to a foreign country, such as the United States, Canadian privacy laws may not be sufficient to guarantee their application.”

The Challenges of Autonomous AI

The meeting also addressed concerns regarding “agentic” or autonomous AI. Bengio highlighted that current AI systems are “not reliable and trustworthy” due to their training methods, which aim to imitate human behavior. He added that these systems often remain opaque, making it impossible for companies to guarantee their intended behavior.

Privacy Issues and Current Investigations

Bennett pointed out ongoing investigations by the Office of the Privacy Commissioner into various AI applications, emphasizing the need for a healthy skepticism towards emerging technologies. Notable cases include:

  • Investigation of ChatGPT for the non-consensual use of Canadians’ personal data.
  • Concerns regarding xAI’s Grok related to the display and sharing of inappropriate images.
  • Scraping of images by Clearview AI for facial recognition systems used by law enforcement.

Conclusion: A Call for Multilateralism

Closing the study’s testimony, Bengio emphasized the importance of multilateralism, arguing that Canada should collaborate with like-minded middle powers to establish AI guidelines that reflect shared values and concerns.

He concluded that Canada must strive to lead in safe, competent AI development to ensure it remains a player on the global stage rather than becoming an afterthought. Alongside national laws and international treaties, it is crucial for researchers to design AI that adheres to legal and moral standards.
