Transforming Health-AI Regulation for Safer Innovation

CIHR Funds Law Dean for Global Project on Safe Health-AI Regulation

Dean Colleen M. Flood has received $355,724 from the Canadian Institutes of Health Research (CIHR) for a global project aimed at examining how medical device regulation can better support the safe and effective use of artificial intelligence (AI) in health care.

The project is co-led by:

  • Professor Catherine Régis (Université de Montréal)
  • Professor Anna Goldenberg (The Hospital for Sick Children/University of Toronto)
  • Dr. Devin Singh (The Hospital for Sick Children)
  • Professor Teresa Scassa (University of Ottawa)

Project Overview

Flood emphasizes that health-AI has the potential to radically improve access to care, enhance quality, and support more efficient and equitable health systems. However, realizing that promise depends on addressing real and evolving risks to ensure Canadians receive safe, high-quality care.

The project, titled “Optimizing Medical Device Regulation of Artificial Intelligence,” is a four-year study focusing on how Canada’s medical device framework can evolve to respond to rapidly advancing technologies, particularly machine learning and generative AI.

Regulatory Challenges

While Health Canada has taken important steps to modernize oversight, the pace and adaptability of regulation are critical. The project will emphasize learning from other jurisdictions to identify agile regulatory approaches that can respond to evolving AI systems, which perform differently across clinical settings and introduce new forms of risk.

Flood, a leading scholar in health law and policy, notes that AI challenges traditional regulatory assumptions about static technologies. “Ensuring patient safety, public trust, and sustained innovation requires regulatory approaches that can adapt alongside technological change,” she states.

Comparative Analysis

The project will analyze and compare regulatory frameworks in:

  • Canada
  • The United States
  • The United Kingdom
  • The European Union
  • Australia
  • Brazil
  • Nigeria

Working with a global network of researchers, regulators, patient groups, Indigenous communities, and health professional organizations, the team aims to develop model laws and regulatory tools that protect patients while supporting responsible innovation. A public, online evidence base will track safety issues and regulatory responses throughout and beyond the project.

Vision for Canada

The ultimate goal is to position Canada as a global leader not only in health-AI innovation but also in the regulatory approaches that make such innovation safe, trustworthy, and scalable. Flood explains, “Innovation and regulation are interdependent — and Canada’s success depends on advancing both together.”

Upcoming Public Talk

Dean Flood will give a public talk titled “Machine M.D.: The Governance of Health-Related AI” on February 12 from 12–2 pm in Robert Sutherland Hall. This event is hosted by Queen’s School of Policy Studies.
