Strengthening AI Regulations in Healthcare

Holes in Federal AI Healthcare Regulation Should Be Patched

As the use of artificial intelligence becomes more prevalent in healthcare settings, experts are calling for strengthened regulations surrounding the technology.

The Growing Role of AI in Healthcare

In recent years, AI has emerged as a popular tool in both medical research and clinical environments. Experts from various institutions have voiced concerns about the current regulatory frameworks surrounding AI technology and offered potential solutions to address the gaps.

Current FDA Approval Pathways

Before new medical devices are marketed or distributed, they must be deemed “safe and effective” by the Food and Drug Administration (FDA). However, current FDA approval pathways, such as the 510(k) pathway, were developed decades ago for traditional medical devices and do not map cleanly onto modern AI-enabled devices. According to experts, the 510(k) pathway accounts for the clearance of approximately 98% of all AI-enabled devices.

Kaustav Shah, an internist at the medical school and a National Clinician Scholars Program fellow, explained that the 510(k) pathway is intended for devices similar to preexisting market options that have already proven to be safe and effective. As a result, the evidence required for clearance is far less robust than what would be demanded of a genuinely new device.

Concerns About AI Device Performance

Experts like Gary Weissman, an assistant professor of pulmonary and critical care medicine, have expressed concerns about the limitations of the current regulatory process. Weissman stated that the existing framework has significant “holes” in it, particularly regarding how the performance of AI devices changes over time and varies between locations, a phenomenon often referred to as “shift” or “drift.”

The Need for a Robust Regulatory Framework

Eric Bressman, assistant professor of medicine in the division of hospital medicine and senior fellow at the Leonard Davis Institute, emphasized the necessity for a more expansive and robust regulatory framework. He proposed a pathway that requires AI systems to be monitored before they can receive clearance, incorporating a “supervisory model” that leads to graduated autonomy.

Bressman’s model resembles the training process that humans undergo as they approach independent practice, fostering a safer environment for AI implementation.

Institutional Governance and Local Regulation

Shah argued that institutions should adopt their own AI governance processes to determine the appropriate level of evaluation, rigor, and transparency. These processes should be established regardless of federal or state requirements.

On a broader scale, Bressman advocates for an FDA-centered framework that includes partnerships with state medical boards. He also stresses the importance of local regulation to keep pace with rapid technological advancements, noting that the federal government may not be equipped to handle every update or new market introduction.

Legislative Initiatives

In Pennsylvania, House Bill 1925 has been introduced to require insurers, hospitals, and clinicians to provide transparency to patients regarding AI usage. The bill aims to ensure that an “ultimate individualized assessment” is conducted by a human and mandates evidence that bias and discrimination, already prohibited by law, have been minimized in the use of AI.

Weissman commended the bill as a good start but pointed out that it does not address device regulation in decision support systems, which remains a critical gap.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...