Holes in Federal AI Healthcare Regulation Should Be Patched
As the use of artificial intelligence becomes more prevalent in healthcare settings, experts are calling for strengthened regulations surrounding the technology.
The Growing Role of AI in Healthcare
In recent years, AI has emerged as a popular tool in both medical research and clinical environments. Experts from various institutions have voiced concerns about the current regulatory frameworks surrounding AI technology and offered potential solutions to address the gaps.
Current FDA Approval Pathways
Before new medical devices are marketed or distributed, they must be deemed “safe and effective” by the Food and Drug Administration (FDA). However, current FDA approval pathways, such as the 510(k) pathway, were developed decades ago for traditional medical devices and do not adequately account for modern AI-enabled devices. According to experts, this pathway grants clearance to approximately 98% of all AI-enabled devices.
Kaustav Shah, a medical school internist and National Clinician Scholars Program fellow, explained that the 510(k) pathway is intended for devices similar to options already on the market that have proven safe and effective. As a result, the data and evidence required for clearance are far less robust than what a genuinely novel device would demand.
Concerns About AI Device Performance
Experts like Gary Weissman, an assistant professor of pulmonary and critical care medicine, have expressed concerns about the limitations of the current regulatory process. Weissman stated that the existing framework has significant “holes” in it, particularly around the tendency of AI devices’ performance to change over time and to vary from one location to another, a phenomenon often referred to as “shift” or “drift.”
The Need for a Robust Regulatory Framework
Eric Bressman, assistant professor of medicine in the division of hospital medicine and senior fellow at the Leonard Davis Institute, emphasized the necessity for a more expansive and robust regulatory framework. He proposed a pathway that requires AI systems to be monitored before they can receive clearance, incorporating a “supervisory model” that leads to graduated autonomy.
Bressman’s model resembles the training process that humans undergo as they approach independent practice, fostering a safer environment for AI implementation.
Institutional Governance and Local Regulation
Shah argued that institutions should adopt their own AI governance processes to determine the appropriate level of evaluation, rigor, and transparency. These processes should be established regardless of federal or state requirements.
On a broader scale, Bressman advocated for an FDA-centered framework that includes partnerships with state medical boards. He also stressed the importance of local regulation in keeping pace with rapid technological advancements, noting that the federal government may not be equipped to handle every update or new market introduction.
Legislative Initiatives
In Pennsylvania, House Bill 1925 has been introduced to require insurers, hospitals, and clinicians to provide transparency to patients regarding AI usage. The bill aims to ensure that an “ultimate individualized assessment” is conducted by a human and mandates evidence that bias and discrimination, already prohibited by law, have been minimized in the use of AI.
Weissman commended the bill as a good start but pointed out that it does not address device regulation in decision support systems, which remains a critical gap.