AI Governance Challenges in Healthcare Innovation

AI Vendors Weigh in on Governance and Regulatory Issues

Health systems and startup companies are navigating complex governance and regulatory challenges as they integrate new artificial intelligence (AI) tools. A recent webinar organized by a consulting firm featured executives from startups focused on AI governance in health systems. They highlighted that while many health systems have the expertise to monitor machine learning models, they often lack the necessary infrastructure to do so at scale.

Current State of AI Governance

The webinar opened with a presentation on the current governance landscape by a partner at the consulting firm, covering stalled attempts at AI legislation in Congress and the various models that industry associations have developed to guide AI governance. For example, the National Association of Insurance Commissioners has released a model bulletin on payers’ use of AI, which many states have adopted; it sets expectations for how payers should use AI and structure their governance processes.

Furthermore, the Joint Commission and the Coalition for Health AI have proposed guidance on adopting best practices for AI in healthcare. The guidance covers recommendations on policies and governance structures, patient privacy, data security, risk assessments, and education on AI usage. It also suggests that healthcare providers write specific provisions into contracts with third-party vendors covering compliance with data security standards and responsibility for ongoing monitoring.

Reporting and Best Practices

To enhance safety, healthcare organizations are encouraged to implement a process for voluntary, confidential reporting of AI safety events to relevant organizations. The guidance also emphasizes best practices in AI governance, such as risk-based management of third parties and assessment protocols for both internally developed and purchased AI tools.
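
To make the reporting idea concrete, the sketch below defines a minimal, hypothetical schema for one such confidential safety event, along with a simple routing rule. Every field name, severity tier, and routing label here is an assumption for illustration; the guidance does not prescribe a schema.

```python
# Hypothetical schema for a confidential AI safety event report.
# All fields, tiers, and routing labels are illustrative assumptions;
# no published guidance prescribes this structure.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    NEAR_MISS = "near_miss"  # caught before reaching a patient
    NO_HARM = "no_harm"      # reached a patient; no harm observed
    HARM = "harm"            # contributed to patient harm


@dataclass
class AISafetyEvent:
    tool_name: str            # the AI tool involved, e.g. a triage model
    vendor: str               # third-party developer, if purchased
    severity: Severity
    description: str          # free-text narrative of what happened
    human_in_loop: bool       # was a clinician reviewing the output?
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    reporter_id: str | None = None  # None keeps the report anonymous


def route(event: AISafetyEvent) -> str:
    """Route a report for review based on its severity tier."""
    if event.severity is Severity.HARM:
        return "escalate-to-safety-committee"
    if event.severity is Severity.NO_HARM:
        return "review-within-7-days"
    return "aggregate-for-trend-analysis"
```

Structuring reports this way, rather than as free text alone, is what makes the risk-based management described above workable: events can be aggregated by tool, vendor, and severity to spot patterns.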

Challenges in Implementation

One panelist noted that only a small fraction of hospital systems have the resources to build a comprehensive, real-time monitoring system for AI tools. The initial hospital response to the first guidance release was overwhelmingly cautious, with many citing the significant effort that monitoring requires. Most hospitals are currently focusing on low-risk AI applications, such as chart review and radiology triage, where a human remains in the loop for quality control.
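
As an illustration of what that human oversight can look like operationally, the sketch below routes every model output through a clinician review queue, surfacing the least-confident findings first. It is a minimal sketch, not any vendor's actual design; the 0.8 threshold, the queue structure, and the study identifiers are all illustrative assumptions.

```python
# Minimal human-in-the-loop routing sketch for a low-risk use case
# such as radiology triage. The 0.8 threshold and queue design are
# illustrative assumptions, not a real vendor's implementation.
from __future__ import annotations
import heapq

REVIEW_THRESHOLD = 0.8  # below this, a finding is flagged as priority


class ReviewQueue:
    """Priority queue: least-confident predictions surface first."""

    def __init__(self) -> None:
        self._heap: list[tuple[float, int, dict]] = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def add(self, study_id: str, finding: str, confidence: float) -> None:
        item = {
            "study_id": study_id,
            "finding": finding,
            "confidence": confidence,
            "priority": confidence < REVIEW_THRESHOLD,
        }
        heapq.heappush(self._heap, (confidence, self._counter, item))
        self._counter += 1

    def next_for_review(self) -> dict | None:
        """Hand the reviewer the least-confident study next."""
        return heapq.heappop(self._heap)[2] if self._heap else None


queue = ReviewQueue()
queue.add("CT-1041", "possible intracranial hemorrhage", 0.62)
queue.add("CT-1042", "no acute finding", 0.97)
print(queue.next_for_review())  # the 0.62-confidence study surfaces first
```

Surfacing the least-confident outputs first is one common way to spend limited reviewer time where the model is most likely to be wrong, while still keeping every output in front of a human.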

Over the next several years, the use cases for AI in healthcare are expected to expand as trust in AI’s capabilities grows and more clinical benefits are identified. Startups are working to build infrastructure that lets healthcare organizations and AI developers collaborate effectively, providing structured assessments and continuous monitoring.
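
A minimal sketch of what continuous monitoring can mean in practice is shown below: comparing a model's recent prediction scores against a deployment-time baseline using the population stability index (PSI) and alerting when drift crosses a threshold. The ten-bin layout and the 0.2 cutoff are conventional but arbitrary choices, and the synthetic score distributions are purely illustrative.

```python
# Minimal drift-monitoring sketch using the population stability
# index (PSI). The 0.2 alert cutoff is a commonly cited rule of
# thumb, not a standard named by any vendor or guideline.
import numpy as np


def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    r_counts, _ = np.histogram(recent, bins=edges)
    eps = 1e-6  # avoids log(0) and division by zero in empty bins
    b = b_counts / b_counts.sum() + eps
    r = r_counts / r_counts.sum() + eps
    return float(np.sum((r - b) * np.log(r / b)))


rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)  # scores captured at go-live
recent_scores = rng.beta(3, 4, size=1_000)    # scores from the past week

drift = psi(baseline_scores, recent_scores)
if drift > 0.2:  # rule-of-thumb threshold for a significant shift
    print(f"ALERT: PSI={drift:.3f}; trigger a re-assessment of the model")
else:
    print(f"PSI={drift:.3f}; distribution shift within tolerance")
```

Checks like this are cheap to run per model but, as the panelists noted, the hard part is the surrounding infrastructure: data pipelines, thresholds, and escalation paths for hundreds of deployed tools at once.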

Regulatory Considerations

Recent discussions also touched on proposed legislation, such as the SANDBOX Act, which would create a regulatory sandbox for AI innovations and allow companies to request waivers from certain regulations. However, panelists expressed skepticism about this approach, suggesting that it might undermine trust in innovations rather than foster it. They emphasized instead the importance of giving organizations the tools they need to assess AI products with confidence.

In conclusion, the landscape of AI in healthcare is evolving, with a pressing need for robust governance frameworks and regulatory clarity. As stakeholders work to address these challenges, there remains a significant opportunity for innovation and improvement in patient care through AI technologies.
