AI Vendors Weigh in on Governance and Regulatory Issues
Health systems and startups are navigating complex governance and regulatory challenges as they integrate new artificial intelligence (AI) tools. A recent webinar hosted by a consulting firm featured executives from startups focused on AI governance in health systems. They noted that while many health systems have the expertise to monitor machine learning models, they often lack the infrastructure to do so at scale.
Current State of AI Governance
The webinar opened with a presentation on the current governance landscape by a partner at the consulting firm, covering stalled attempts at AI legislation in Congress and governance models developed by industry associations. For example, the National Association of Insurance Commissioners has released a model bulletin on payers’ use of AI, which many states have adopted. The bulletin outlines how payers should engage with AI and establish governance processes.
Furthermore, the Joint Commission and the Coalition for Health AI have proposed guidance on adopting best practices for AI in healthcare. This guidance encompasses recommendations on policies and governance structures, patient privacy, data security, risk assessments, and education on AI usage. It also suggests that healthcare providers include specific provisions in contracts with third-party vendors to comply with data security standards and responsibilities for ongoing monitoring.
Reporting and Best Practices
To enhance safety, healthcare organizations are encouraged to implement a process for voluntary, confidential reporting of AI safety events to relevant organizations. The guidance also emphasizes best practices in AI governance, such as risk-based management of third parties and assessment protocols for both internally developed and purchased AI tools.
Challenges in Implementation
One panelist noted that only a limited share of hospital systems have the resources to build a comprehensive, real-time monitoring system for AI tools. Hospitals’ initial response to the first release of the guidance was overwhelmingly cautious, given the significant monitoring effort it calls for. Most hospitals are currently focusing on low-risk AI applications, such as chart review and radiology triage, where a human remains in the loop for quality control.
Panelists expect AI use cases in healthcare to expand over the next several years as trust in the technology grows and more clinical benefits are demonstrated. Startups are building infrastructure that lets healthcare organizations and AI developers collaborate effectively, providing structured assessments and continuous monitoring.
Regulatory Considerations
Recent discussions also touched on proposed legislation, such as the SANDBOX Act, which would create a regulatory sandbox for AI innovations by allowing companies to request waivers from certain regulations. However, panelists expressed skepticism about this approach, suggesting that it might undermine trust in innovations rather than foster it. They emphasized instead giving organizations the tools they need to confidently assess AI products.
In conclusion, the landscape of AI in healthcare is evolving, with a pressing need for robust governance frameworks and regulatory clarity. As stakeholders work to address these challenges, there remains a significant opportunity for innovation and improvement in patient care through AI technologies.