Challenges of Implementing Regulated AI in Drug Development

This AI Compliance CEO Underscores That Deploying Regulated AI is ‘Incredibly Difficult’

The FDA hails the recent rollout of its internal AI tool, Elsa, as a major move to tackle the crushing weight of regulatory review, where documents thousands of pages long are commonplace. However, reports of a rushed, buggy rollout suggest the agency may face significant challenges as it builds out the necessary infrastructure.

One regulated AI expert notes, “I think it’s just really, really hard to make regulated AI work well.” This statement encapsulates the complexity involved in integrating AI within regulatory frameworks, particularly given the scale and intricacy of the required documentation.

Beyond the Context Window

The issue is not merely the size of the documents but also the fundamental architecture of the AI system itself. Early reports suggest that Elsa is largely based on a large language model (LLM). However, a more resilient strategy may involve a neuro-symbolic framework, which combines the pattern-recognition power of modern neural networks with the structured, rule-based logic of traditional symbolic AI. This hybrid approach could effectively break down the monolithic review process into a series of smaller, verifiable steps, much like a flowchart, allowing generative AI to execute specific, smaller-context tasks effectively.
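To make the flowchart analogy concrete, here is a minimal sketch in Python of how such a decomposition might work: each step pairs a small-context generative task with an explicit symbolic check that must pass before the review advances. The step names, the stub model call, and the validation rule are illustrative assumptions, not details of Elsa or any particular system.

```python
# Minimal sketch (not the FDA's actual architecture): a symbolic "flowchart"
# decomposes a large review into small, verifiable steps, each handled by a
# narrow generative call and gated by an explicit rule.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewStep:
    name: str
    task: Callable[[str], str]    # neural component: small-context generation
    check: Callable[[str], bool]  # symbolic component: explicit pass/fail rule

def summarize_with_llm(section_text: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"summary({len(section_text)} chars)"

def has_source_marker(output: str) -> bool:
    # Example symbolic rule: generated output must carry a traceability marker.
    return "chars" in output  # stand-in for a real citation/traceability check

pipeline = [
    ReviewStep("summarize_safety_section", summarize_with_llm, has_source_marker),
    ReviewStep("summarize_efficacy_section", summarize_with_llm, has_source_marker),
]

def run(document_sections: dict) -> dict:
    results = {}
    for step in pipeline:
        output = step.task(document_sections.get(step.name, ""))
        if not step.check(output):
            raise ValueError(f"{step.name} failed its symbolic validation rule")
        results[step.name] = output
    return results

print(run({"summarize_safety_section": "…", "summarize_efficacy_section": "…"}))
```

Each generative call only ever sees one small section, and the review halts the moment a symbolic check fails, which is the point of the hybrid design.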

Without this structured approach, even the most sophisticated LLMs can become overwhelmed by the interconnectedness and complexity of regulatory documents, where information is scattered across thousands of pages and every detail must be traceable to its source.

The Documentation Deluge

Developing regulated products is inherently complex. To illustrate the difference between typical AI applications and regulated environments, consider a journalist’s workflow, which typically involves 10 to 20 steps from interview to publication. In contrast, developing a drug, from initial discovery through regulatory approval and manufacturing, involves vastly more steps, and nearly every one of them generates documentation that must be traceable.

This complexity is similarly evident in the medical device industry, where the common pathway, the 510(k) process, requires proving “substantial equivalence” to a predicate device. Every decision, from design to testing, generates a branching path of documentation requirements.

Data from McKinsey’s Numetrics R&D Analytics report highlights that between 2006 and 2016, the complexity of medical device software grew at a 32% compound annual growth rate while productivity increased at only 2% CAGR.
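Those figures compound dramatically. A quick back-of-envelope calculation (assuming the reported rates apply uniformly across the ten-year window) shows why the gap became untenable:

```python
# Compounding the reported CAGRs over the 2006-2016 window.
complexity_growth = 1.32 ** 10    # 32% CAGR over 10 years -> ~16x
productivity_growth = 1.02 ** 10  # 2% CAGR over 10 years  -> ~1.22x

print(f"Software complexity grew ~{complexity_growth:.1f}x")      # ~16.1x
print(f"Developer productivity grew ~{productivity_growth:.2f}x")  # ~1.22x
```

In other words, complexity grew roughly sixteen-fold over the decade while the capacity to handle it grew by about a fifth.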

A Human-AI Combination Lock

The FDA is grappling with significant workforce challenges, with recent reductions affecting multiple centers and exacerbating ongoing staffing pressures. Attrition at the agency has hovered around 13% since fiscal 2018, leaving reviewers struggling to keep pace with the sheer volume of information.

Rather than relying solely on large language models, which process text sequentially through pattern recognition, a combination with symbolic AI may prove beneficial. Symbolic AI employs explicit if-then rules to navigate complex decision trees, akin to a chess player who knows every possible move and strategy beforehand.
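To see what such explicit if-then rules look like in practice, the toy sketch below routes a hypothetical device submission through a simplified decision tree. The rules are deliberately oversimplified and are not actual FDA criteria.

```python
# Toy symbolic decision tree for routing a device submission.
# Illustrative only; these rules do not reflect actual FDA criteria.

def route_submission(device: dict) -> str:
    if device.get("has_predicate") and device.get("same_intended_use"):
        return "510(k): demonstrate substantial equivalence to the predicate"
    if device.get("risk_class") == "III":
        return "PMA: full premarket approval with clinical evidence"
    return "De Novo: request a new classification"

print(route_submission({"has_predicate": True, "same_intended_use": True}))
print(route_submission({"has_predicate": False, "risk_class": "III"}))
```

Unlike a purely neural system, every branch here is inspectable: an auditor can read the rule that produced a given outcome.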

Such explicit rules can be instrumental in applying AI to regulatory science, creating validated AI agents that complement the work of humans, whether at the agency or in industry. For instance, at the end of a pharmaceutical production line, a quality control machine tests every vial to determine its quality. Previously, a person with a master’s degree or Ph.D. would have checked each vial by hand, but that method is too inefficient to scale, which has led to automated testing machines that often leverage machine learning or AI.
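A minimal sketch of that human-AI split, with assumed thresholds and a stand-in scoring function rather than any real inspection system, might look like this:

```python
# Sketch of the human-AI split: an automated score handles clear cases,
# and only ambiguous vials are escalated to a human reviewer.
# Thresholds and the scoring function are assumptions for illustration.

def defect_score(measurement: float) -> float:
    # Stand-in for a trained model's defect probability.
    return min(1.0, max(0.0, measurement / 100.0))

def disposition(measurement: float) -> str:
    score = defect_score(measurement)
    if score < 0.2:
        return "release"              # clearly within spec
    if score > 0.8:
        return "reject"               # clearly defective
    return "escalate to human QC"     # ambiguous: a person decides

for m in (5.0, 50.0, 95.0):
    print(m, disposition(m))
```

The design point is the middle branch: automation handles the unambiguous cases at line speed, while judgment calls stay with a qualified human.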

Toward Accountable Autonomy

As document complexity continues to outpace human review capacity, the FDA may need to evolve its approach. The agency, along with others working on AI in a regulatory context, will likely need to merge neural and symbolic methodologies. The industry goal is accountable autonomy: AI that can operate independently, but only within clearly defined boundaries.

The potential of AI lies in its ability to offload complexity, but only if we can trace its decisions, validate its actions, and ensure its safety at every step. When implemented alongside the traceability infrastructure and validation protocols required in regulated industries, AI systems have demonstrated dramatic efficiency gains. For instance, one organization has reduced its software release process from a year-long endeavor to just a week.
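What that traceability could look like in code is sketched below: every automated decision is appended to an audit record before it takes effect, so a validator can later reconstruct what was decided, by which step, and on what inputs. The record fields here are assumptions, not a prescribed regulatory format.

```python
# Minimal sketch of an append-only audit trail for automated decisions.
# Field names are illustrative assumptions, not a mandated schema.

import datetime
import hashlib
import json

def audited_decision(step: str, inputs: dict, decision: str, log: list) -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        "inputs_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    log.append(record)  # append-only trail for later validation
    return decision

audit_log: list = []
audited_decision("vial_qc", {"vial_id": "A123", "score": 0.05}, "release", audit_log)
print(json.dumps(audit_log, indent=2))
```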

Ultimately, the challenge extends beyond immediate efficiency gains. As the floodgates open to the possibilities AI presents, it is clear that many manual processes could be automated. Much of the reason we do not have better medicine today is the overwhelming volume of paperwork these processes demand.
