Challenges of Implementing Regulated AI in Drug Development

This AI Compliance CEO Underscores That Deploying Regulated AI Is ‘Incredibly Difficult’

The FDA hails the recent rollout of its internal AI tool, Elsa, as a major step toward tackling the crushing weight of regulatory review, where documents thousands of pages long are commonplace. However, reports of a rushed, buggy rollout suggest the agency may face significant challenges as it builds out the necessary infrastructure.

One regulated AI expert notes, “I think it’s just really, really hard to make regulated AI work well.” This statement encapsulates the complexity involved in integrating AI within regulatory frameworks, particularly given the scale and intricacy of the required documentation.

Beyond the Context Window

The issue is not merely the size of the documents but also the fundamental architecture of the AI system itself. Early reports suggest that Elsa is largely based on a large language model (LLM). However, a more resilient strategy may involve a neuro-symbolic framework, which combines the pattern-recognition power of modern neural networks with the structured, rule-based logic of traditional symbolic AI. This hybrid approach could break the monolithic review process into a series of smaller, verifiable steps, much like a flowchart, allowing generative AI to execute specific, smaller-context tasks effectively.
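
As a rough sketch of what that flowchart-style decomposition could look like in code, consider a pipeline of small steps where each result is gated by a symbolic check. Everything here is illustrative: the step names, the required-section rule, and the truncation standing in for an LLM summarization call are assumptions, not a description of Elsa’s architecture.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewStep:
    """One small, verifiable unit of a larger review pipeline."""
    name: str
    run: Callable[[dict], dict]      # the worker: neural (LLM call) or symbolic
    verify: Callable[[dict], bool]   # a symbolic rule that gates the result

def run_pipeline(submission: dict, steps: list) -> dict:
    """Execute steps in order; halt the moment a symbolic check fails."""
    state = dict(submission)
    for step in steps:
        state = step.run(state)
        if not step.verify(state):
            raise ValueError(f"step '{step.name}' failed verification")
    return state

# Hypothetical steps: a rule check confirms required sections exist, then a
# small-context summarization step (an LLM call in a real system) runs.
steps = [
    ReviewStep(
        name="check_required_sections",
        run=lambda s: s,
        verify=lambda s: {"protocol", "stats_plan"} <= s.keys(),
    ),
    ReviewStep(
        name="summarize_protocol",
        run=lambda s: {**s, "summary": s["protocol"][:200]},  # stand-in for an LLM
        verify=lambda s: bool(s.get("summary")),
    ),
]

result = run_pipeline({"protocol": "Phase 3 trial ...", "stats_plan": "..."}, steps)
print(result["summary"])
```

The design point is that the neural component only ever sees one bounded task at a time, while the symbolic checks make each intermediate result independently verifiable.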

Without this structured approach, even the most sophisticated LLMs can become overwhelmed by the interconnectedness and complexity of regulatory documents, where information is scattered across thousands of pages and every detail must be traceable to its source.

The Documentation Deluge

Developing regulated products is inherently complex. To illustrate the gap between typical AI applications and regulated environments, consider a journalist’s workflow, which typically involves 10 to 20 steps from interview to publication. By contrast, developing a drug, from initial discovery through regulatory approval and manufacturing, involves vastly more steps and interdependencies, each of which must be documented.

This complexity is similarly evident in the medical device industry, where the most common pathway, the 510(k) process, requires proving “substantial equivalence” to a predicate device. Every decision, from design to testing, generates a branching path of documentation requirements.

Data from McKinsey’s Numetrics R&D Analytics report highlights that between 2006 and 2016, the complexity of medical device software grew at a 32% compound annual growth rate (CAGR), while productivity increased at only a 2% CAGR.
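
Compounding those two reported rates over the decade shows how stark the gap becomes. This is a back-of-the-envelope calculation that assumes the rates held steady across all ten years:

```python
# Compound the reported growth rates over the 2006-2016 decade.
complexity = 1.32 ** 10    # ~16.1x: software complexity at a 32% CAGR
productivity = 1.02 ** 10  # ~1.22x: productivity at a 2% CAGR
print(f"complexity: {complexity:.1f}x, productivity: {productivity:.2f}x, "
      f"gap: {complexity / productivity:.1f}x")
# -> complexity: 16.1x, productivity: 1.22x, gap: 13.2x
```

In other words, by the end of the period the work had grown roughly thirteen times faster than the capacity to do it.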

A Human-AI Combination Lock

The FDA is also grappling with significant workforce challenges: recent reductions have affected multiple centers and exacerbated ongoing staffing pressures. Attrition at the agency has hovered around 13% since fiscal 2018, leaving reviewers struggling to keep pace with the sheer volume of information.

Rather than relying solely on large language models, which process text sequentially through pattern recognition, regulators may benefit from combining them with symbolic AI. Symbolic AI employs explicit if-then rules to navigate complex decision trees, akin to a chess player who knows every possible move and strategy beforehand.
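
As a toy illustration of that if-then style, consider a handful of explicit rules evaluated against a submission record. The field names and checks below are invented for the example; real review criteria are far more extensive:

```python
# A toy symbolic rule set: explicit if-then logic over a submission record.
RULES = [
    ("device_class_known", lambda d: d.get("device_class") in {"I", "II", "III"}),
    ("predicate_named",    lambda d: bool(d.get("predicate_device"))),
    ("testing_attached",   lambda d: d.get("bench_testing") is not None),
]

def evaluate(doc: dict) -> list:
    """Return the name of every rule the document violates."""
    return [name for name, check in RULES if not check(doc)]

submission = {"device_class": "II", "predicate_device": "K123456"}
print(evaluate(submission))  # -> ['testing_attached']
```

Unlike a neural model’s output, every verdict here traces back to a named rule, which is what makes this style of logic auditable.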

Such explicit rules can be instrumental in applying AI to regulatory science, enabling validated AI agents that complement the work of humans, whether at the agency or in industry. For instance, at the end of a pharmaceutical production line, a quality control machine tests every vial. Previously, a person with a master’s degree or Ph.D. would have checked each vial, but that approach is too slow to scale, which led to automated testing machines that often leverage machine learning or AI.
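
A minimal sketch of what such an automated check might compute, using a simple statistical outlier test over fill volumes. The tolerance, the measurements, and the pass/fail rule are all invented for illustration; a production system would be a validated model with far richer inputs:

```python
import statistics

def vial_passes(fill_ml: float, history: list, z_limit: float = 3.0) -> bool:
    """Flag vials whose fill volume deviates sharply from recent history."""
    mean = statistics.mean(history)
    spread = statistics.stdev(history)
    return abs(fill_ml - mean) <= z_limit * spread

recent = [10.01, 9.98, 10.02, 10.00, 9.99, 10.01]
print(vial_passes(10.00, recent))  # True: within tolerance
print(vial_passes(9.40, recent))   # False: likely underfilled
```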

Toward Accountable Autonomy

As document complexity continues to outpace human review capacity, the FDA may need to evolve its approach. The agency, along with others working on AI in a regulatory context, will likely need to merge neural and symbolic methodologies. The industry goal is accountable autonomy: AI that can operate independently, but within clearly defined boundaries.

The potential of AI lies in its ability to offload complexity, but only if we can trace its decisions, validate its actions, and ensure its safety at every step. When implemented alongside the traceability infrastructure and validation protocols required in regulated industries, AI systems have demonstrated dramatic efficiency gains. For instance, one organization has reduced its software release process from a year-long endeavor to just a week.
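
A hedged sketch of what accountable autonomy plus traceability could look like at the code level: the agent acts on its own, but only within an explicitly declared action set, and every attempt, allowed or blocked, lands in an audit log. The action names and boundary here are hypothetical:

```python
import json
import time

# The explicitly defined boundary: actions the agent may take on its own.
ALLOWED_ACTIONS = {"summarize_section", "flag_inconsistency", "request_human_review"}

def execute(action: str, payload: dict, audit_log: list) -> dict:
    """Run an agent action inside the boundary, logging every attempt."""
    entry = {"ts": time.time(), "action": action, "payload": payload}
    if action not in ALLOWED_ACTIONS:
        entry["outcome"] = "blocked"   # outside the mandate: stop and record
        audit_log.append(entry)
        raise PermissionError(f"'{action}' is outside the agent's mandate")
    entry["outcome"] = "executed"      # inside the mandate: proceed and record
    audit_log.append(entry)
    return entry

log = []
execute("summarize_section", {"section": "protocol"}, log)
print(json.dumps(log, indent=2))  # the audit trail a reviewer could inspect
```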

Ultimately, the challenge extends beyond immediate efficiency gains. As the floodgates open to the possibilities AI presents, it is clear that many manual processes could be automated. A large part of the reason we do not have better medicine today is the overwhelming volume of paperwork these processes demand.
