Challenges of Implementing Regulated AI in Drug Development

This AI Compliance CEO Underscores That Deploying Regulated AI Is ‘Incredibly Difficult’

The FDA hails the recent rollout of its internal AI tool, Elsa, as a major step toward tackling the crushing weight of regulatory review, where documents thousands of pages long are commonplace. However, reports of a rushed, buggy rollout suggest that the agency may face significant challenges as it builds out the necessary infrastructure.

One regulated AI expert notes, “I think it’s just really, really hard to make regulated AI work well.” This statement encapsulates the complexity involved in integrating AI within regulatory frameworks, particularly given the scale and intricacy of the required documentation.

Beyond the Context Window

The issue is not merely the size of the documents but the fundamental architecture of the AI system itself. Early reports suggest that Elsa is largely based on a large language model (LLM). A more resilient strategy, however, may be a neuro-symbolic framework, which combines the pattern-recognition power of modern neural networks with the structured, rule-based logic of traditional symbolic AI. This hybrid approach could break the monolithic review process into a series of smaller, verifiable steps, much like a flowchart, allowing generative AI to execute specific, smaller-context tasks effectively.

Without this structured approach, even the most sophisticated LLMs can become overwhelmed by the interconnectedness and complexity of regulatory documents, where information is scattered across thousands of pages and every detail must be traceable to its source.
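
To make the idea concrete, here is a minimal sketch of what that flowchart-style decomposition might look like, with the LLM calls stubbed out as placeholder lambdas. All names here are illustrative assumptions, not Elsa’s actual design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewStep:
    """One node in the symbolic flowchart: a small-context neural task plus an explicit rule."""
    name: str
    task: Callable[[str], str]    # neural component, e.g. a scoped LLM call
    check: Callable[[str], bool]  # symbolic component: an inspectable pass/fail rule

def run_review(steps: list[ReviewStep], document: str) -> list[dict]:
    """Run each step in order, keeping a traceable record of every result."""
    trail = []
    for step in steps:
        output = step.task(document)
        passed = step.check(output)
        trail.append({"step": step.name, "output": output, "passed": passed})
        if not passed:  # symbolic gate: stop and flag for human review
            break
    return trail

# The model calls are stubbed with lambdas so the sketch runs on its own.
steps = [
    ReviewStep("extract_indication", lambda d: "hypertension", lambda o: len(o) > 0),
    ReviewStep("check_dosage_units", lambda d: "mg", lambda o: o in {"mg", "mL"}),
]
print(run_review(steps, "full submission text goes here"))
```

Because each step has a small, bounded context and an explicit check, a failure points to a specific node in the flowchart rather than to an opaque end-to-end answer.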

The Documentation Deluge

Developing regulated products is inherently complex. To illustrate the difference between typical AI applications and regulated environments, consider a journalist’s workflow, which typically runs 10 to 20 steps from interview to publication. By contrast, taking a drug from initial discovery through regulatory approval and into manufacturing involves vastly more steps, each of which must be documented and traceable.

This complexity is just as evident in the medical device industry, where the most common pathway to market, the 510(k) process, requires proving “substantial equivalence” to a predicate device. Every decision, from design to testing, generates a branching path of documentation requirements.

Data from McKinsey’s Numetrics R&D Analytics report highlights that between 2006 and 2016, the complexity of medical device software grew at a 32% compound annual growth rate (CAGR), while developer productivity increased at only a 2% CAGR.
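
Compounding those two rates over the decade makes the gap concrete: complexity grew roughly sixteen-fold while productivity grew by only about a fifth. A quick back-of-the-envelope check:

```python
# Compound the two McKinsey growth rates over the 2006-2016 decade.
years = 10
complexity = 1.32 ** years    # ~16.1x more complex
productivity = 1.02 ** years  # ~1.22x more productive
print(f"complexity: {complexity:.1f}x, productivity: {productivity:.2f}x")
print(f"gap: {complexity / productivity:.1f}x more work per unit of capacity")
```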

A Human-AI Combination Lock

The FDA is also grappling with significant workforce challenges, with recent reductions affecting multiple centers and exacerbating ongoing staffing pressures. Attrition at the agency has hovered around 13% since fiscal 2018, leaving reviewers struggling to keep pace with the sheer volume of information.

Rather than relying solely on large language models, which process text through statistical pattern recognition, pairing them with symbolic AI may prove beneficial. Symbolic AI employs explicit if-then rules to navigate complex decision trees, akin to a chess player who knows every possible move and strategy beforehand.
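
As a hedged illustration of what such explicit rules might look like, consider routing a device to a regulatory pathway. The rules and the predicate number below are simplified assumptions for the sketch, not actual FDA criteria:

```python
def classify_pathway(device: dict) -> str:
    """Navigate a simplified, hypothetical regulatory decision tree with
    explicit if-then rules; every branch is inspectable and auditable."""
    if device["risk_class"] == "III":
        return "PMA"                      # highest-risk devices: premarket approval
    if device.get("predicate") is not None:
        return "510(k)"                   # substantial equivalence to a predicate
    if device["risk_class"] == "I" and device.get("exempt", False):
        return "exempt"
    return "De Novo"                      # novel low/moderate-risk device

# "K123456" is a made-up predicate identifier for illustration.
print(classify_pathway({"risk_class": "II", "predicate": "K123456"}))  # -> 510(k)
```

Unlike an LLM’s answer, every outcome here can be traced to the exact rule that produced it.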

Such explicit rules can be instrumental in applying AI to regulatory science, creating validated AI agents that complement the work of humans, whether at the agency or in industry. Consider the end of a pharmaceutical production line, where a quality control machine inspects every vial. That check was once performed by a person with a master’s degree or Ph.D., an approach too slow to scale, which is why automated testing machines, often built on machine learning, now do the job.
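
A minimal sketch of such a rule-gated inspection step, assuming a trained defect classifier (stubbed here with a random score) and illustrative thresholds that a real line would fix during validation:

```python
import random

def model_score(vial_image) -> float:
    """Stand-in for a trained defect classifier; returns a defect probability."""
    return random.random()

def inspect_vial(vial_id: str, vial_image=None) -> dict:
    score = model_score(vial_image)
    # Symbolic acceptance rules: thresholds are illustrative, set during validation.
    if score < 0.05:
        decision = "accept"
    elif score > 0.95:
        decision = "reject"
    else:
        decision = "escalate_to_human"  # the uncertain middle goes to a person
    return {"vial": vial_id, "score": round(score, 3), "decision": decision}

for i in range(3):
    print(inspect_vial(f"VIAL-{i:04d}"))
```

The neural model does the perception; the symbolic thresholds decide, and anything ambiguous still reaches a human.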

Toward Accountable Autonomy

As document complexity continues to outpace human review capacity, the FDA may need to evolve its approach. The agency, along with others working on AI in a regulatory context, will likely need to combine neural and symbolic methodologies. The industry goal is accountable autonomy: AI that operates independently, but only within clearly defined boundaries.
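
One way to picture accountable autonomy is an agent that may act freely inside an explicit envelope of permitted actions and must refuse and escalate anything outside it. A minimal, hypothetical sketch:

```python
# An explicit envelope of actions the agent may take on its own.
PERMITTED_ACTIONS = {"summarize_section", "cross_reference", "flag_inconsistency"}

def accountable_agent(action: str, payload: str, log: list) -> str:
    """Act autonomously only inside a clearly defined boundary; log everything."""
    if action not in PERMITTED_ACTIONS:
        log.append({"action": action, "status": "refused", "reason": "out of bounds"})
        return "escalated to human reviewer"
    log.append({"action": action, "status": "executed"})
    return f"ran {action} on {len(payload)} chars"

audit_log: list = []
print(accountable_agent("summarize_section", "section text here", audit_log))
print(accountable_agent("approve_submission", "", audit_log))  # outside the envelope
print(audit_log)
```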

The potential of AI lies in its ability to offload complexity, but only if we can trace its decisions, validate its actions, and ensure its safety at every step. When implemented alongside the traceability infrastructure and validation protocols required in regulated industries, AI systems have demonstrated dramatic efficiency gains. For instance, one organization has reduced its software release process from a year-long endeavor to just a week.
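
Tracing decisions and validating actions ultimately comes down to record-keeping: every AI output should carry enough metadata to reproduce and audit it. Here is a sketch of what one such record might hold; the field names and model identifier are assumptions, not any agency’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Everything needed to reproduce and audit one AI decision."""
    finding: str
    source_pages: list[int]  # where in the submission the evidence lives
    model_version: str       # pin the exact model used
    rule_applied: str        # the symbolic rule that gated the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

rec = DecisionRecord(
    finding="dosage units consistent across Module 3",
    source_pages=[412, 1088, 2307],
    model_version="qc-llm-2024-06",  # hypothetical identifier
    rule_applied="units_must_match",
)
print(rec)
```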

Ultimately, the challenge extends beyond immediate efficiency gains. As the floodgates open on what AI makes possible, it is clear that many manual processes could be automated. A large part of why we do not have better medicine today is simply the overwhelming volume of paperwork these processes involve.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...