AI’s Impact on Life Sciences: Challenges and Opportunities

New Frontiers: How AI is Transforming the Life Sciences Industry – Patient, Commercial, and Regulatory Concerns

While the implementation of AI is growing rapidly, obstacles to deeper adoption remain. These pressure points are consistent across subsectors: protecting sensitive data; integrating tools with legacy systems; clarifying legal and IP risks; and turning governance policies into real-world practices.

Data Security Challenges

Data security tops the list of practical challenges, cited by 55 percent of respondents. The concern is clear: AI workflows often involve highly sensitive information—patient records, safety data, manufacturing parameters, and commercial strategy. Missteps can trigger regulatory scrutiny, legal liability, and reputational damage.

Security issues are made more complex by how AI systems aggregate data from many sources, move it across teams and borders, and sometimes introduce third-party platforms into the mix. As one healthcare provider executive states: “Sensitive information may be exposed to cyber threats. Given the sophisticated cyberattacks that we see today, we do not want to risk broader use of data.”

Rather than bolting on security as an afterthought, companies making steady progress tend to limit the volume of sensitive data in the first place. Common strategies include restricting how many systems a model touches, pulling only the fields needed, and masking data for experimentation. Encryption in transit and at rest is standard, but there is growing emphasis on minimizing duplicates and knowing exactly where third-party vendors store or access data.
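These minimization steps can be sketched in a few lines of code. The record layout, field names, and masking scheme below are illustrative assumptions, not drawn from any particular system; the point is only to show "pull only the fields needed" and "mask identifiers" as concrete operations:

```python
import hashlib

# Full source record, as it might arrive from a clinical system.
record = {
    "patient_id": "P-10482",
    "name": "Jane Doe",
    "date_of_birth": "1987-03-14",
    "lab_result": 6.2,
    "site": "Berlin",
}

# 1. Pull only the fields the model actually needs.
NEEDED_FIELDS = ["patient_id", "lab_result", "site"]
minimized = {k: record[k] for k in NEEDED_FIELDS}

# 2. Mask the direct identifier with a salted one-way hash, so
#    experiments can still link rows without exposing the raw ID.
SALT = "rotate-me-per-project"  # hypothetical; store separately from the data

def mask(value: str) -> str:
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

minimized["patient_id"] = mask(minimized["patient_id"])

# Name and date of birth never leave the source system.
print(minimized)
```

In practice the salt would be managed like a secret and rotated per project, so that masked IDs from different experiments cannot be joined.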

Additional Hurdles

Security concerns sit alongside high costs (46 percent), legacy integration challenges (39 percent), scalability issues (38 percent), and skills gaps (38 percent) as day-to-day hurdles—and they are often intertwined. Older clinical and manufacturing systems were not designed for the volume and cadence of AI workflows, and connecting them safely takes time.

Indeed, integration is often difficult because many AI tools are incompatible with outdated infrastructure: organizations end up with a capable AI tool on one side and legacy systems on the other, with no easy bridge between them.

Moreover, the talent needed to stitch modern data tooling into regulated environments remains in short supply, compounding integration delays even when funding is available. “We’ve been struggling with skills gaps for completing AI-related projects,” notes the head of technology at an animal health company in India.

Legal and IP Concerns

Legal concerns are dominated by two issues: patient privacy and data protection (42 percent) and contractual/licensing risk (42 percent). The breakdown varies by subsector. Healthcare providers, for example, place far more weight on privacy (66 percent) than any other respondent type.

“If we are unable to protect patient data, we risk reputational damage,” says the COO of a healthcare provider. “Mitigating the risk of legal claims and settlements is important to avoid any financial pressure on the company.”

For pharma companies, privacy appears to be less of a concern because AI in drug development is typically applied to mapping molecules and their mechanisms of action to identify targets. This work inherently raises fewer privacy and personal data protection issues for those organizations.

42 percent of respondents place patient privacy and data protection among their top two key legal risks relating to AI implementation.

Animal health companies are more likely to cite licensing risk (60 percent), which aligns with their broader use of third-party tools and reliance on data from dispersed clinics and farms. Medical device companies frequently highlight cross-border jurisdictional issues (40 percent) and licensing complexity (44 percent), given the multi-market nature of product development, field connectivity, and post-market surveillance.

These concerns are not theoretical. Many valuable AI inputs—chemistry datasets, proprietary models, third-party databases, and data sourced from contract research organizations (CROs)—are governed by restrictive contracts. Using them for training or fine-tuning without clear rights can lead to breach-of-contract claims, even when copyright law is less definitive.

“There could be the risk of using copyrighted materials for training AI,” says the director of innovation at a Taiwanese healthcare provider. “Developers who do not have complete knowledge of these issues may do so unknowingly.”

The IP Question

IP protection is also a grey area. While 31 percent of respondents are very concerned about potential IP infringement from using AI, another 51 percent are somewhat concerned. Just 18 percent are not worried. These views are fairly consistent across sectors.

Meanwhile, 60 percent of all respondents judge current protections for AI-assisted outputs to be weak, rising to 80 percent in animal health. Regionally, the figure hits 85 percent in Asia-Pacific, compared with 44 percent in EMEA. Uncertainty over who owns model-influenced designs or content, and whether those outputs meet patentability or authorship thresholds, is a recurring theme.

Enforcement uncertainty compounds the problem. When model-assisted content is shared across jurisdictions, companies face a patchwork of standards governing authorship, database rights, and inventorship, each of which can affect whether AI-influenced innovations can be protected or commercialized.

Governance, Training, and Board Oversight

Many companies are taking steps to improve oversight. A solid majority (63 percent) now have formal AI training programs in place, rising to 72 percent in human pharma. This trend is likely to accelerate.

Under the EU AI Act, companies that develop, deploy, or use high-risk AI systems—including many tools used in clinical decision-making, diagnostics, and other medical device software—must ensure that relevant personnel receive appropriate training.

Training must cover how the system works, the intended use, known limitations, and how to exercise meaningful human oversight, particularly where patient safety or product quality is at stake. This includes not only technical staff but also those involved in the use, supervision, and governance of AI systems. These requirements have been in effect since February 2025, meaning companies must act now to ensure compliance, especially those operating in EU markets or selling high-risk AI systems there.

The goal is to ensure that humans remain meaningfully involved and accountable when relying on complex or opaque systems. In practical terms, this means companies must formalize training programs, keep records of participation, and update materials in line with system changes or regulatory updates.
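The record-keeping piece of this obligation can be as simple as an auditable log of who completed which training, and against which version of the material. The sketch below is purely illustrative; the fields, roles, and versioning scheme are assumptions, not requirements drawn from the AI Act itself:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class TrainingRecord:
    employee_id: str
    role: str              # e.g. "clinical reviewer", "ML engineer"
    module: str            # which AI system or topic the training covers
    material_version: str  # bump when the system or the rules change
    completed_on: date

@dataclass
class TrainingRegister:
    records: list = field(default_factory=list)

    def log(self, rec: TrainingRecord) -> None:
        self.records.append(rec)

    def needs_refresher(self, employee_id: str, module: str,
                        current_version: str) -> bool:
        """True if the employee has never trained on the current material."""
        return not any(
            r.employee_id == employee_id
            and r.module == module
            and r.material_version == current_version
            for r in self.records
        )

register = TrainingRegister()
register.log(TrainingRecord("E-001", "clinical reviewer",
                            "diagnostic-ai", "v1", date(2025, 2, 10)))

# After a system update the material is revised to v2, so the
# earlier completion no longer counts.
print(register.needs_refresher("E-001", "diagnostic-ai", "v2"))  # True
print(register.needs_refresher("E-001", "diagnostic-ai", "v1"))  # False
```

Versioning the training material, rather than only dating completions, is what lets the register answer "was this person trained on the system as it exists today."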

For multinational life sciences organizations, even those headquartered outside the EU, these training requirements are fast becoming non-negotiable, particularly for companies marketing products in the EU. Unless and until the AI Act is amended, documented, role-specific training is shifting from best practice to regulatory obligation.

Human pharma also leads on broader governance. Nearly two-thirds (64 percent) report having an AI risk-management strategy, compared with 40 percent in devices. This reflects pharma’s more advanced use of AI in R&D and safety monitoring. Meanwhile, animal health firms report the highest incidence of AI-specific use policies (60 percent), driven by the fragmented nature of their clinical settings and data sources.

Board-level attention varies. Overall, 48 percent of respondents say AI is frequently discussed at the board level, but the figure rises to 64 percent in medical devices and 56 percent in human pharma. Only 32 percent of animal health companies and 30 percent of healthcare providers report the same. Regionally, North America leads (60 percent), followed by EMEA (47 percent) and Asia-Pacific (39 percent).

A vice president of a life sciences multinational notes: “AI is not a magic wand, so we’re careful about piloting and ensuring compliance, especially on privacy and regulatory fronts. Internally, we’ve got AI tools available across the business, and there are flagship AI projects led by our executive committee focused on simplification and optimization.”

Legal Uncertainty

There is also a pervasive sense that legal frameworks are lagging behind the rapid advancements in AI technology, leading to a complex landscape of compliance and risk management that companies must navigate.
