Understanding the European AI Act and Data Protection for Life Sciences Companies
Life sciences companies face unique challenges in complying with the European Artificial Intelligence Act, the first binding regulation of AI globally. The act, adopted in June 2024, aims to ensure that AI technologies operate safely and ethically within the European Union (EU). It entered into force on August 1, 2024, and applies across all 27 EU member states.
Scope and Classification of AI Systems
The AI Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Systems posing unacceptable risk are prohibited outright and must be phased out within six months of entry into force. Provisions for high-risk, limited-risk, and minimal-risk systems take effect 24 to 36 months after entry into force.
High-risk AI systems include those that could potentially impact health, safety, or fundamental rights, such as medical devices. Providers of these systems must undergo a conformity assessment before their products can be sold within the EU.
Intersection with GDPR
The AI Act intersects significantly with the General Data Protection Regulation (GDPR), which mandates that businesses process personal data responsibly, particularly when high risks to individual rights are involved. Companies that process the personal data of EU residents, whether based in the EU or the US, must conduct a Data Protection Impact Assessment (DPIA) when data processing poses significant risks.
Many principles of the AI Act echo those of the GDPR, allowing life sciences companies to leverage their existing compliance frameworks to meet new AI regulations.
Complying with the AI Act
To comply with the AI Act, organizations should:
- Map existing AI systems and classify them according to the act’s risk categories.
- Implement measures to ensure staff are adequately trained in AI literacy by February 2, 2025.
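The first step above, mapping and classifying an AI inventory, can be sketched in code. This is an illustrative sketch only: the tier names follow the act's four categories, but the example systems, their assigned tiers, and the function name are hypothetical, and real classification is a legal judgment, not a lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable-risk"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical inventory: each deployed AI system mapped to the tier
# assigned during the organization's classification exercise.
inventory = {
    "diagnostic-imaging-model": RiskTier.HIGH,   # medical-device use case
    "patient-facing-chatbot": RiskTier.LIMITED,  # transparency duties apply
    "internal-spam-filter": RiskTier.MINIMAL,
}

def systems_needing_conformity_assessment(inv):
    """List systems that must pass a conformity assessment before EU market entry."""
    return sorted(name for name, tier in inv.items() if tier is RiskTier.HIGH)

print(systems_needing_conformity_assessment(inventory))
# -> ['diagnostic-imaging-model']
```

Keeping the inventory in a structured form like this makes it straightforward to report, for each obligation deadline, which systems are affected.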
The act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, which may adapt after deployment and which infers from its inputs how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.
Research Exemption and Real-World Evidence
The AI Act includes a research exemption: it does not apply to AI systems developed solely for scientific research. The scope of this exemption is ambiguous, however. Clinical trials and drug discovery likely qualify, while research conducted for commercial purposes may not.
Additionally, the use of AI for real-world evidence—data not collected specifically for research but used for secondary purposes—poses compliance questions. Many real-world applications may qualify as research, but businesses must assess each case to determine AI Act applicability.
Risk Assessment, Explainability, and Accountability
Developers of high-risk AI systems are required to conduct thorough risk assessments. This assessment must extend beyond data protection to evaluate potential harms caused by AI solutions. Developers must also ensure their systems are explainable, providing clarity on how decisions are made with AI.
Good practices and accountability measures are essential, particularly in healthcare, where AI systems must integrate seamlessly with existing electronic health records to maintain accurate and traceable records of advice given.
Data Protection Impact Assessments
As companies navigate the complexities of GDPR, conducting a DPIA is crucial for identifying risks associated with data processing activities. A DPIA should be conducted throughout the development process, allowing organizations to proactively address potential issues rather than retrofitting solutions after deployment.
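A DPIA screening step can be embedded in the development process itself. The sketch below is a simplified illustration, not a legal test: the trigger names are loose stand-ins for the GDPR Article 35 criteria, and the function and field names are assumptions introduced here.

```python
def dpia_required(processing: dict) -> bool:
    """Illustrative screening: flag processing activities that likely need a DPIA.

    The keys are simplified stand-ins for GDPR Article 35 triggers,
    not an exhaustive legal checklist.
    """
    triggers = (
        processing.get("large_scale_special_category_data", False),
        processing.get("systematic_monitoring", False),
        processing.get("novel_technology_high_risk", False),  # e.g. a new AI solution
    )
    return any(triggers)

# Example: an AI model trained on patient health records.
print(dpia_required({"large_scale_special_category_data": True}))  # True
```

Running a check like this at each development milestone supports the point above: risks get addressed as they arise rather than retrofitted after deployment.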
Through effective assessments, companies can enhance their products and communications with users, ensuring that data flows are secure and that individuals feel in control of their information.
Conclusion
The evolving landscape of AI regulations in Europe presents both challenges and opportunities for life sciences companies. By understanding the implications of the AI Act and GDPR, organizations can better prepare to meet regulatory requirements and innovate responsibly.