The EU AI Act and the GDPR: Collision or Harmony?
AI is the omnipresent buzzword in the legal industry. Consequently, the EU AI Act is one of the main items on the radar of many legal professionals. But what about this other piece of legislation that was a very hot topic back in 2018, the General Data Protection Regulation (GDPR)?
After over seven years, GDPR compliance has become familiar territory for many in-house counsel and other professionals. There are a number of similarities and interplays between the newly introduced AI Act and the GDPR, as well as explicit references to the GDPR in the AI Act. In this article, we will discuss some of the most significant interactions.
General Overview
The extent to which the GDPR has shaped the AI Act is striking. Many of the GDPR’s data principles (transparency, accuracy, security) are mirrored in the AI Act, which, like the GDPR, takes a risk-based approach.
The most obvious intersection is that developing and using AI systems typically involves training AI models on data, some of which will almost certainly be personal data that must be processed in compliance with the GDPR. This requires appropriate data privacy safeguards, not only internally but also in relationships with suppliers and other parties in the AI ecosystem.
Significantly, Article 47 of the AI Act requires providers of high-risk AI systems to draw up a declaration of conformity, which must include a statement of GDPR compliance where the AI system involves the processing of personal data. In addition, in many Member States, the Data Protection Authority will also act as the market surveillance authority under the AI Act.
Transparency
The GDPR and the AI Act each have their own transparency regimes. In both cases, the objective is to ensure that individuals receive adequate information about how their personal data is processed and how the use of AI affects them.
- Under the GDPR (Articles 12-14), data subjects must be given clear and accessible information about the processing of their personal data, including purposes, legal basis, recipients, storage periods, and data subject rights.
- Under the AI Act, several transparency obligations apply. For example, Article 13 requires that high-risk AI systems be accompanied by instructions for use addressed to deployers. In addition, Article 50 requires providers to ensure that natural persons are informed that they are interacting with an AI system, unless this is obvious from the context.
Risk Management
From a risk-management perspective, the GDPR and the AI Act both employ a risk-based approach in terms of compliance. However, there is a fundamental difference between the two in terms of the stage at which risk is addressed.
- The GDPR is risk-based, requiring prior and ongoing assessment of the risks associated with the processing of personal data, and technical and organizational measures that address those risks proportionately.
- Under the AI Act, AI systems are categorized into tiers of risk: unacceptable risk (prohibited), high risk, limited risk (subject to transparency obligations), and minimal risk. The most onerous obligations attach to high-risk AI systems, which must comply with comprehensive requirements relating to risk management, assessment, and mitigation.
Accountability
Both the GDPR and the AI Act require accountability. Broadly, this involves documenting various steps, processes, and policies to demonstrate compliance.
- Whereas the GDPR requires data processing agreements to be put in place between controllers and (sub-)processors, the AI Act requires sufficient contractual safeguards between the various roles it defines, such as providers and deployers.
- Whereas the GDPR requires documentation such as Data Protection Impact Assessments (DPIAs) and records of processing, the AI Act requires more elaborate documentation of the development and design choices for high-risk AI systems and general-purpose AI (GPAI) models in order to demonstrate accountability.
DPIA and Fundamental Rights Assessments
Both the GDPR and the AI Act require risk assessments.
- Under the GDPR, in specific instances, controllers must carry out a Data Protection Impact Assessment (DPIA).
- Article 26(9) of the AI Act states that, where applicable, deployers of high-risk AI systems must use the information provided under Article 13 of the AI Act to comply with their GDPR obligation to carry out a DPIA.
- In addition, the AI Act (Article 27) requires certain deployers to carry out a fundamental rights impact assessment (FRIA) before deploying the high-risk AI systems listed in Annex III of the AI Act.
Processing of Sensitive Personal Data
The AI Act (Article 10(5)) exceptionally allows the processing of sensitive personal data, but only where strictly necessary for bias detection and correction in high-risk AI systems, and subject to certain conditions. These conditions apply in addition to the requirements of Article 9 GDPR, which sets out exceptions to the general prohibition on processing sensitive (special category) personal data.
Automated Decision-Making and Human Oversight
Both the AI Act and the GDPR provide for a form of human oversight.
- Article 22 of the GDPR gives data subjects the right not to be subject to a solely automated decision that has a legal or similarly significant effect on them. Where such a decision is permitted, they have the right to obtain human intervention, to express their point of view, and to contest the decision.
- The AI Act has a stronger regime in place in Article 14, under which high-risk AI systems must be designed with human-machine interface tools enabling effective oversight by natural persons; deployers, in turn, must assign competent personnel to carry out that oversight (Article 26(2)).
Incident Reporting
Both regimes provide for incident reporting, allowing an initial report to be followed by subsequent reports where the full scope or extent of the incident is not yet known by the reporting deadline.
- Under the GDPR (Article 33), the relevant incident is a personal data breach, which must be notified to the competent Data Protection Authority (DPA) without undue delay and, where feasible, not later than 72 hours after the controller has become aware of it.
- Under the AI Act (Article 73), providers must report serious incidents involving high-risk AI systems to the market surveillance authorities immediately after establishing a causal link between the AI system and the serious incident (or the reasonable likelihood of such a link), and in any event not later than 15 days after becoming aware of the incident.
Handling the Dual Compliance Burden
This overview highlights some of the areas where the concepts of the GDPR overlap with those in the AI Act. In terms of the compliance burden, however, there are only limited situations in which compliance with one piece of legislation will be sufficient under the other. More often, existing GDPR compliance processes can be extended where necessary to cover AI Act requirements.
Useful actions include:
- Map AI Act roles (provider, deployer, importer, distributor) to GDPR roles (controller, processor).
- Collect adequate information and incorporate it into relevant guidance and notices for users and data subjects. Regularly review whether this information remains up to date.
- Ensure that AI training data that contains personal data is subject to adequate data privacy safeguards.
- Align the GDPR Article 30 requirement to keep records of processing with the data governance requirements of Article 10 of the AI Act.
- Document AI development and monitoring to demonstrate compliance.
- Ensure that the AI Act FRIA and GDPR DPIA are mapped and streamline the workload where appropriate.
- Verify whether processing of sensitive personal data in the context of the AI Act is permitted under both the GDPR and the AI Act.
- Distinguish clearly between human intervention after automated decision-making under the GDPR and human oversight under the AI Act.
- Implement proper incident-reporting timeline management within your organization, taking into account the fundamental differences between the types of incident that can occur and the applicable deadlines (72 hours under the GDPR; 15 days under the AI Act).
- Understand who your regulator(s) are under the GDPR and the AI Act.