Human Rights and the EU AI Act
The EU AI Act, which came into force on August 1, 2024, initiated a co-regulatory process involving a working group of nearly 1,000 stakeholders from AI companies, academia, and civil society organizations. This group is in the final stages of drafting the General Purpose AI Code of Practice, a detailed manual guiding AI developers on compliance with key portions of the AI Act.
Developers adhering to this manual are granted a presumption of compliance with the Act, although they retain the option to comply in alternative ways. However, concerns are growing that the draft of the Code fails to adequately protect human rights.
Concerns Over the Draft Code
The AI Act differentiates between “general-purpose AI models” and “general-purpose AI models with systemic risk.” Those in the latter category must perform risk assessments and mitigate identified risks. A pivotal function of the Code is to outline the risk types that model providers must evaluate.
The Code initially took a two-tier approach to risk categories. In the current draft, however, the second tier has been downgraded from “additional” to “optional.” A workshop for civil society participants revealed a push to reduce the requirements placed on AI companies, on the grounds that lighter obligations would keep Europe from missing out on AI innovation.
Optional Risks and Their Implications
The list of optional risks is alarmingly extensive and includes:
- Risks to public health, safety, or public security, such as risks to critical sectors and infrastructure.
- Risks to fundamental rights, which encompass freedom of expression, non-discrimination, and privacy concerns.
- Risks to society as a whole, covering environmental risks, financial system stability, and threats to democratic processes.
Of particular note, risks related to discrimination have moved from the compulsory list to the optional one. As it stands, only four risk types must be assessed under the Code: chemical, biological, radiological, and nuclear (CBRN) risks; cyber offense; loss of control; and harmful manipulation.
The Argument Against Human Rights Risks
The drafting team appears to assume that human rights risks are not among the core “systemic risks” posed by powerful general-purpose AI models. Yet public letters from the Act’s lead negotiators make clear that systemic risks can arise from the widespread adoption of AI models, not only from their high-impact capabilities.
Discrimination in AI models, stemming from biases in their training data, is a well-documented and ongoing problem. It is not a theoretical concern but a current reality that requires active assessment and mitigation.
Privacy and Data Concerns
AI models rely heavily on publicly scraped content that is often rich in personal data, which poses significant privacy challenges. Because models retain this personal data, they can generate inaccurate or even defamatory information about real individuals, a danger that is increasingly evident.
Neglecting to address these established risks could have profound implications for the EU and its regulatory framework.
International Context and Conclusions
The current Code’s approach not only misreads the AI Act but also cuts against ongoing international efforts on AI safety. For instance, the G7’s Hiroshima Code of Conduct for organizations developing advanced AI systems calls on providers to assess and mitigate risks related to privacy, bias, and discrimination.
As global consensus evolves, the latest International AI Safety Report emphasizes a wide range of risks, including manipulation, bias, and environmental concerns associated with general-purpose AI models.
In conclusion, the draft Code of Practice, as it stands, is a step backwards because of its inadequate approach to human rights protections. The drafters must act decisively to ensure that developers assess the human rights risks posed by their models. If the EU AI Act fails to deliver on its promise as a global governance model that puts humanity before corporate interests, the consequences could be far-reaching.