Protecting Human Rights in the EU AI Act: A Call for Stronger Safeguards

Human Rights and the EU AI Act

The EU AI Act, which came into force on August 1, 2024, initiated a co-regulatory process involving a working group of nearly 1,000 stakeholders from AI companies, academia, and civil society organizations. This group is in the final stages of drafting the General Purpose AI Code of Practice, a detailed manual guiding AI developers on compliance with key portions of the AI Act.

Developers adhering to this manual are granted a presumption of compliance with the Act, although they retain the option to comply in alternative ways. However, concerns are growing that the draft of the Code fails to adequately protect human rights.

Concerns Over the Draft Code

The AI Act differentiates between “general-purpose AI models” and “general-purpose AI models with systemic risk.” Those in the latter category must perform risk assessments and mitigate identified risks. A pivotal function of the Code is to outline the risk types that model providers must evaluate.

Initially, the Code included a two-tier approach for risk categories. Yet, in the current draft, the second risk category has transitioned from being “additional” to “optional.” At a workshop for civil society participants, drafters signaled a push to reduce requirements on AI companies, arguing that lighter obligations would keep Europe from missing out on AI innovation.

Optional Risks and Their Implications

The list of optional risks is alarmingly extensive and includes:

  • Risks to public health, safety, or public security, such as risks to critical sectors and infrastructure.
  • Risks to fundamental rights, which encompass freedom of expression, non-discrimination, and privacy concerns.
  • Risks to society as a whole, covering environmental risks, financial system stability, and threats to democratic processes.

Of particular note, risks related to discrimination have shifted from the compulsory list to the optional one. Presently, only four risks require mandatory assessment under the Code: chemical, biological, radiological, and nuclear (CBRN); cyber offense; loss of control; and harmful manipulation.

The Argument Against Human Rights Risks

The drafting team appears to imply that human rights risks are not among the primary “systemic risks” linked to powerful general-purpose AI models. However, public letters from lead negotiators highlight that these risks can arise from the widespread adoption of AI models, not just from their high-impact capabilities.

Discrimination in AI models, stemming from biases within training data, is well documented. It is not a theoretical concern but a current reality that requires active assessment and mitigation.

Privacy and Data Concerns

The reliance of AI models on publicly scraped content, often rich in personal data, poses significant privacy challenges. Because models retain personal data from their training sets, the danger that they will generate inaccurate or defamatory information about real individuals is increasingly evident.

Neglecting to address these established risks could have profound implications for the EU and its regulatory framework.

International Context and Conclusions

The current Code’s approach not only misinterprets the AI Act but also contradicts ongoing international efforts regarding AI safety. For instance, the Hiroshima Code of Conduct for Advanced AI Systems requires providers to assess and mitigate risks related to privacy, bias, and discrimination.

As global consensus evolves, the latest International AI Safety Report emphasizes a wide range of risks, including manipulation, bias, and environmental concerns associated with general-purpose AI models.

In conclusion, the draft Code of Practice, as it stands, represents a regression due to its inadequate approach to human rights protections. The drafters must take decisive action to ensure that developers assess the human rights risks posed by their models; otherwise, the EU AI Act will fall short of its promise as a global governance model that prioritizes people over corporate interests, with far-reaching consequences.
