Protecting Human Rights in the EU AI Act: A Call for Stronger Safeguards

Human Rights and the EU AI Act

The EU AI Act, which came into force on August 1, 2024, initiated a co-regulatory process involving a working group of nearly 1,000 stakeholders from AI companies, academia, and civil society organizations. This group is in the final stages of drafting the General Purpose AI Code of Practice, a detailed manual guiding AI developers on compliance with key portions of the AI Act.

Developers who adhere to the Code are granted a presumption of compliance with the Act, though they retain the option to comply by other means. However, concerns are growing that the current draft of the Code fails to adequately protect human rights.

Concerns Over the Draft Code

The AI Act differentiates between “general-purpose AI models” and “general-purpose AI models with systemic risk.” Those in the latter category must perform risk assessments and mitigate identified risks. A pivotal function of the Code is to outline the risk types that model providers must evaluate.

Initially, the Code took a two-tier approach to risk categories. In the current draft, however, the second tier has shifted from being “additional” to merely “optional.” At a workshop for civil society participants, the drafters signaled a push to reduce requirements on AI companies, arguing that lighter obligations would keep Europe from missing out on AI innovations.

Optional Risks and Their Implications

The list of optional risks is alarmingly extensive and includes:

  • Risks to public health, safety, or public security, such as risks to critical sectors and infrastructure.
  • Risks to fundamental rights, which encompass freedom of expression, non-discrimination, and privacy concerns.
  • Risks to society as a whole, covering environmental risks, financial system stability, and threats to democratic processes.

Of particular note, risks related to discrimination have been moved from the compulsory list to the optional one. As a result, only four risk categories require mandatory assessment under the Code: chemical, biological, radiological, and nuclear (CBRN) risks; cyber offense; loss of control; and harmful manipulation.

The Argument Against Human Rights Risks

The drafting team appears to imply that human rights risks are not among the primary “systemic risks” linked to powerful general-purpose AI models. However, public letters from lead negotiators highlight that these risks can arise from the widespread adoption of AI models, not just from their high-impact capabilities.

Ongoing issues, such as discrimination in AI models—stemming from biases within training data—are well-documented. Discrimination is not a theoretical concern but a current reality that requires active assessment and mitigation.

Privacy and Data Concerns

AI models’ reliance on publicly scraped content, which is often rich in personal data, poses significant privacy challenges. Because models can retain that personal data, the danger that they will generate inaccurate or defamatory information about real individuals is increasingly evident.

Neglecting to address these established risks could have profound implications for the EU and its regulatory framework.

International Context and Conclusions

The current Code’s approach not only misinterprets the AI Act but also contradicts ongoing international efforts regarding AI safety. For instance, the Hiroshima Code of Conduct for Advanced AI Systems requires providers to assess and mitigate risks related to privacy, bias, and discrimination.

As global consensus evolves, the latest International AI Safety Report emphasizes a wide range of risks, including manipulation, bias, and environmental concerns associated with general-purpose AI models.

In conclusion, the draft Code of Practice, as it stands, represents a regression due to its inadequate approach to human rights protections. The drafters must take decisive action to ensure that developers consider the human rights risks posed by their models. Failure to uphold the promise of the EU AI Act as a global governance model prioritizing humanity over corporate interests could have far-reaching consequences.
