Protecting Human Rights in the EU AI Act: A Call for Stronger Safeguards

Human Rights and the EU AI Act

The EU AI Act, which came into force on August 1, 2024, initiated a co-regulatory process involving a working group of nearly 1,000 stakeholders from AI companies, academia, and civil society organizations. This group is in the final stages of drafting the General Purpose AI Code of Practice, a detailed manual guiding AI developers on compliance with key portions of the AI Act.

Developers adhering to this manual are granted a presumption of compliance with the Act, although they retain the option to comply in alternative ways. However, concerns are growing that the draft of the Code fails to adequately protect human rights.

Concerns Over the Draft Code

The AI Act differentiates between “general-purpose AI models” and “general-purpose AI models with systemic risk.” Those in the latter category must perform risk assessments and mitigate identified risks. A pivotal function of the Code is to outline the risk types that model providers must evaluate.

Earlier drafts of the Code took a two-tier approach to risk categories. In the current draft, however, the second tier has been downgraded from “additional” to “optional.” At a workshop for civil society participants, the drafters signaled an intent to lighten the requirements placed on AI companies, arguing that stricter obligations would cause Europe to miss out on AI innovation.

Optional Risks and Their Implications

The list of optional risks is alarmingly extensive and includes:

  • Risks to public health, safety, or public security, such as risks to critical sectors and infrastructure.
  • Risks to fundamental rights, which encompass freedom of expression, non-discrimination, and privacy concerns.
  • Risks to society as a whole, covering environmental risks, financial system stability, and threats to democratic processes.

Of particular note, risks related to discrimination have shifted from the compulsory list to the optional one. Presently, only four risks necessitate mandatory assessment under the Code: chemical, biological, radiological, and nuclear (CBRN), cyber offense, loss of control, and harmful manipulation.

The Argument Against Human Rights Risks

The drafting team appears to imply that human rights risks are not among the primary “systemic risks” linked to powerful general-purpose AI models. However, public letters from lead negotiators highlight that these risks can arise from the widespread adoption of AI models, not just from their high-impact capabilities.

Ongoing issues, such as discrimination in AI models—stemming from biases within training data—are well-documented. Discrimination is not a theoretical concern but a current reality that requires active assessment and mitigation.

Privacy and Data Concerns

The reliance of AI models on publicly scraped content, often rich in personal data, poses significant privacy challenges. The danger of generating inaccurate or defamatory information about real individuals due to the retention of personal data is increasingly evident.

Neglecting to address these established risks could have profound implications for the EU and its regulatory framework.

International Context and Conclusions

The current Code’s approach not only misinterprets the AI Act but also contradicts ongoing international efforts regarding AI safety. For instance, the Hiroshima Code of Conduct for Advanced AI Systems requires providers to assess and mitigate risks related to privacy, bias, and discrimination.

As global consensus evolves, the latest International AI Safety Report emphasizes a wide range of risks, including manipulation, bias, and environmental concerns associated with general-purpose AI models.

In conclusion, the draft Code of Practice, as it stands, represents a regression because of its inadequate approach to human rights protections. The drafters must act decisively to ensure that developers assess and mitigate the human rights risks their models pose. If they do not, the EU AI Act’s promise as a global governance model that puts people before corporate interests will go unfulfilled, with far-reaching consequences.
