Protecting Human Rights in the EU AI Act: A Call for Stronger Safeguards

Human Rights and the EU AI Act

The EU AI Act, which came into force on August 1, 2024, initiated a co-regulatory process involving a working group of nearly 1,000 stakeholders from AI companies, academia, and civil society organizations. This group is in the final stages of drafting the General Purpose AI Code of Practice, a detailed manual guiding AI developers on compliance with key portions of the AI Act.

Developers adhering to this manual are granted a presumption of compliance with the Act, although they retain the option to comply in alternative ways. However, concerns are growing that the draft of the Code fails to adequately protect human rights.

Concerns Over the Draft Code

The AI Act differentiates between “general-purpose AI models” and “general-purpose AI models with systemic risk.” Those in the latter category must perform risk assessments and mitigate identified risks. A pivotal function of the Code is to outline the risk types that model providers must evaluate.

Initially, the Code took a two-tier approach to risk categories. In the current draft, however, the second tier has been downgraded from “additional” to “optional.” At a workshop for civil society participants, the drafters signaled a push to reduce requirements on AI companies, arguing that lighter obligations would keep Europe from missing out on AI innovation.

Optional Risks and Their Implications

The list of optional risks is alarmingly extensive and includes:

  • Risks to public health, safety, or public security, such as risks to critical sectors and infrastructure.
  • Risks to fundamental rights, which encompass freedom of expression, non-discrimination, and privacy concerns.
  • Risks to society as a whole, covering environmental risks, financial system stability, and threats to democratic processes.

Notably, risks related to discrimination have been moved from the compulsory list to the optional one. As it stands, only four risks require mandatory assessment under the Code: chemical, biological, radiological, and nuclear (CBRN) risks; cyber offense; loss of control; and harmful manipulation.

The Argument Against Human Rights Risks

The drafting team appears to take the position that human rights risks are not among the core “systemic risks” posed by powerful general-purpose AI models. Yet public letters from the Act’s lead negotiators make clear that systemic risks can arise from the widespread adoption of AI models, not only from their high-impact capabilities.

Ongoing issues, such as discrimination in AI models—stemming from biases within training data—are well-documented. Discrimination is not a theoretical concern but a current reality that requires active assessment and mitigation.

Privacy and Data Concerns

AI models’ reliance on publicly scraped content, which is often rich in personal data, poses significant privacy challenges. Because models retain this personal data, the danger that they generate inaccurate or defamatory information about real individuals is increasingly evident.

Neglecting to address these established risks could have profound implications for the EU and its regulatory framework.

International Context and Conclusions

The current Code’s approach not only misinterprets the AI Act but also contradicts ongoing international efforts regarding AI safety. For instance, the Hiroshima Code of Conduct for Advanced AI Systems requires providers to assess and mitigate risks related to privacy, bias, and discrimination.

As global consensus evolves, the latest International AI Safety Report emphasizes a wide range of risks, including manipulation, bias, and environmental concerns associated with general-purpose AI models.

In conclusion, the draft Code of Practice, as it stands, represents a regression in human rights protection. The drafters must act decisively to ensure that developers assess and mitigate the human rights risks their models pose. Otherwise, the EU AI Act will fail to deliver on its promise as a global governance model that prioritizes people over corporate interests.
