EU AI Office Seeks Contractors for Compliance Monitoring

The EU AI Act and Its Implications

The EU AI Act is a significant legislative framework aimed at regulating artificial intelligence within the European Union. The act is designed to ensure that AI technologies are safe and ethical and that they respect fundamental rights.

AI Office AI Safety Tender

Recently, the AI Office announced a tender worth €9,080,000 for third-party contractors to assist in monitoring compliance with the AI Act. The tender is divided into six lots, five of which address specific systemic risks associated with AI technologies:

  • CBRN (Chemical, Biological, Radiological, and Nuclear)
  • Cyber offence
  • Loss of control
  • Harmful manipulation
  • Sociotechnical risks

These risk-focused lots will involve activities such as risk-modeling workshops, the development of evaluation tools, and ongoing risk-monitoring services. The sixth lot covers agentic evaluation interfaces: software and infrastructure for evaluating general-purpose AI models across diverse benchmarks.

Influence of Big Tech on AI Regulations

According to an investigation by Corporate Europe Observatory, Big Tech companies have significantly influenced the weakening of the Code of Practice for general-purpose AI models, which is a crucial component of the AI Act. Despite concerns raised by smaller developers, major corporations like Google, Microsoft, and Amazon had privileged access to the drafting process.

Nearly half of the organizations invited to workshops were US-based, while European civil society representatives faced restricted participation. The tech giants, for their part, justified their push to soften the code by warning of regulatory overreach and the stifling of innovation — a framing that has heightened concerns about industry capture of the drafting process.

Ongoing Engagement from US Companies

Despite the volatility of the political landscape, US technology companies remain actively engaged in the development of the Code of Practice. Reports indicate no significant change in their attitude toward compliance following the change of administration in the United States. The voluntary code aims to help AI providers adhere to the AI Act, yet it has already missed its initial publication deadline.

With approximately 1,000 participants involved in the drafting process, the EU Commission aims to finalize the code by August 2, 2025, when relevant rules come into force.

Challenges in Enforcement

With the AI Act approaching its enforcement deadline, concerns have been raised regarding a lack of funding and expertise to effectively implement regulations. European Parliament digital policy advisor Kai Zenner highlighted that many member states are facing financial constraints, making it difficult to enforce the AI Act adequately.

As member states struggle with budget crises, the prioritization of AI innovation over regulation has become a significant concern. Zenner expressed disappointment with the final version of the act, noting that it is vague and contradicts itself, potentially impairing its effectiveness.

Member States’ Compliance Efforts

Data from the European Commission reveals that both Italy and Hungary have failed to appoint the necessary bodies to ensure fundamental rights protection in AI deployment, missing the November 2024 deadline. The Commission is currently working with these states to fulfill their obligations under the AI Act.

Different member states exhibit varying degrees of readiness, with Bulgaria appointing nine authorities and Portugal designating fourteen, while Slovakia has only two.

Comparative Frameworks: Korea vs EU

A comparative analysis of the AI frameworks of South Korea and the EU reveals both similarities and differences. Both incorporate tiered risk classification and transparency requirements; however, South Korea's approach features a simpler risk categorization and lower financial penalties.

Understanding these nuanced differences is essential for companies navigating compliance in multiple jurisdictions, especially as the global landscape of AI regulation continues to evolve.
