EU AI Office Seeks Contractors for Compliance Monitoring

The EU AI Act and Its Implications

The EU AI Act is a landmark legislative framework regulating artificial intelligence across the European Union, designed to ensure that AI technologies are safe, ethical, and respect fundamental rights.

AI Office AI Safety Tender

Recently, the AI Office announced a tender worth €9,080,000 for third-party contractors to assist in monitoring compliance with the AI Act. The tender is divided into six lots, five of which address specific systemic risks associated with AI technologies:

  • CBRN (Chemical, Biological, Radiological, and Nuclear)
  • Cyber offence
  • Loss of control
  • Harmful manipulation
  • Sociotechnical risks

These lots will involve various activities such as risk modeling workshops, development of evaluation tools, and ongoing risk monitoring services. The sixth lot focuses on agentic evaluation interfaces, providing software and infrastructure to evaluate general-purpose AI across diverse benchmarks.

Influence of Big Tech on AI Regulations

According to an investigation by Corporate Europe Observatory, Big Tech companies have significantly influenced the weakening of the Code of Practice for general-purpose AI models, which is a crucial component of the AI Act. Despite concerns raised by smaller developers, major corporations like Google, Microsoft, and Amazon had privileged access to the drafting process.

Nearly half of the organizations invited to workshops were from the US, while European civil society representatives faced restricted participation. This imbalance has fueled concerns about industry capture of the drafting process, even as the tech giants themselves argued that the rules risked regulatory overreach and would stifle innovation.

Ongoing Engagement from US Companies

Despite the volatility of the political landscape, US technology companies remain actively engaged in the development of the Code of Practice, and reports indicate no significant change in their attitude toward compliance following the change in American administration. The voluntary code is meant to help AI providers adhere to the AI Act, yet it has already missed its initial publication deadline.

With approximately 1,000 participants involved in the drafting process, the EU Commission aims to finalize the code by August 2, 2025, when relevant rules come into force.

Challenges in Enforcement

With the AI Act approaching its enforcement deadline, concerns have been raised regarding a lack of funding and expertise to effectively implement regulations. European Parliament digital policy advisor Kai Zenner highlighted that many member states are facing financial constraints, making it difficult to enforce the AI Act adequately.

As member states struggle with budget crises, the prioritization of AI innovation over regulation has become a significant concern. Zenner expressed disappointment with the final version of the act, noting that it is vague and contradicts itself, potentially impairing its effectiveness.

Member States’ Compliance Efforts

Data from the European Commission reveals that both Italy and Hungary have failed to appoint the necessary bodies to ensure fundamental rights protection in AI deployment, missing the November 2024 deadline. The Commission is currently working with these states to fulfill their obligations under the AI Act.

Member states exhibit varying degrees of readiness: Bulgaria has appointed nine authorities and Portugal fourteen, while Slovakia has designated only two.

Comparative Frameworks: Korea vs EU

In a comparative analysis, the AI frameworks of South Korea and the EU reveal both similarities and differences. Both frameworks incorporate tiered classification and transparency requirements; however, South Korea’s approach features simplified risk categorization and lower financial penalties.

Understanding these nuanced differences is essential for companies navigating compliance in multiple jurisdictions, especially as the global landscape of AI regulation continues to evolve.

More Insights

Microsoft Embraces EU AI Code While Meta Withdraws

Microsoft is expected to sign the European Union's code of practice for artificial intelligence, while Meta Platforms has declined to do so, citing legal uncertainties. The code aims to ensure...

Colorado’s Groundbreaking AI Law Sets New Compliance Standards

Analysts note that Colorado's upcoming AI law, which takes effect on February 1, 2026, is notable for its comprehensive requirements, mandating businesses to adopt risk management programs for...

Strengthening Ethical AI: Malaysia’s Action Plan for 2026-2030

Malaysia's upcoming AI Technology Action Plan 2026–2030 aims to enhance ethical safeguards and governance frameworks for artificial intelligence, as announced by Digital Minister Gobind Singh Deo. The...

Simultaneous Strategies for AI Governance

The development of responsible Artificial Intelligence (AI) policies and overall AI strategies must occur simultaneously to ensure alignment with intended purposes and core values. Bhutan's unique...

Guidelines for AI Models with Systemic Risks Under EU Regulations

The European Commission has issued guidelines to assist AI models deemed to have systemic risks in complying with the EU's AI Act, which will take effect on August 2. These guidelines aim to clarify...

Kerala: Pioneering Ethical AI in Education and Public Services

Kerala is emerging as a global leader in ethical AI, particularly in education and public services, by implementing a multi-pronged strategy that emphasizes government vision, academic rigor, and...

States Lead the Charge in AI Regulation

States across the U.S. are rapidly enacting their own AI regulations following the removal of a federal prohibition, leading to a fragmented landscape of laws that businesses must navigate. Key states...

AI Compliance: Harnessing Benefits While Mitigating Risks

AI is transforming compliance functions, enhancing detection capabilities and automating tasks, but also poses significant risks that organizations must manage. To deploy AI responsibly, compliance...
