EU AI Office Seeks Contractors for Compliance Monitoring

The EU AI Act and Its Implications

The EU AI Act is a significant legislative framework aimed at regulating artificial intelligence within the European Union. This act is designed to ensure that AI technologies are safe, ethical, and respect fundamental rights.

AI Office AI Safety Tender

Recently, the AI Office announced a tender worth €9,080,000 for third-party contractors to assist in monitoring compliance with the AI Act. The tender is divided into six lots; the first five address specific systemic risks associated with AI technologies:

  • CBRN (Chemical, Biological, Radiological, and Nuclear)
  • Cyber offence
  • Loss of control
  • Harmful manipulation
  • Sociotechnical risks

These lots will involve various activities such as risk modeling workshops, development of evaluation tools, and ongoing risk monitoring services. The sixth lot focuses on agentic evaluation interfaces, providing software and infrastructure to evaluate general-purpose AI across diverse benchmarks.

Influence of Big Tech on AI Regulations

According to an investigation by Corporate Europe Observatory, Big Tech companies have significantly influenced the weakening of the Code of Practice for general-purpose AI models, which is a crucial component of the AI Act. Despite concerns raised by smaller developers, major corporations like Google, Microsoft, and Amazon had privileged access to the drafting process.

Nearly half of the organizations invited to workshops were US-based, while European civil society representatives faced restricted participation. This imbalance has fueled concerns about industry capture of the drafting process, even as the tech giants themselves warned that the rules risked regulatory overreach and stifled innovation.

Ongoing Engagement from US Companies

Despite the volatile political landscape, US technology companies remain actively engaged in the development of the Code of Practice. Reports indicate no significant shift in their attitude toward compliance following the change of US administration. The voluntary code aims to help AI providers adhere to the AI Act, yet it has already missed its initial publication deadline.

With approximately 1,000 participants involved in the drafting process, the EU Commission aims to finalize the code by August 2, 2025, when relevant rules come into force.

Challenges in Enforcement

With the AI Act approaching its enforcement deadline, concerns have been raised regarding a lack of funding and expertise to effectively implement regulations. European Parliament digital policy advisor Kai Zenner highlighted that many member states are facing financial constraints, making it difficult to enforce the AI Act adequately.

Amid budget crises, many member states are prioritizing AI innovation over regulation, leaving enforcement at risk of being under-resourced. Zenner also expressed disappointment with the final version of the act, describing it as vague and internally contradictory, which could impair its effectiveness.

Member States’ Compliance Efforts

Data from the European Commission reveals that both Italy and Hungary have failed to appoint the necessary bodies to ensure fundamental rights protection in AI deployment, missing the November 2024 deadline. The Commission is currently working with these states to fulfill their obligations under the AI Act.

Readiness varies considerably across member states: Bulgaria has appointed nine authorities and Portugal fourteen, while Slovakia has designated only two.

Comparative Frameworks: Korea vs EU

In a comparative analysis, the AI frameworks of South Korea and the EU reveal both similarities and differences. Both frameworks incorporate tiered classification and transparency requirements; however, South Korea’s approach features simplified risk categorization and lower financial penalties.

Understanding these nuanced differences is essential for companies navigating compliance in multiple jurisdictions, especially as the global landscape of AI regulation continues to evolve.
