EU AI Office Seeks Contractors for Compliance Monitoring

The EU AI Act and Its Implications

The EU AI Act is a landmark legislative framework for regulating artificial intelligence within the European Union. It is designed to ensure that AI technologies are safe, ethical, and respectful of fundamental rights.

AI Office AI Safety Tender

Recently, the AI Office announced a tender worth €9,080,000 for third-party contractors to assist in monitoring compliance with the AI Act. The tender is divided into six lots, five of which address specific systemic risks associated with AI technologies:

  • CBRN (Chemical, Biological, Radiological, and Nuclear)
  • Cyber offence
  • Loss of control
  • Harmful manipulation
  • Sociotechnical risks

These risk-focused lots involve activities such as risk modeling workshops, development of evaluation tools, and ongoing risk monitoring services. The sixth lot covers agentic evaluation interfaces: software and infrastructure for evaluating general-purpose AI models across diverse benchmarks, as sketched below.
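The tender summary does not spell out the required tooling, but broadly, an evaluation interface of this kind wraps heterogeneous benchmarks behind a common programmatic surface so that any general-purpose model can be scored consistently across them. The following Python sketch is purely illustrative; the class names, toy benchmarks, and accuracy-style scoring are assumptions made for the example, not part of the tender specification.

# Hypothetical sketch of a minimal evaluation interface for general-purpose AI.
# All names (Benchmark, EvalResult, run_suite) and the toy benchmarks are
# illustrative assumptions, not part of the AI Office tender specification.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalResult:
    benchmark: str
    score: float       # fraction of items the model answered as expected
    num_items: int

@dataclass
class Benchmark:
    name: str
    items: List[Dict[str, str]]  # each item: {"prompt": ..., "expected": ...}

    def run(self, model: Callable[[str], str]) -> EvalResult:
        """Query the model on every item and compute a simple accuracy score."""
        correct = sum(
            1 for item in self.items
            if model(item["prompt"]).strip() == item["expected"]
        )
        return EvalResult(self.name, correct / len(self.items), len(self.items))

def run_suite(model: Callable[[str], str], suite: List[Benchmark]) -> List[EvalResult]:
    """Evaluate one model across a diverse set of benchmarks behind a common API."""
    return [benchmark.run(model) for benchmark in suite]

if __name__ == "__main__":
    # A toy "model" and two toy benchmarks stand in for real general-purpose AI
    # systems and real risk-focused evaluations (e.g. cyber offence tests).
    def toy_model(prompt: str) -> str:
        return "4" if "2 + 2" in prompt else "unknown"

    suite = [
        Benchmark("arithmetic", [{"prompt": "What is 2 + 2?", "expected": "4"}]),
        Benchmark("refusal", [{"prompt": "Explain how to hack X.", "expected": "refuse"}]),
    ]
    for result in run_suite(toy_model, suite):
        print(f"{result.benchmark}: {result.score:.2f} over {result.num_items} item(s)")

Real infrastructure of the kind described in the tender would also need to handle agentic, multi-step tasks, sandboxed execution, and standardized reporting across the risk areas listed above, all of which are beyond this toy sketch.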

Influence of Big Tech on AI Regulations

According to an investigation by Corporate Europe Observatory, Big Tech companies have significantly influenced the weakening of the Code of Practice for general-purpose AI models, which is a crucial component of the AI Act. Despite concerns raised by smaller developers, major corporations like Google, Microsoft, and Amazon had privileged access to the drafting process.

Nearly half of the organizations invited to workshops were from the US, while European civil society representatives faced restricted participation. This imbalance has fueled concerns that the drafting process is being shaped in industry's interest, even as the tech giants themselves frame the regulation as overreach that stifles innovation.

Ongoing Engagement from US Companies

Despite the volatility of the political landscape, US technology companies remain actively engaged in the development of the Code of Practice. Reports indicate no significant shift in their approach to compliance following the change of administration in the United States. The voluntary code is intended to help AI providers adhere to the AI Act, yet it has already missed its initial publication deadline.

With approximately 1,000 participants involved in the drafting process, the EU Commission aims to finalize the code by August 2, 2025, when relevant rules come into force.

Challenges in Enforcement

With the AI Act approaching its enforcement deadline, concerns have been raised regarding a lack of funding and expertise to effectively implement regulations. European Parliament digital policy advisor Kai Zenner highlighted that many member states are facing financial constraints, making it difficult to enforce the AI Act adequately.

As member states grapple with budget crises, there is growing concern that governments will prioritize AI innovation over enforcement. Zenner also expressed disappointment with the final version of the act, noting that it is vague and self-contradictory, which could impair its effectiveness.

Member States’ Compliance Efforts

Data from the European Commission reveals that both Italy and Hungary have failed to appoint the necessary bodies to ensure fundamental rights protection in AI deployment, missing the November 2024 deadline. The Commission is currently working with these states to fulfill their obligations under the AI Act.

Member states show varying degrees of readiness: Bulgaria has appointed nine authorities and Portugal fourteen, while Slovakia has designated only two.

Comparative Frameworks: Korea vs EU

A comparative analysis of the South Korean and EU AI frameworks reveals both similarities and differences. Both incorporate tiered risk classification and transparency requirements, but South Korea's approach features simpler risk categorization and lower financial penalties.

Understanding these nuanced differences is essential for companies navigating compliance in multiple jurisdictions, especially as the global landscape of AI regulation continues to evolve.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...