The EU AI Act and Its Implications
The EU AI Act is a significant legislative framework aimed at regulating artificial intelligence within the European Union. This act is designed to ensure that AI technologies are safe, ethical, and respect fundamental rights.
AI Office AI Safety Tender
Recently, the AI Office announced a tender worth €9,080,000 for third-party contractors to assist in monitoring compliance with the AI Act. The tender is divided into six lots; the first five each address a specific systemic risk associated with AI technologies:
- CBRN (Chemical, Biological, Radiological, and Nuclear)
- Cyber offence
- Loss of control
- Harmful manipulation
- Sociotechnical risks
These five lots will involve activities such as risk-modeling workshops, the development of evaluation tools, and ongoing risk-monitoring services. The sixth lot covers agentic evaluation interfaces, providing the software and infrastructure needed to evaluate general-purpose AI models across diverse benchmarks.
Influence of Big Tech on AI Regulations
According to an investigation by Corporate Europe Observatory, Big Tech companies have significantly influenced the weakening of the Code of Practice for general-purpose AI models, which is a crucial component of the AI Act. Despite concerns raised by smaller developers, major corporations like Google, Microsoft, and Amazon had privileged access to the drafting process.
Nearly half of the organizations invited to workshops were based in the US, while European civil society representatives faced restricted participation. This imbalance raises concerns about undue industry influence over the drafting process, even as the tech giants themselves framed their objections in terms of regulatory overreach and the stifling of innovation.
Ongoing Engagement from US Companies
Despite the volatility of the political landscape, US technology companies remain actively engaged in the development of the Code of Practice, and reports indicate no significant shift in their attitude toward compliance following the change in American administration. The voluntary code is intended to help AI providers adhere to the AI Act, yet it has already missed its initial publication deadline.
With approximately 1,000 participants involved in the drafting process, the EU Commission aims to finalize the code by August 2, 2025, when relevant rules come into force.
Challenges in Enforcement
With the AI Act approaching its enforcement deadline, observers have raised concerns that regulators lack the funding and expertise to implement the rules effectively. European Parliament digital policy advisor Kai Zenner highlighted that many member states face financial constraints that make adequate enforcement of the AI Act difficult.
As member states struggle with budget crises, there is growing concern that AI innovation will be prioritized over regulation. Zenner also expressed disappointment with the final version of the act, describing it as vague and internally contradictory in ways that could impair its effectiveness.
Member States’ Compliance Efforts
Data from the European Commission reveals that both Italy and Hungary have failed to appoint the necessary bodies to ensure fundamental rights protection in AI deployment, missing the November 2024 deadline. The Commission is currently working with these states to fulfill their obligations under the AI Act.
Member states exhibit varying degrees of readiness: Bulgaria has appointed nine authorities and Portugal fourteen, while Slovakia has designated only two.
Comparative Frameworks: Korea vs EU
A comparative analysis of the AI frameworks of South Korea and the EU reveals both similarities and differences. Both incorporate tiered risk classification and transparency requirements; South Korea's approach, however, features a simpler risk categorization and lower financial penalties.
Understanding these nuanced differences is essential for companies navigating compliance in multiple jurisdictions, especially as the global landscape of AI regulation continues to evolve.