Europe’s Deregulatory Turn Puts the AI Act at Risk
The EU’s Artificial Intelligence Act entered into force in August 2024, following years of intense negotiations, numerous trilogues, and countless amendments. Barely a month later, it became a target for criticism: in his report on European competitiveness, former European Central Bank President Mario Draghi singled it out as a regulatory barrier holding back the tech sector.
The European Commission soon announced its intention to pursue five simplification initiatives, including an assessment of the EU digital rulebook to ensure it meets the needs of businesses, particularly small and medium-sized enterprises (SMEs). Among the significant targets for this assessment was the AI Act, alongside the General Data Protection Regulation (GDPR).
Concerns that the AI Act could be weakened deepened after statements made at the Paris AI Action Summit. Commission President Ursula von der Leyen promised to cut red tape, and shortly afterwards the proposed AI Liability Directive was withdrawn from the Commission’s 2025 work programme, prompting backlash from civil society and members of the European Parliament.
Despite some reassurance from the European Commission’s AI Continent Action Plan, which focused on providing guidance rather than reopening the AI Act, the writing was on the wall: the consultation accompanying the plan invited stakeholders to propose measures for simplifying the Act, signaling a potential shift towards deregulation.
Simplification Is a Dangerous Misnomer
The European Commission indicated that the primary focus of simplification would be on the reporting obligations under the AI Act. While the Commission claimed that changes would be targeted, there is a risk that even minor adjustments could have significant implications for AI safety. A key reporting obligation involves notifying authorities about serious incidents that may lead to harm or infringement of fundamental rights.
The AI Act’s reporting requirements are crucial for identifying and mitigating real harms caused by AI systems. It is therefore worrying that these obligations, already criticized as insufficient, are now slated for simplification. For instance, providers can opt out of high-risk classification without notifying regulators, a loophole that undermines accountability.
The Intersection with Existing Laws
Proponents of simplification often argue that the obligations of the AI Act overlap with those in other EU regulations, particularly the GDPR. However, this line of reasoning has been challenged, as the AI Act specifically addresses these intersections and clarifies how to comply with both frameworks.
Even where overlaps could be better managed, the AI Act already provides a pathway for doing so: guidelines on its relationship with other EU laws, which are currently under development. Calls for changes that go beyond such necessary clarifications, made under the guise of simplification, should be treated with caution.
Evidence-Based Rule-Making Must Remain Central
While the specifics of any proposed amendments to the AI Act remain undefined, the prospect of reopening a law whose provisions have barely begun to apply raises alarms, particularly in light of industry calls for deregulation. OpenAI’s recent EU Economic Blueprint criticized EU regulations as impediments to innovation and urged policymakers to decide which rules should be preserved and which discarded.
The EU AI Champions Initiative, which backs large-scale investment in European AI, echoed similar sentiments, arguing that unclear risk categorization under the AI Act creates market uncertainty. Such arguments put pressure on the European Commission to take a broad-brush approach in its review, raising the risk of substantial deregulation.
The first simplification initiative introduced this year offers a cautionary tale. The initial omnibus package, ostensibly aimed at reducing overlapping obligations, significantly diluted existing legal frameworks without adequate consultation. The process has drawn criticism from environmental NGOs, illustrating the risk of prioritizing industry interests over individual rights.
The European Commission must learn from these experiences and ensure that any review of the AI Act is evidence-based and subject to broad consultation. As threats to the AI Act mount, decision-makers should insist on robust assessments of any proposed amendments, protecting the core strengths of the EU digital rulebook and the fundamental rights the AI Act secures.