EU Lawmakers Challenge Deregulation of AI Act Amid Concerns for Safety

The European Commission is facing significant opposition from EU lawmakers over its proposal to make more of the AI Act's requirements voluntary. The move is seen as potentially benefiting large AI developers such as Google and OpenAI, whose compliance burdens would shrink as a result.

Key Takeaways

  • The European Commission is reportedly considering making parts of the AI Act voluntary.
  • This proposal could favor major AI developers.
  • However, it is encountering pushback from the EU Parliament.

As the European Commission explores deregulation as a means to boost AI investment, its intention to dilute the requirements of the landmark AI Act has sparked intense debate. Lawmakers in the EU Parliament firmly oppose the proposal, emphasizing the importance of maintaining stringent standards in AI development.

Deregulation on the Agenda

Since the AI Act was enacted in 2024, the political landscape has shifted dramatically. The Act was originally designed with safety and responsibility in mind, but calls for a more laissez-faire approach have arisen from concerns about European competitiveness. Proponents of deregulation argue that easing restrictions could foster innovation and economic growth.

In a significant development, the Commission removed a proposed AI liability directive from its 2025 work program, citing “no foreseeable agreement” on the legislation. This decision aligns with the Commission’s broader agenda to cut bureaucratic red tape and streamline regulations impacting businesses.

U.S. Pressure

The EU’s push for deregulation is not occurring in isolation; it is influenced by external pressures, notably from the U.S. administration. A recent White House memorandum explicitly referenced the EU’s Digital Markets Act and Digital Services Act, indicating a growing apprehension that the AI Act could pose a threat to American businesses.

In this geopolitical context, U.S. tech giants like Google and OpenAI stand to gain significantly from a relaxation of the AI Act. If compliance requirements are made voluntary, these companies could operate under a much lighter regulatory framework.

MEPs Warn Against Weakening the AI Act

Members of the European Parliament (MEPs) who were instrumental in negotiating the AI Act have voiced strong objections to the Commission’s plans. They argue that weakening the Act would be “dangerous” and “undemocratic.” A letter drafted by these MEPs warns that failing to hold AI developers to high standards of safety and security could have severe repercussions for Europe’s economy and democracy.

The hierarchy of EU lawmaking complicates the process, as MEPs have limited power to block the Commission’s proposed changes. However, member states retain significant influence and may push back against deregulation efforts.

Notably, the letter opposing the weakening of the AI Act garnered support from Carme Artigas, a key negotiator on behalf of member states during the Act's drafting. This coalition may provide a counterbalance to the Commission's efforts, even though countries like France have historically pushed for lighter AI rules.

As the debate continues, the future of the AI Act remains uncertain, with potential implications for the landscape of AI development and regulation in Europe.
