EU Lawmakers Challenge Deregulation of AI Act Amid Concerns for Safety

The European Commission is facing significant opposition from EU lawmakers over its proposal to make more of the AI Act's requirements voluntary. The move is seen as a potential boon to large AI developers such as Google and OpenAI, whose compliance burdens would shrink.

Key Takeaways

  • The European Commission is reportedly considering making parts of the AI Act voluntary.
  • This proposal could favor major AI developers.
  • However, it is encountering pushback from the EU Parliament.

As the European Commission explores deregulation as a means to enhance AI investment, the intention to dilute the requirements of the landmark AI Act has sparked intense debate. Lawmakers from the EU Parliament are firmly opposing this proposal, emphasizing the importance of maintaining stringent standards in AI development.

Deregulation on the Agenda

Since the AI Act was enacted in 2024, the political landscape has shifted dramatically. The Act was originally designed with safety and accountability in mind, but calls for a more laissez-faire approach have arisen from concerns about European competitiveness. Proponents of deregulation argue that easing restrictions could foster innovation and economic growth.

In a significant development, the Commission removed a proposed AI liability directive from its 2025 work program, citing “no foreseeable agreement” on the legislation. This decision aligns with the Commission’s broader agenda to cut bureaucratic red tape and streamline regulations impacting businesses.

U.S. Pressure

The EU’s push for deregulation is not occurring in isolation; it is influenced by external pressures, notably from the U.S. administration. A recent White House memorandum explicitly referenced the EU’s Digital Markets Act and Digital Services Act, indicating a growing apprehension that the AI Act could pose a threat to American businesses.

In this geopolitical context, U.S. tech giants like Google and OpenAI stand to gain significantly from a relaxation of the AI Act. If compliance requirements are made voluntary, these companies could operate under a much lighter regulatory framework.

MEPs Warn Against Weakening the AI Act

Members of the European Parliament (MEPs) who were instrumental in negotiating the AI Act have voiced strong objections to the Commission’s plans. They argue that weakening the Act would be “dangerous” and “undemocratic.” A letter drafted by these MEPs warns that failing to hold AI developers to high standards of safety and security could have severe repercussions for Europe’s economy and democracy.

The structure of EU lawmaking complicates the picture: MEPs have limited power to block the Commission's proposed changes. Member states, however, retain significant influence and may push back against deregulation efforts.

Notably, the letter opposing the weakening of the AI Act garnered support from Carme Artigas, a key negotiator on behalf of member states during the Act’s drafting. This coalition may provide a counterbalance to the Commission’s efforts, particularly as countries like France have historically resisted stricter AI regulations.

As the debate continues, the future of the AI Act remains uncertain, with potential implications for the landscape of AI development and regulation in Europe.