US Pushes Back Against EU AI Regulations, Leaving Enterprises to Set Their Own Standards

US Seeks to Eliminate the EU AI Act’s Code of Practice

The ongoing debate over the regulation of artificial intelligence (AI) has intensified, particularly around the European Union (EU) AI Act. Despite the Act's intention to foster a more transparent and copyright-conscious AI development landscape, critics argue that its accompanying rulebook may hinder innovation and impose excessive compliance burdens on enterprises.

Context of the EU AI Act

The EU AI Act aims to establish a comprehensive framework for the deployment of general-purpose AI (GPAI) models, especially those that pose systemic risks. As stakeholders work on drafting the code of practice, the US government has voiced concerns over its implementation, with reports indicating that President Donald Trump is pressuring European regulators to abandon the proposed rulebook.

Critics from the US assert that the code of practice could stifle innovation and extend the reach of the existing AI law by introducing new, unnecessary compliance requirements. The US Mission to the EU has actively reached out to European authorities to oppose the rulebook in its current form, emphasizing the potentially burdensome nature of its obligations.

Shifting Responsibilities in AI Compliance

The European Commission has described the code of practice as a vital tool for AI providers aiming to demonstrate compliance with the EU AI Act. Although voluntary, the code is intended to assist organizations in meeting regulations related to transparency, copyright, and risk mitigation.

The drafting process involves a diverse group of stakeholders, including AI model providers, industry organizations, and civil society representatives, overseen by the European AI Office. The deadline for the completion of this code is set for the end of April, with the final version scheduled for presentation to EU representatives in May.

Implications for Enterprises

As the regulatory landscape evolves, the responsibility for ensuring responsible AI practices is shifting from vendors to the organizations deploying AI technologies. Experts indicate that any business operating within Europe must develop its own AI risk playbook, which should include measures such as privacy impact assessments, provenance logs, and red-team testing. This proactive approach is essential to mitigating contractual, regulatory, and reputational risks.
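To make one of these measures concrete, here is a minimal sketch of what a provenance-log entry for an AI input dataset might look like. All field names and the function itself are illustrative assumptions, not requirements drawn from the EU AI Act or any official guidance:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_provenance(dataset_path: str, model_name: str, purpose: str) -> dict:
    """Record a minimal, auditable provenance entry for an AI data input.

    Field names are hypothetical examples, not mandated by any regulation.
    """
    # Hash the file contents so auditors can later verify exactly which
    # data was used, even if the file is moved or renamed.
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_path,
        "sha256": digest,
        "model": model_name,
        "purpose": purpose,  # documented intended use, e.g. "fine-tuning"
    }
    # Append-only JSON lines keep a tamper-evident audit trail.
    with open("provenance.log", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

In practice such entries would feed into a broader audit trail alongside privacy impact assessments and red-team findings; the point is simply that "provenance logging" can start as a lightweight, append-only record rather than a heavyweight platform.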

Non-compliance Risks

The consequences of non-compliance with the EU AI Act could be severe, including fines of up to 7% of global annual revenue. This underlines the importance of organizations adapting to the evolving compliance landscape, regardless of the eventual outcome of the EU's deliberations on the code of practice.

Potential for a Lighter Regulatory Touch

If the US administration’s approach to AI legislation gains traction globally, it could result in a more lenient regulatory environment with diminished federal oversight. Recent actions, including Executive Order 14179, indicate a shift towards reducing barriers to American leadership in AI, with new guidelines emphasizing economic competitiveness over stringent regulatory measures.

As the landscape of AI regulation continues to change, both domestic and international stakeholders must navigate the complexities of compliance while fostering innovation in AI technologies.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...