EU AI Act: A Barrier to Innovation

EU AI Act’s Burdensome Regulations Could Impair AI Innovation

The EU established the world’s first comprehensive artificial intelligence regulatory framework last year. Although only a few of its provisions have taken effect, the EU AI Act has already become a blueprint for how regulation can hinder AI development.

The EU has now stunted its own AI development and cemented its position behind the US and China. Instead of focusing on AI investment and growth, the EU jumped straight to regulation, an ill-advised move in a nascent and rapidly evolving industry.

Comparative Approaches

In stark contrast, the US has emphasized substantial AI investment and avoided stringent regulation, an approach that has produced four of the world’s leading AI models; the EU, meanwhile, fields only one competitive large language model developer, Mistral.

The EU AI Act creates a risk-based framework that regulates AI systems based on their perceived level of risk. The higher the perceived risk, the stricter the regulatory requirements for those involved in the development, operation, and deployment of the AI system.

Risk Categories

Relying on vague definitions, the act creates four different risk categories: unacceptable risk, high risk, limited risk, and minimal risk.

While AI systems posing unacceptable risk are banned, high-risk AI systems face the strictest regulations among those AI systems permitted in the EU market.

High-risk AI systems could theoretically cause significant harm to the public if misused. This category includes both safety-related components in already regulated products and stand-alone AI systems that could harm public health, safety, fundamental rights, or the environment. Examples include AI-powered medical devices and vehicles, exam-scoring systems in education, resume-screening software, and AI tools used to inform court rulings.
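To make the tiering logic concrete, the Python sketch below encodes the four categories alongside some illustrative use-case mappings. The mappings are hypothetical shorthand for the examples above; under the act itself, classification turns on legal tests in the statute and its annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers created by the EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict pre-market requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mappings for illustration only; actual classification
# depends on the act's legal definitions, not a lookup table.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI-powered medical device": RiskTier.HIGH,
    "resume-screening software": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```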

Regulatory Requirements

Before a high-risk AI system can enter the EU market, it must satisfy strict requirements. This includes implementing pre-deployment risk assessments and mitigation measures, ensuring high-quality datasets to minimize the risk of discriminatory outcomes, maintaining detailed logs to track decision-making, creating documentation of system functionality, and providing human oversight.

These burdensome regulations put AI companies at a competitive disadvantage by driving up compliance costs, delaying product launches, and imposing requirements that are often impractical or impossible to meet. For instance, Article 10 of the EU AI Act mandates that training, validation, and testing datasets be “relevant, sufficiently representative, and to the best extent possible, free of errors and complete” while also possessing “appropriate statistical properties” for their intended users.

This requirement is not only vague and subjective but also severely restricts the data available for AI development, limiting innovation and scalability. Furthermore, the act’s overarching goal of eliminating bias is technologically unrealistic: even the most advanced AI companies have yet to solve this challenge.
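The gap between what can be automated and what Article 10 demands is easy to see in practice. The sketch below, using a hypothetical hiring dataset with invented column names, computes the kind of mechanical proxies a developer could actually check; none of them establish that data is “free of errors and complete” or “sufficiently representative,” standards the act leaves undefined.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Compute mechanical proxies for Article 10-style data quality.

    Missingness, duplicates, and label balance are easy to automate,
    but none of these metrics prove a dataset is 'free of errors and
    complete' or 'sufficiently representative' -- judgment calls the
    act does not operationalize.
    """
    return {
        "rows": len(df),
        "missing_fraction": float(df.isna().mean().mean()),
        "duplicate_rows": int(df.duplicated().sum()),
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical dataset for a resume-screening system.
df = pd.DataFrame({
    "years_experience": [2, 5, None, 10, 3],
    "hired": [0, 1, 0, 1, 0],
})
print(basic_data_quality_report(df, label_col="hired"))
```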

Challenges of Compliance

The requirement for detailed logging and traceability also creates significant challenges, driving up data storage costs while increasing privacy and security risks. Extensive logs expose sensitive user data to breaches, and tracking massive real-time data flows may be impractical for resource-strapped companies. Additionally, the inherent opacity of large language models makes full traceability highly complex and costly, further burdening AI developers with compliance requirements.
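As a rough illustration of what per-decision logging entails, consider the minimal Python sketch below. The function name, record fields, and resume-screening example are all hypothetical, but even this stripped-down audit record grows linearly with request volume and retains exactly the kind of sensitive inputs that widen a company’s breach surface.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_audit")
logging.basicConfig(level=logging.INFO)

def log_decision(model_version: str, inputs: dict, output: str) -> str:
    """Append one structured audit record per model decision.

    Storage grows with every request, and the raw inputs retained for
    traceability are precisely the sensitive user data that logging
    exposes to breaches.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,  # sensitive user data kept for traceability
        "output": output,
    }
    logger.info(json.dumps(record))
    return record["event_id"]

# Hypothetical resume-screening decision.
log_decision("screener-v2.1", {"applicant_id": "A-1042", "score": 0.34}, "reject")
```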

In a rapidly evolving industry, these regulatory burdens put EU AI companies at a significant competitive disadvantage compared to markets with fewer restrictions.

Broad Definitions and Implications

This competitive disadvantage is further exacerbated by the act’s overly broad definition of “AI systems,” which includes any machine-based system that operates with varying levels of autonomy, may adapt after deployment, and is designed to achieve explicit or implicit objectives by processing inputs to generate outputs—such as predictions, content, recommendations, or decisions—that can influence physical or virtual environments.

This broad definition could sweep in virtually any software, including basic automation tools, traditional algorithms, and long-used statistical models.
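To illustrate how far the definition can reach, the sketch below fits an ordinary least-squares regression, a statistical technique in routine use long before the act. Whether it would legally qualify is a question for regulators, but on a plain reading it is machine-based, operates with some autonomy once deployed, pursues an explicit objective, and generates output predictions from inputs.

```python
import numpy as np

# Ordinary least-squares regression: machine-based, runs with some
# autonomy once deployed, pursues an explicit objective (minimizing
# squared error), and maps inputs to output predictions -- arguably
# ticking every box in the act's definition of an "AI system".
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # intercept + feature
y = np.array([2.0, 4.1, 5.9])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
prediction = np.array([1.0, 4.0]) @ coef  # an "output" influencing its environment
print(f"coefficients={coef}, prediction={prediction:.2f}")
```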

Moreover, the EU AI Act imposes regulatory requirements and obligations on stakeholders at every stage of AI development and deployment, including providers, deployers, importers, distributors, and manufacturers of AI systems.

Failure to comply with the EU AI Act can result in fines of up to €35 million (about $36 million) or 7% of global annual turnover, whichever is higher.
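For a sense of scale, here is a minimal arithmetic sketch of that penalty ceiling; the turnover figure is hypothetical.

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling on fines for the most serious violations:
    the greater of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a hypothetical firm with EUR 10 billion in turnover,
# the 7% prong dominates: the cap is EUR 700 million.
print(f"EUR {max_penalty_eur(10_000_000_000):,.0f}")
```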

Stifling Innovation

These onerous regulations and fines may ultimately stifle innovation and lead to AI flight outside the EU, further widening the technological gap between Europe, the US, and China.

Recently, the EU issued a voluntary “code of practice” as part of the EU AI Act, which was met with opposition from Google and Meta, producers of two of the world’s leading AI models. A Google public affairs official called the code a “step in the wrong direction,” and a Meta official said it imposes “unworkable and technically unfeasible requirements.”

OpenAI chief executive officer Sam Altman also recently warned that the EU’s AI regulations have set the bloc back in AI development. The warning comes almost two years after Altman said ChatGPT might need to exit the EU market over the then-proposed EU AI Act.

A Cautionary Tale

As the EU regulates its AI industry into irrelevance, the EU AI Act should serve as a cautionary tale for lawmakers across the US: Risk-based regulatory frameworks will stifle AI innovation.

So far, Colorado is the only state to adopt an EU-style, risk-based regulatory framework in the Colorado Artificial Intelligence Act, which is set to take effect on Feb. 1, 2026. Yet just nine months after the act’s passage, the Colorado Legislature’s Artificial Intelligence Impact Task Force has already proposed significant amendments, including redefining operative terms, reworking exemption lists, and making other substantial revisions, highlighting the instability of such regulatory approaches when applied to a rapidly developing technology.

Despite Colorado’s early struggles, a Texas state legislator recently introduced an EU-style AI bill, following the same misguided path.

If the US wants to remain a leader in AI development, it must reject the path of overregulation.
