EU AI Act’s Burdensome Regulations Could Impair AI Innovation
Last year, the EU established the world's first comprehensive artificial intelligence regulatory framework. While only a few of its provisions have gone into effect, the EU AI Act has already proven to be a blueprint for hindering AI development.
The EU has now stunted its own AI development and cemented its position behind the US and China. Instead of focusing on AI investment and growth, the EU jumped straight to regulation, an ill-advised move in a nascent and rapidly evolving industry.
Comparative Approaches
In stark contrast, the US has emphasized substantial AI investment and avoided stringent regulation, an approach that has yielded four of the world's leading AI models, while the EU's only competitive large language model developer is Mistral AI.
The EU AI Act creates a risk-based framework that regulates AI systems based on their perceived level of risk. The higher the perceived risk, the stricter the regulatory requirements for those involved in the development, operation, and deployment of the AI system.
Risk Categories
Relying on vague definitions, the act creates four different risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
While AI systems posing unacceptable risk are banned, high-risk AI systems face the strictest regulations among those AI systems permitted in the EU market.
High-risk AI systems are those that could cause significant harm to the public if misused. This category includes both safety-related components in already regulated products and stand-alone AI systems that could harm public health, safety, fundamental rights, or the environment. Examples include AI-powered medical devices, vehicles, exam scoring in education, resume-screening software, and AI tools used to inform court rulings.
Regulatory Requirements
Before a high-risk AI system can enter the EU market, it must satisfy strict requirements. These include pre-deployment risk assessments and mitigation measures, high-quality datasets to minimize the risk of discriminatory outcomes, detailed logs to track decision-making, documentation of system functionality, and human oversight.
These burdensome regulations put AI companies at a competitive disadvantage by driving up compliance costs, delaying product launches, and imposing requirements that are often impractical or impossible to meet. For instance, Article 10 of the EU AI Act mandates that training, validation, and testing datasets be “relevant, sufficiently representative, and to the best extent possible, free of errors and complete” while also possessing “appropriate statistical properties” for their intended users.
This requirement is not only vague and subjective but also severely restricts the data available for AI development, limiting innovation and scalability. Furthermore, the act's overarching goal of eliminating bias is technologically unrealistic; even the most advanced AI companies have yet to solve this challenge.
Challenges of Compliance
The requirement for detailed logging and traceability also creates significant challenges, driving up data storage costs while increasing privacy and security risks. Extensive logs expose sensitive user data to breaches, and tracking massive real-time data flows may be impractical for resource-strapped companies. Additionally, the inherent opacity of large language models makes full traceability highly complex and costly, further burdening AI developers with compliance requirements.
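To make the storage and privacy tradeoff concrete, here is a minimal sketch of what per-request audit logging for a large language model service might look like. The function name, record schema, and traffic figures are hypothetical illustrations, not requirements drawn from the act:

```python
import json
import time
import uuid

def log_inference(log_file, user_id, prompt, model_version, output):
    """Append one audit record per model call (hypothetical schema).

    Storing full prompts and outputs is what makes decisions traceable,
    but it also retains sensitive user text and grows linearly with
    traffic -- the cost and privacy problem described above.
    """
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,            # personal data now sits in the log
        "model_version": model_version,
        "prompt": prompt,              # full input, kept for traceability
        "output": output,              # full output, likewise
    }
    log_file.write(json.dumps(record) + "\n")

# At roughly 2 KB per record, a service handling 10M requests per day
# accrues about 20 GB of sensitive logs daily -- over 7 TB a year.
```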
In a rapidly evolving industry, these regulatory burdens put EU AI companies at a significant competitive disadvantage compared to markets with fewer restrictions.
Broad Definitions and Implications
This competitive disadvantage is further exacerbated by the act’s overly broad definition of “AI systems,” which includes any machine-based system that operates with varying levels of autonomy, may adapt after deployment, and is designed to achieve explicit or implicit objectives by processing inputs to generate outputs—such as predictions, content, recommendations, or decisions—that can influence physical or virtual environments.
This broad definition could sweep in virtually any software, including basic automation tools, traditional algorithms, and long-used statistical models.
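To see how far that definition could reach, consider ordinary least-squares regression, a technique that long predates modern AI. The sketch below is illustrative, not a legal analysis:

```python
import numpy as np

# Ordinary least-squares: a statistical method decades older than the AI boom.
hours_studied = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
exam_scores = np.array([52.0, 61.0, 70.0, 78.0, 90.0])

# Fit a line: score = slope * hours + intercept.
slope, intercept = np.polyfit(hours_studied, exam_scores, 1)

# This "system" processes inputs to generate outputs (predictions) that
# can influence decisions -- arguably satisfying every element of the
# act's definition despite being basic statistics.
predicted = slope * 6.0 + intercept
print(f"Predicted score for 6 hours of study: {predicted:.1f}")
```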
Moreover, the EU AI Act imposes regulatory requirements and obligations on stakeholders at every stage of AI development and deployment, including providers, deployers, importers, distributors, and manufacturers of AI systems.
Failure to comply with the EU AI Act can result in fines of up to €35 million or 7% of global annual turnover, whichever is greater.
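The "whichever is greater" structure means exposure scales with company size. A back-of-the-envelope sketch, using a hypothetical turnover figure, makes the stakes concrete:

```python
def max_fine(global_annual_turnover_eur: float) -> float:
    """Headline penalty tier under the act: EUR 35 million or
    7% of global annual turnover, whichever is greater."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A firm with EUR 10 billion in turnover faces exposure of EUR 700 million,
# twenty times the fixed cap.
print(f"{max_fine(10_000_000_000):,.0f}")  # 700,000,000
```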
Stifling Innovation
These onerous regulations and fines may ultimately stifle innovation and drive AI companies out of the EU, further widening Europe's technological gap with the US and China.
Recently, the EU issued voluntary "code of practice" rules under the EU AI Act, which met opposition from Google and Meta, producers of two of the world's leading AI models. A Google public affairs official called the code a "step in the wrong direction," and a Meta official said it imposes "unworkable and technically unfeasible requirements."
OpenAI chief executive officer Sam Altman also recently warned that the EU's AI regulations have set the bloc back in AI development. That warning came almost two years after Altman suggested ChatGPT might need to exit the EU market because of the then-proposed EU AI Act.
A Cautionary Tale
As the EU regulates its AI industry into irrelevance, the EU AI Act should serve as a cautionary tale for lawmakers across the US: Risk-based regulatory frameworks will stifle AI innovation.
So far, Colorado is the only state to adopt an EU-style, risk-based regulatory framework, in the Colorado Artificial Intelligence Act, which is set to take effect on Feb. 1, 2026. Yet just nine months after the act's passage, the Colorado Legislature's Artificial Intelligence Impact Task Force has already proposed significant amendments, including redefining operative terms, reworking exemption lists, and making other substantial revisions, highlighting how unstable such regulatory approaches are when applied to a rapidly developing technology.
Despite Colorado’s early struggles, a Texas state legislator recently introduced an EU-style AI bill, following the same misguided path.
If the US wants to remain a leader in AI development, it must reject the path of overregulation.