The EU AI Act: Balancing Innovation and Regulation

A Double-Edged Sword: Will the EU AI Act Stifle or Encourage Technological Innovation?

The recent Paris summit of global leaders to discuss artificial intelligence has thrust AI into the spotlight once again. The February 2025 Paris Artificial Intelligence Action Summit sought to strengthen international action in favor of a more sustainable AI serving collective progress and the general interest.

On 2 February 2025, the first provisions of the EU AI Act began to apply, demonstrating the EU’s cautious regulatory approach. These provisions impose an outright ban on AI that poses an ‘unacceptable risk’, regardless of whether such systems were placed on the market before or after that date.

Purpose and Scope of the EU AI Act

The purpose of the EU AI Act is to lay down a uniform legal framework for the development and use of AI systems, while ensuring a high level of protection of public interests, including health, safety, and fundamental rights.

The contrast between the EU and US approaches to AI is stark. Speaking at the summit in Paris, US Vice President JD Vance stated: ‘We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.’ With America arguably leading the AI revolution, such a position could have long-term repercussions.

The EU AI Act came into force on 1 August 2024, but its provisions apply in stages: most will take effect on 2 August 2026. The implementation of the first stages of the EU AI Act will affect not only EU-based companies but also foreign businesses, since Article 2 of the Act gives it extraterritorial effect where the output of an AI system is used within the EU.

Global Regulatory Benchmark

The EU AI Act could become a global regulatory benchmark, similar to the GDPR. Businesses and regulators outside the EU may seek to harmonize with the EU AI Act as a matter of best practice and risk management.

Risk-Based Approach of the EU AI Act

The EU AI Act takes a risk-based approach tailored to the system’s level of risk:

  • Unacceptable risk AI systems, which deploy manipulative, exploitative, social control, or surveillance practices, are now banned.
  • High-risk AI systems, which create a significant risk of harm to an individual’s health, safety, or fundamental rights, are regulated.
  • Limited risk AI systems, such as chatbots or deepfakes, are subject to transparency obligations.
  • Minimal risk AI systems, such as spam filters, are unregulated by the EU AI Act but remain subject to applicable regulations like the GDPR.
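As a schematic summary only, not a compliance tool, the four tiers above can be sketched as a simple mapping; the tier names and example systems are drawn from the list above, while the enum itself is purely illustrative:

```python
from enum import Enum

class RiskTier(Enum):
    """Schematic summary of the EU AI Act's four risk tiers (illustrative only)."""
    UNACCEPTABLE = "banned outright (e.g. manipulative or social-control practices)"
    HIGH = "regulated (significant risk to health, safety, or fundamental rights)"
    LIMITED = "transparency obligations (e.g. chatbots, deepfakes)"
    MINIMAL = "outside the AI Act, though other rules such as the GDPR still apply"

# Print the tiered structure, from most to least regulated:
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```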

The European Commission published two sets of guidelines – on the definition of an AI system and on prohibited AI practices – to increase legal clarity and ensure uniform application of the EU AI Act. However, these broad guidelines are ‘non-binding’, and the interpretation of the EU AI Act ultimately rests with the Court of Justice of the European Union.

Implications for Businesses

UK businesses trading into the EU, whether as providers, deployers, importers, distributors, or representatives of AI systems, should ensure that they are not using AI systems that provide social scoring of individuals or conduct untargeted scraping of facial images to populate facial recognition databases. These systems are now prohibited in the EU.

The penalties for non-compliance are substantial, with infringements relating to prohibited AI potentially attracting 7% of the offender’s global annual turnover or €35 million – whichever is greater.
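As an arithmetic illustration only, not legal advice, the ‘whichever is greater’ rule can be sketched as a simple maximum; the €35 million floor and the 7% rate come from the text above, while the function name and the turnover figure are hypothetical:

```python
def prohibited_ai_fine_ceiling(global_annual_turnover_eur: int) -> int:
    """Maximum fine for prohibited-AI infringements under the EU AI Act:
    7% of global annual turnover or EUR 35 million, whichever is greater.
    Illustrative sketch only; uses integer euros to avoid rounding issues."""
    percentage_based = global_annual_turnover_eur * 7 // 100  # 7% of turnover
    return max(percentage_based, 35_000_000)

# Hypothetical company with EUR 1 billion in global annual turnover:
print(prohibited_ai_fine_ceiling(1_000_000_000))  # prints 70000000
```

For a smaller business whose 7% figure falls below €35 million, the fixed floor applies instead, which is why the rule bites hardest relative to size for smaller firms.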

Exceptions and Support for Innovation

There are exceptions to the AI Act, particularly regarding AI systems developed for private use, scientific research, or those released under free and open-source licenses (except when they qualify as high-risk). Additionally, the AI Act supports innovation by enabling the creation of AI regulatory sandboxes, which provide a controlled environment for the development, training, and testing of innovative AI systems. Specific measures for small and medium-sized enterprises and startups were also implemented to help them enter the AI market and become competitors to established businesses.

The Future of AI in the EU

Will this be enough to make the EU an attractive destination for AI research and startups, or will these various provisions stifle innovation? A legal framework for the development and use of AI systems in accordance with EU values and fundamental rights may help increase users’ trust in AI, thereby boosting demand in this field. To drive innovation, French President Emmanuel Macron announced around €100 billion in AI-related investments in France at the recent summit in Paris.

However, no incentive will relieve businesses of the burden of complying not only with the EU AI Act but also with other applicable European regulations. For instance, any AI system processing personal data must comply with the GDPR. In this respect, the French Data Protection Authority (CNIL) has already published recommendations for ensuring compliance with the GDPR.

It remains to be seen whether the EU will emerge as a true AI innovator, but the race to lead the AI revolution is widely regarded as a competition between the US and China.
