The EU AI Act: Balancing Innovation and Regulation

A Double-Edged Sword: Will the EU AI Act Stifle or Encourage Technological Innovation?

The February 2025 Artificial Intelligence Action Summit in Paris thrust AI into the spotlight once again, as global leaders attempted to strengthen international action in favour of a more sustainable AI serving collective progress and the general interest.

On 2 February 2025, the EU’s cautious regulatory approach was put into practice as the first provisions of the EU AI Act came into effect. Chief among them is an outright ban on AI systems that pose an ‘unacceptable risk’, regardless of whether those systems were placed on the market before or after that date.

Purpose and Scope of the EU AI Act

The purpose of the EU AI Act is to lay down a uniform legal framework for the development and use of AI systems, while ensuring a high level of protection of public interests, including health, safety, and fundamental rights.

The contrast between the EU and US approaches to AI is stark. Speaking at the summit in Paris, US Vice President JD Vance stated: ‘We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.’ With America arguably leading the AI revolution, such a position could have long-term repercussions.

The EU AI Act came into force on 1 August 2024, but its provisions apply in stages, with most coming into effect on 2 August 2026. The first stages will affect not only EU-based companies but also foreign businesses: Article 2 of the Act gives it extraterritorial effect wherever the output of an AI system is used within the EU.

Global Regulatory Benchmark

The EU AI Act could become a global regulatory benchmark, much as the GDPR did for data protection. Businesses and regulators outside the EU may seek to harmonize with the EU AI Act as a matter of best practice and risk management.

Risk-Based Approach of the EU AI Act

The EU AI Act takes a risk-based approach tailored to the system’s level of risk:

  • Unacceptable risk AI systems, which deploy manipulative, exploitative, social control, or surveillance practices, are now banned.
  • High-risk AI systems, which create a significant risk of harm to an individual’s health, safety, or fundamental rights, are regulated.
  • Limited risk AI systems, such as chatbots or deepfakes, are subject to transparency obligations.
  • Minimal risk AI systems, such as spam filters, are unregulated by the EU AI Act but remain subject to applicable regulations like the GDPR.
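The four-tier structure above can be summarized as a simple lookup. The sketch below is purely illustrative: the tier names and obligation summaries are paraphrased from the Act, and this is not a legal classification tool.

```python
# Illustrative sketch of the EU AI Act's four risk tiers and the broad
# obligations attached to each. Paraphrased for illustration only; actual
# classification depends on the Act's detailed criteria and annexes.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited (e.g. manipulative, exploitative, "
                           "social control, or surveillance practices)",
    RiskTier.HIGH: "Regulated (significant risk to health, safety, "
                   "or fundamental rights)",
    RiskTier.LIMITED: "Transparency obligations (e.g. chatbots, deepfakes)",
    RiskTier.MINIMAL: "Not regulated by the Act; other rules such as "
                      "the GDPR still apply (e.g. spam filters)",
}

for tier in RiskTier:
    print(f"{tier.value}: {OBLIGATIONS[tier]}")
```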

The European Commission published two sets of guidelines on AI system definition and on prohibited AI practices to increase legal clarity and ensure uniform application of the EU AI Act. However, these broad guidelines are ‘non-binding’, and the interpretation of the EU AI Act is a matter for the Court of Justice of the European Union.

Implications for Businesses

UK businesses trading into the EU, whether as providers, deployers, importers, distributors, or representatives of AI systems, should ensure that they are not using AI systems that provide social scoring of individuals or conduct untargeted scraping of facial images to populate facial recognition databases. These systems are now prohibited in the EU.

The penalties for non-compliance are substantial: infringements relating to prohibited AI can attract fines of up to €35 million or 7% of the offender’s global annual turnover, whichever is greater.
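The ‘whichever is greater’ rule above is simply the maximum of a fixed cap and a turnover percentage. The following is an illustrative sketch of that arithmetic only, with a hypothetical helper name; it is not legal advice on how fines are actually set.

```python
# Illustrative sketch: the maximum fine for prohibited-AI infringements is
# the greater of EUR 35 million and 7% of worldwide annual turnover.
# max_prohibited_ai_fine is a hypothetical helper name for this article.

def max_prohibited_ai_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_RATE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# For a company with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m cap:
print(max_prohibited_ai_fine(1_000_000_000))
# For a smaller company with EUR 100m turnover, the EUR 35m cap dominates:
print(max_prohibited_ai_fine(100_000_000))
```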

Exceptions and Support for Innovation

There are exceptions to the AI Act, particularly regarding AI systems developed for private use, scientific research, or those released under free and open-source licenses (except when they qualify as high-risk). Additionally, the AI Act supports innovation by enabling the creation of AI regulatory sandboxes, which provide a controlled environment for the development, training, and testing of innovative AI systems. Specific measures for small and medium-sized enterprises and startups were also implemented to help them enter the AI market and become competitors to established businesses.

The Future of AI in the EU

Will this be enough to make the EU an attractive destination for AI research and startups, or will these various provisions stifle innovation? A legal framework for the development and use of AI systems in accordance with EU values and fundamental rights may help increase users’ trust in AI, thereby boosting demand in this field. To drive innovation, French President Emmanuel Macron announced around €100 billion of AI-related investment in France during the recent summit in Paris.

However, no amount of incentives will relieve businesses of the burden of complying not only with the EU AI Act but also with other applicable European regulations. For instance, any AI system processing personal data must comply with the GDPR, and the French Data Protection Authority (CNIL) has already published recommendations for ensuring such compliance.

It remains to be seen whether the EU will emerge as a true AI innovator, but the race to lead the AI revolution is widely regarded as a competition between the US and China.
