EU AI Act Faces Challenges from DeepSeek’s Rise

EU AI Act Rules on GPAI Models Under DeepSeek Review

The mainstream emergence of the Chinese AI application DeepSeek is prompting EU policymakers to reevaluate the regulations outlined in the EU AI Act. This review could lead to significant changes, particularly concerning the threshold measures of computing power specified in the legislation, which may affect the regulation of other general-purpose AI (GPAI) models.

Understanding GPAI Models and Systemic Risk

GPAI models are versatile AI systems capable of performing a wide array of tasks and often serve as the foundation for other AI applications. Large language models (LLMs) are a prominent example of GPAI models.

Chapter V of the AI Act delineates specific rules for providers of GPAI models, with compliance expected to commence on August 2, 2025. The strictest regulations are reserved for those GPAI models classified as having systemic risk. For AI developers, determining whether a GPAI model falls into this category is crucial for understanding their obligations under the AI Act.

The concept of systemic risk is defined within the Act as a risk associated with the high-impact capabilities of general-purpose AI models, which can have significant repercussions on the EU market due to their extensive reach or the potential for negative effects on public health, safety, fundamental rights, or society at large.

Classification of GPAI Models with Systemic Risk

According to Article 51(1) of the Act, there are two criteria for classifying a GPAI model as one with systemic risk:

  • If the model possesses “high impact capabilities,” evaluated using appropriate technical tools and methodologies, including indicators and benchmarks.
  • If the European Commission determines that the model has equivalent impact or capabilities based on specific criteria, such as the number of parameters, data set quality, registered end-users, and computational resources used during training.

The Role of FLOPS in Evaluating GPAI Models

Floating-point operations (FLOPS) serve as a critical measure of computing power. Defined in the Act as any mathematical operation involving floating-point numbers, FLOPS play a pivotal role in determining whether a GPAI model is presumed to have high-impact capabilities. Specifically, Article 51(2) provides that a model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 FLOPS.

Providers of GPAI models are expected to be aware of when they exceed this threshold, as the training of general-purpose AI models necessitates substantial planning, including the allocation of compute resources.
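To make the presumption concrete, here is a minimal Python sketch that estimates training compute using the commonly cited rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformer models, and compares the result against the 10^25 FLOPS threshold. The function names, the example model size, and the token count are illustrative assumptions; the approximation is a community heuristic, not a methodology prescribed by the Act.

```python
# Rough estimate of training compute, compared against the AI Act's
# presumption threshold of 10^25 FLOPS (Article 51(2)).
# The 6 * parameters * tokens approximation and the example figures
# below are assumptions for illustration only.

AI_ACT_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * N * D (forward and backward passes)."""
    return 6 * num_parameters * num_training_tokens


def presumed_high_impact(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOPS presumption threshold."""
    return estimate_training_flops(num_parameters, num_training_tokens) > AI_ACT_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    params, tokens = 70e9, 15e12
    flops = estimate_training_flops(params, tokens)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed high-impact under Article 51(2):", presumed_high_impact(params, tokens))
```

A back-of-the-envelope check like this can feed into the planning phase, since providers need to anticipate well before training begins whether the presumption is likely to apply.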

Should providers exceed the FLOPS threshold, they must notify the EU AI Office within two weeks. They may, however, contest classification as a GPAI model with systemic risk if they can demonstrate that, exceptionally, the model does not present such risks.

Potential Changes Driven by DeepSeek’s Emergence

The emergence of DeepSeek, which its developers claim was built at a fraction of the cost of comparable LLMs and without equivalent computing resources, has prompted discussions within the European Commission about whether the FLOPS threshold needs adjusting. Commission spokesperson Thomas Regnier emphasized that the Commission is consistently monitoring market developments to assess their impacts on the EU and its citizens.

As the Commission considers how to adapt the thresholds to reflect technological advancements, an increase in the FLOPS threshold could align with efforts to reduce regulatory burdens around AI. Conversely, lowering the threshold could acknowledge DeepSeek’s influence and its potential implications for other developers seeking to minimize compute demands and lower development costs.

Regulatory Implications for GPAI Model Providers

All GPAI model providers are subject to specific obligations, including:

  • Maintaining technical documentation of the model, encompassing its training and testing processes.
  • Providing necessary information to facilitate the integration of their systems with other AI systems.
  • Putting in place a policy to comply with EU copyright law, including respecting reservations of rights expressed by rightsholders over the use of their works in training.
  • Publishing detailed summaries about the content used for training the GPAI model.

Models classified as GPAI with systemic risk will face additional requirements, such as:

  • Conducting evaluations to identify and mitigate systemic risks.
  • Tracking and reporting serious incidents without undue delay.
  • Ensuring an adequate level of cybersecurity protection for the model and related infrastructure.

The GPAI Code of Practice

The GPAI code of practice is an essential tool designed to assist providers in complying with the GPAI models regime outlined in the AI Act. Although adherence to the code will not be mandatory, following its guidelines will help providers demonstrate compliance with the Act.

The finalized code is expected to catalyze a significant compliance exercise for GPAI model providers. As developers await clarifications regarding the classification of models with systemic risk, it is crucial for them to prepare for the implications of the forthcoming regulatory landscape.

In summary, the distinction between GPAI models and those classified as GPAI models with systemic risk carries significant implications for regulatory obligations and compliance strategies. As the EU navigates the complexities of AI regulation, the impact of emerging technologies like DeepSeek will play a crucial role in shaping the future of AI governance.
