EU AI Act Faces Challenges from DeepSeek’s Rise

EU AI Act Rules on GPAI Models Under DeepSeek Review

The mainstream emergence of the Chinese AI application DeepSeek is prompting EU policymakers to reevaluate the regulations outlined in the EU AI Act. This review could lead to significant changes, particularly concerning the threshold measures of computing power specified in the legislation, which may affect the regulation of other general-purpose AI (GPAI) models.

Understanding GPAI Models and Systemic Risk

GPAI models are versatile AI systems capable of performing a wide array of tasks and often serve as the foundation for other AI applications. Large language models (LLMs) are a prominent example of GPAI models.

Chapter V of the AI Act delineates specific rules for providers of GPAI models, with compliance expected to commence on August 2, 2025. The strictest regulations are reserved for those GPAI models classified as having systemic risk. For AI developers, determining whether a GPAI model falls into this category is crucial for understanding their obligations under the AI Act.

The concept of systemic risk is defined within the Act as a risk associated with the high-impact capabilities of general-purpose AI models, which can have significant repercussions on the EU market due to their extensive reach or the potential for negative effects on public health, safety, fundamental rights, or society at large.

Classification of GPAI Models with Systemic Risk

According to Article 51(1) of the Act, there are two criteria for classifying a GPAI model as one with systemic risk:

  • If the model possesses “high impact capabilities,” evaluated using appropriate technical tools and methodologies, including indicators and benchmarks.
  • If the European Commission determines that the model has equivalent impact or capabilities based on specific criteria, such as the number of parameters, data set quality, registered end-users, and computational resources used during training.

The Role of FLOPS in Evaluating GPAI Models

Floating-point operations (FLOPS) serve as the Act's measure of computing power. The Act defines a floating-point operation as any mathematical operation or assignment involving floating-point numbers, and this measure plays a pivotal role in determining whether a GPAI model is presumed to have high-impact capabilities. Specifically, under Article 51(2), a model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 FLOPS.

Providers of GPAI models are expected to be aware of when they exceed this threshold, as the training of general-purpose AI models necessitates substantial planning, including the allocation of compute resources.
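The planning described above can be sketched as a back-of-the-envelope calculation. The snippet below compares an estimated training budget against the Act's 10^25 FLOPS presumption threshold; the `6 × parameters × training tokens` approximation is a common community heuristic for dense transformer training compute, not something prescribed by the Act, and the model figures are purely illustrative.

```python
# Hypothetical sketch: checking an estimated training budget against the
# AI Act's 10^25 FLOPS presumption threshold (Article 51(2)).

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter
    per training token (a common heuristic, NOT from the Act)."""
    return 6 * num_parameters * num_tokens

def presumed_high_impact(training_flops: float) -> bool:
    """True if the model would be presumed to have high-impact capabilities."""
    return training_flops > THRESHOLD_FLOPS

# Illustrative example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumed high impact: {presumed_high_impact(flops)}")
```

On these illustrative numbers the estimate comes out to roughly 6.3 × 10^24 FLOPs, just under the threshold, which shows how sensitive the classification is to parameter count and training-data volume.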

Should providers exceed the FLOPS threshold, they must notify the EU AI Office within two weeks. However, they may argue against classification as a GPAI model with systemic risk if they can demonstrate that the model exceptionally does not present such risks.

Potential Changes Driven by DeepSeek’s Emergence

The emergence of DeepSeek, whose developers claim it was built at a fraction of the cost of comparable LLMs and without equivalent computing resources, has instigated discussions within the European Commission about whether the FLOPS threshold needs adjusting. Commission spokesperson Thomas Regnier emphasized that the Commission consistently monitors market developments to assess their impact on the EU and its citizens.

As the Commission considers how to adapt the thresholds to reflect technological advancements, an increase in the FLOPS threshold could align with efforts to reduce regulatory burdens around AI. Conversely, lowering the threshold could acknowledge DeepSeek’s influence and its potential implications for other developers seeking to minimize compute demands and lower development costs.

Regulatory Implications for GPAI Model Providers

All GPAI model providers are subject to specific obligations, including:

  • Maintaining up-to-date technical documentation of the model, including its training and testing processes.
  • Providing information and documentation to downstream providers who integrate the model into their own AI systems.
  • Putting in place a policy to comply with EU copyright law, including honoring rights reservations expressed by rightsholders over the use of their works in training.
  • Publishing a sufficiently detailed summary of the content used to train the model.

Models classified as GPAI with systemic risk will face additional requirements, such as:

  • Conducting evaluations to identify and mitigate systemic risks.
  • Tracking and reporting serious incidents without undue delay.
  • Ensuring an adequate level of cybersecurity protection for the model and related infrastructure.

The GPAI Code of Practice

The GPAI code of practice is an essential tool designed to assist providers in complying with the GPAI models regime outlined in the AI Act. Although adherence to the code will not be mandatory, following its guidelines will help providers demonstrate compliance with the Act.

The finalized code is expected to catalyze a significant compliance exercise for GPAI model providers. As developers await clarifications regarding the classification of models with systemic risk, it is crucial for them to prepare for the implications of the forthcoming regulatory landscape.

In summary, the distinction between GPAI models and those classified as GPAI models with systemic risk carries significant implications for regulatory obligations and compliance strategies. As the EU navigates the complexities of AI regulation, the impact of emerging technologies like DeepSeek will play a crucial role in shaping the future of AI governance.
