EU AI Act Faces Challenges from DeepSeek’s Rise

EU AI Act Rules on GPAI Models Under DeepSeek Review

The mainstream emergence of the Chinese AI application DeepSeek is prompting EU policymakers to reevaluate the regulations outlined in the EU AI Act. This review could lead to significant changes, particularly concerning the threshold measures of computing power specified in the legislation, which may affect the regulation of other general-purpose AI (GPAI) models.

Understanding GPAI Models and Systemic Risk

GPAI models are versatile AI systems capable of performing a wide array of tasks and often serve as the foundation for other AI applications. Large language models (LLMs) are a prominent example of GPAI models.

Chapter V of the AI Act delineates specific rules for providers of GPAI models, with compliance expected to commence on August 2, 2025. The strictest regulations are reserved for those GPAI models classified as having systemic risk. For AI developers, determining whether a GPAI model falls into this category is crucial for understanding their obligations under the AI Act.

The concept of systemic risk is defined within the Act as a risk associated with the high-impact capabilities of general-purpose AI models, which can have significant repercussions on the EU market due to their extensive reach or the potential for negative effects on public health, safety, fundamental rights, or society at large.

Classification of GPAI Models with Systemic Risk

According to Article 51(1) of the Act, a GPAI model is classified as one with systemic risk if either of two criteria is met:

  • The model possesses “high impact capabilities,” evaluated using appropriate technical tools and methodologies, including indicators and benchmarks.
  • The European Commission decides that the model has equivalent impact or capabilities, based on criteria such as the number of parameters, the quality of the data set, the number of registered end-users, and the computational resources used during training.

The Role of FLOPS in Evaluating GPAI Models

Floating point operations (FLOPS) serve as a critical measure of computing power. The Act defines a floating-point operation as any mathematical operation involving floating-point numbers, and the amount of computation used to train a model plays a pivotal role in determining whether a GPAI model is presumed to have high impact capabilities. Specifically, Article 51(2) provides that a model is presumed to have high impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 FLOPS.

Providers of GPAI models are expected to know when their models will exceed this threshold, as training a general-purpose AI model requires substantial advance planning, including the allocation of compute resources.
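By way of illustration, the sketch below shows how a provider might make a rough, back-of-the-envelope check of a planned training run against the 10^25 FLOPS presumption, using the widely cited ~6 × parameters × training-tokens approximation for dense transformer training compute. The heuristic, the function name, and the planning figures are illustrative assumptions, not the measurement method prescribed by the Act.

```python
def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense transformer.

    Uses the common ~6 * N * D heuristic (forward plus backward pass), where
    N is the parameter count and D the number of training tokens. This is an
    engineering approximation, not the measurement method prescribed by the Act.
    """
    return 6 * n_parameters * n_training_tokens


# Article 51(2): training compute above 10^25 FLOPS triggers the presumption
# of high impact capabilities.
AI_ACT_PRESUMPTION_THRESHOLD = 1e25

# Hypothetical planning figures, for illustration only.
params = 400e9   # 400 billion parameters
tokens = 10e12   # 10 trillion training tokens

flops = estimated_training_flops(params, tokens)
print(f"Estimated training compute: {flops:.2e} FLOPS")
if flops > AI_ACT_PRESUMPTION_THRESHOLD:
    print("Presumption of high impact capabilities applies; notification required.")
else:
    print("Below the 10^25 FLOPS presumption threshold.")
```

Under these hypothetical figures the estimate (roughly 2.4 × 10^25 FLOPS) would exceed the threshold, which is exactly the kind of planning calculation that should prompt a provider to prepare the notification described below.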

Should providers exceed the FLOPS threshold, they must notify the EU AI Office within two weeks. However, they may argue against classification as a GPAI model with systemic risk if they can demonstrate that the model exceptionally does not present such risks.

Potential Changes Driven by DeepSeek’s Emergence

The emergence of DeepSeek, which its developers claim was created at a fraction of the cost of other LLMs and without equivalent computing resources, has prompted discussions within the European Commission about whether the FLOPS threshold should be adjusted. Commission spokesperson Thomas Regnier emphasized that the Commission consistently monitors market developments to assess their impact on the EU and its citizens.

As the Commission considers how to adapt the thresholds to reflect technological advancements, an increase in the FLOPS threshold could align with efforts to reduce regulatory burdens around AI. Conversely, lowering the threshold could acknowledge DeepSeek’s influence and its potential implications for other developers seeking to minimize compute demands and lower development costs.

Regulatory Implications for GPAI Model Providers

All GPAI model providers are subject to specific obligations, including:

  • Maintaining technical documentation of the model, encompassing its training and testing processes.
  • Providing necessary information to facilitate the integration of their systems with other AI systems.
  • Putting in place a policy to comply with EU copyright law, including respecting any reservation of rights expressed by rightsholders over the use of their works in training.
  • Publishing detailed summaries about the content used for training the GPAI model.

Models classified as GPAI with systemic risk will face additional requirements, such as:

  • Conducting evaluations to identify and mitigate systemic risks.
  • Tracking and reporting serious incidents without undue delay.
  • Ensuring an adequate level of cybersecurity protection for the model and related infrastructure.

The GPAI Code of Practice

The GPAI code of practice is an essential tool designed to assist providers in complying with the GPAI models regime outlined in the AI Act. Although adherence to the code will not be mandatory, following its guidelines will help providers demonstrate compliance with the Act.

The finalized code is expected to catalyze a significant compliance exercise for GPAI model providers. As developers await clarifications regarding the classification of models with systemic risk, it is crucial for them to prepare for the implications of the forthcoming regulatory landscape.

In summary, the distinction between GPAI models and those classified as GPAI models with systemic risk carries significant implications for regulatory obligations and compliance strategies. As the EU navigates the complexities of AI regulation, the impact of emerging technologies like DeepSeek will play a crucial role in shaping the future of AI governance.
