EU Strategies for Defining AI Act Regulations on General-Purpose AI

EU Moves to Clarify AI Act Scope for General-Purpose AI

As of April 29, 2025, EU policymakers are weighing computational-resource thresholds intended to help businesses determine which regulatory requirements apply to the AI models they train or modify under the EU AI Act.

Proposed Thresholds

The thresholds, proposed by the EU AI Office in a working document, aim to clarify the scope of the rules that apply to general-purpose AI (GPAI) models. The document reflects the Office's current thinking but is neither final nor binding.

The European Commission has launched a survey to gather industry feedback, which is expected to shape new guidelines for the GPAI regime ahead of its entry into force on August 2, 2025.

Importance of Guidance

Experts such as Dr. Nils Rauer stress the need for early guidance on the regime's language and implications. With no binding court decisions yet available, such guidance will do much to shape the regulatory landscape.

Obligations for GPAI Models

Under the AI Act, providers of GPAI models will face several record-keeping and disclosure obligations. These include:

  • Documenting the model’s training and testing process
  • Sharing information to facilitate integration of AI systems
  • Drafting an EU law-compliant copyright policy
  • Publishing a detailed summary about the content used for model training

Models categorized as having systemic risk will incur additional obligations, including requirements around evaluation, testing, and risk mitigation.

Code of Practice Development

The Commission is finalizing a code of practice for GPAI, which is anticipated to be published on May 2, 2025. Adherence to this code will not be mandatory; however, signatories will benefit from increased trust among stakeholders.

Defining General-Purpose AI

GPAI is defined as an AI model that can perform a wide range of tasks and be integrated into a variety of downstream systems. Models used solely for research or development before being placed on the market fall outside this definition.

If the AI Office's current proposals are adopted, models capable of generating text or images would be presumed to be GPAI if their training compute exceeds 10²² floating-point operations (FLOP).

Modification and Compliance

Businesses that modify GPAI models would be presumed to be providers if the compute used for the modification exceeds one-third of 10²² FLOP. Such businesses would be responsible for compliance only with respect to their modifications.
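
To make the proposed presumptions concrete, the sketch below shows how a team might compare an estimated training-compute figure against the thresholds described above. The 6 × parameters × training-tokens rule of thumb for dense transformer training FLOP is an assumption used here for illustration, not a methodology endorsed by the AI Office, and the function names are hypothetical.

```python
# Hypothetical check against the AI Office's proposed FLOP thresholds.
# The 6 * N * D training-compute estimate is a common rule of thumb for
# dense transformers, not an official EU methodology.

GPAI_THRESHOLD_FLOP = 1e22            # proposed GPAI presumption threshold
MODIFIER_THRESHOLD_FLOP = 1e22 / 3    # proposed "provider via modification" threshold


def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6 * parameters * training_tokens


def classify(training_flop: float, is_modification: bool = False) -> str:
    """Map an estimated compute figure onto the proposed presumptions."""
    if is_modification:
        return ("presumed provider (modification)"
                if training_flop > MODIFIER_THRESHOLD_FLOP
                else "below modification threshold")
    return "presumed GPAI" if training_flop > GPAI_THRESHOLD_FLOP else "below GPAI threshold"


# Example: a 7B-parameter model trained on 2 trillion tokens (~8.4e22 FLOP)
flop = estimate_training_flop(7e9, 2e12)
print(f"{flop:.2e} FLOP -> {classify(flop)}")

# Example: a fine-tuning run using ~5e21 FLOP of additional compute
print(f"{5e21:.2e} FLOP -> {classify(5e21, is_modification=True)}")
```

In practice a provider would rely on logged accelerator usage rather than a back-of-the-envelope estimate, but the comparison against the proposed thresholds works the same way.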

Challenges with Current Metrics

The choice of FLOP as a metric has drawn criticism. Experts argue it may not reliably separate the models that should carry regulatory obligations under the AI Act from those that should not. The AI Office itself acknowledges that training compute is an imperfect proxy for a model's generality and capabilities.

Regulatory Exemptions

The working document also suggests potential regulatory exemptions for AI models available under a free and open-source license. However, these exemptions would not apply if providers collect personal data from users of the model.

Conclusion

The EU's moves illustrate an attempt to build a comprehensive regulatory framework that addresses the complexities of AI technologies. As the AI landscape evolves, clear definitions and compliance requirements will become increasingly crucial for businesses and regulators alike.
