EU’s AI Act: Pioneering Global Standards for Artificial Intelligence

The EU is a Trailblazer, and the AI Act Proves It

On August 2, 2025, the second stage of the EU AI Act came into force, introducing obligations for general-purpose AI models. The Act, whose first provisions took effect in February 2025, is the world’s first comprehensive legislation on artificial intelligence; it is rolling out in stages, with the final implementation phase expected by August 2, 2027.

This landmark regulation has been likened to the GDPR, which set the global benchmark for data privacy in the 2010s, in that it establishes a framework for governing a transformative technology. The urgency of such regulation has been underscored by recent tragedies, including a lawsuit against OpenAI over a teenager’s use of ChatGPT, which highlights the risks associated with unregulated AI.

Risk Management Approach

The AI Act adopts a risk management framework aimed at improving transparency and accountability for AI systems and their developers. The legislation sorts AI systems into risk levels, with the first wave of obligations targeting “unacceptable risk” systems that pose clear threats to societal safety. The latest wave, which took effect in August 2025, focuses on general-purpose AI (GPAI) models and establishes their core compliance requirements.

Compliance Requirements for GPAI Models

Since August 2, 2025, providers of GPAI models must meet transparency and copyright obligations when placing their models on the EU market. These obligations apply to any organization offering models in the EU, regardless of where it is based. GPAI models placed on the market before that date have until August 2, 2027 to comply.

For the purposes of the law, GPAI models are those trained using more than 10^23 floating-point operations (FLOP) and capable of generating language (text or audio), text-to-image, or text-to-video content. Providers must maintain technical documentation for their models, including a sufficiently detailed summary of the training data used.
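To put the 10^23 FLOP threshold in perspective, the sketch below estimates training compute with the widely cited “6ND” rule of thumb from the scaling-law literature (training FLOPs ≈ 6 × parameters × training tokens) and compares the result against the threshold. This is a minimal illustration, not a compliance determination: the model names and figures are hypothetical, and the approximation ignores architecture-specific details.

```python
# Minimal sketch: estimate training compute with the common "6ND"
# approximation (FLOPs ~ 6 * parameters * training tokens) and compare
# it against the AI Act's 10^23 FLOP threshold for GPAI models.
# Model names and figures below are hypothetical, for illustration only.

GPAI_THRESHOLD_FLOP = 1e23

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# (name, parameter count, training tokens) -- hypothetical examples
models = [
    ("small-chat-model", 3e9, 2e12),    # 3B params, 2T tokens
    ("large-chat-model", 70e9, 10e12),  # 70B params, 10T tokens
]

for name, params, tokens in models:
    flop = estimate_training_flop(params, tokens)
    status = "above" if flop > GPAI_THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.1e} FLOP ({status} the 10^23 GPAI threshold)")
```

On these hypothetical numbers, the smaller model (~3.6 × 10^22 FLOP) falls under the threshold while the larger one (~4.2 × 10^24 FLOP) exceeds it, which is why the compute criterion mainly captures frontier-scale systems.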

Systemic Risk Tier

Within the GPAI category, a specialized tier covers models deemed to pose systemic risk. These models carry additional compliance measures, including notifying the European Commission during development and ensuring the models’ safety and security. What counts as systemic risk may be redefined as the technology advances. Notably, AI used for national security and military purposes falls outside the regulation, and certain open-source models whose parameters are made publicly available are exempt from some obligations.

GPAI Code of Practice

The European Commission has released a template to help providers summarize the data used to train their models. Separately, the GPAI Code of Practice, drafted by independent experts, offers a voluntary way for AI providers to demonstrate compliance with the AI Act. Notable signatories include major tech companies such as Amazon, Google, IBM, and Microsoft, though some significant players, like Meta, have declined to sign.

Criticism and Challenges

The AI Act has faced criticism for its disproportionate impact on startups and small and medium-sized businesses (SMBs), with some experts advocating exceptions for nascent technologies that have yet to significantly affect the general public. Critics also warn that the legislation’s complexity and stringent requirements could stifle progress among European organizations developing AI. In July 2025, several tech lobbying groups urged the EU to pause the Act’s implementation, arguing that the rollout had been rushed without adequate consideration of the consequences.

Conclusion

Despite the criticism, the AI Act was developed in collaboration with numerous stakeholders across the private sector at a time when businesses are seeking regulatory clarity. It aims to standardize security practices throughout the EU and to set a global benchmark for responsible AI governance. With the Act now in force, other countries, including the US, may find it increasingly difficult to ignore calls for more responsible AI practices, marking a significant milestone in the evolution of technology regulation.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly under the EU's AI Act, which mandates that staff be AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...