EU AI Act: Key Updates and Future Implications

In a significant move underscoring the European Union’s commitment to shaping the future of artificial intelligence (AI), the European Commission has announced that it will proceed with its planned AI regulations without delay.

On July 10, 2025, the Commission published the General-Purpose AI (GPAI) Code of Practice, along with answers to frequently asked questions (FAQs) aimed at aiding compliance with the EU AI Act’s requirements.

Quick Hits

  • The European Commission has confirmed that there will be no delay in the implementation of the EU AI Act.
  • On July 10, 2025, the Commission published the GPAI Code of Practice and related FAQs, which are intended to aid compliance with the AI Act’s obligations.

A Firm Stance on AI Regulation

The Commission’s decision to forge ahead with AI regulations reflects its proactive stance on ensuring that AI technologies are developed and deployed responsibly. The regulations, which have been in the works for several years, aim to create a comprehensive framework that addresses the ethical, legal, and societal implications of AI. The decision comes despite recent calls from major technology players and European businesses to delay the AI Act by years.

Adding to the momentum, the European Commission received the final version of the GPAI Code of Practice. This landmark document is poised to set the standard for ethical AI deployment across Europe, ensuring that AI technologies are developed and used in ways that are transparent, accountable, and aligned with fundamental human rights.

The GPAI Code of Practice is a voluntary tool that organizations can use to demonstrate compliance with the EU AI Act. The Commission expects that organizations that agree to follow its guidelines will benefit from a “reduced administrative burden” and “more legal certainty” compared with those that choose alternative compliance methods.

The GPAI Code of Practice consists of three chapters: “Transparency” and “Copyright,” which apply to all providers of general-purpose AI models, and “Safety and Security,” which is relevant only to a limited number of providers of the most advanced models. The “Transparency” chapter includes a model documentation form that providers can use to document compliance with the AI Act’s transparency requirements.

Looking Ahead

As the European Commission moves forward with the implementation of the GPAI Code of Practice, the focus will be on fostering a culture of ethical AI development. This will involve ongoing dialogue with stakeholders, continuous monitoring of AI systems, and regular updates to the GPAI Code of Practice to keep pace with technological advancements.

Other key future dates to note are:

  • August 2, 2025: Starting from this date, provisions regarding notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2026: Guidelines are expected to be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus non-high-risk systems.
  • August 2, 2026: The remainder of the legislation will take effect, with the exception of a provision regarding certain types of high-risk AI systems, which will go into effect on August 2, 2027.

In the meantime, the final version of the GPAI Code of Practice marks a significant milestone in the implementation of the AI Act.
