New Code of Practice for AI Compliance Set for 2025


The European Union (EU) is set to introduce a code of practice to help companies comply with its landmark artificial intelligence (AI) regulations. However, the code may not take effect until the end of 2025.

Background and Context

On July 3, 2025, the European Commission announced that the code of practice, designed to aid compliance with the EU’s AI Act, is in the pipeline. This initiative comes amid extensive lobbying from major technology companies and various European businesses, who have expressed concerns over the stringent AI rules.

Companies such as Alphabet (Google), Meta Platforms, and several European firms, including Mistral and ASML, have been vocal about the need for a delay in the enforcement of these regulations, primarily due to the absence of a clear code of practice.

Implications of the Code of Practice

The code of practice is intended to provide legal certainty to organizations utilizing AI technologies. It aims to clarify the quality standards that businesses and their customers can expect from general-purpose AI services, thereby minimizing the risk of misleading claims by companies.

Although signing up to the code is voluntary, companies that decline will forgo the legal certainty it offers signatories. Industry advocates have raised concerns that this trade-off could leave non-signatories exposed to greater compliance risk.

Key Features and Timeline

The publication of the code for general-purpose AI (GPAI) models, which include large language models, was initially scheduled for May 2. However, the European Commission has indicated that it will present the code in the coming days and anticipates that companies will begin to sign up next month. The guidance is expected to take effect by the end of the year.

As it stands, the AI Act becomes legally binding on August 2, 2025, but enforcement against new models will not begin until a year later. Existing AI models have a two-year grace period, until August 2, 2027, to comply with the new regulations.

Challenges and Criticism

Despite calls for a delay, the European Commission has reaffirmed its commitment to the objectives of the AI Act, which include establishing harmonized, risk-based rules throughout the EU and ensuring the safety of AI systems placed on the market.

Critics, such as the campaign group Corporate Europe Observatory, have condemned the influence of major tech firms on the regulatory process. They argue that the industry’s lobbying efforts aim to weaken essential protections against biased and unfair AI practices.

Conclusion

The impending code of practice represents a significant step in the EU’s efforts to regulate AI technology effectively. By establishing clear guidelines and fostering compliance, the EU aims to create a safer and more reliable AI landscape for businesses and consumers alike. As the deadline approaches, the tech industry watches closely, with the outcomes of these regulations set to influence the future of AI deployment across the continent.
