New Code of Practice for AI Compliance Set for 2025

The European Union (EU) is preparing a code of practice to help companies comply with its landmark artificial intelligence (AI) regulations, though the code may not take effect until the end of 2025.

Background and Context

On July 3, 2025, the European Commission confirmed that the code of practice, designed to aid compliance with the EU’s AI Act, is still in preparation. The announcement follows extensive lobbying from major technology companies and European businesses, which have raised concerns over the stringency of the AI rules.

Companies such as Alphabet (Google) and Meta Platforms, along with European firms including Mistral and ASML, have called for a delay in enforcement, arguing chiefly that a clear code of practice is still missing.

Implications of the Code of Practice

The code of practice is intended to provide legal certainty to organizations utilizing AI technologies. It aims to clarify the quality standards that businesses and their customers can expect from general-purpose AI services, thereby minimizing the risk of misleading claims by companies.

Although signing up to the code is voluntary, companies that decline will forgo the legal certainty it offers signatories, a trade-off that has drawn concern from industry advocates.

Key Features and Timeline

The code covering general-purpose AI (GPAI) models, which include large language models, was originally scheduled for publication on May 2. The European Commission now says it will present the code in the coming days, expects companies to begin signing up the following month, and anticipates the guidance taking effect by the end of the year.

As it stands, the AI Act becomes legally binding on August 2, 2025, but obligations for new models will not be enforced until a year later, on August 2, 2026. Existing AI models have a two-year grace period, giving them until August 2, 2027, to comply with the new rules.

Challenges and Criticism

Despite the calls for delay, the European Commission has reaffirmed its commitment to the objectives of the AI Act: establishing harmonized, risk-based rules across the EU and ensuring the safety of AI systems placed on the market.

Critics, such as the campaign group Corporate Europe Observatory, have condemned the influence of major tech firms on the regulatory process. They argue that the industry’s lobbying efforts aim to weaken essential protections against biased and unfair AI practices.

Conclusion

The forthcoming code of practice marks a significant step in the EU’s effort to regulate AI effectively. By setting clear guidelines and fostering compliance, the EU aims to create a safer and more reliable AI landscape for businesses and consumers alike. As the August 2025 deadline approaches, the tech industry is watching closely; the outcome will shape how AI is deployed across the continent.
