Microsoft Embraces EU AI Code While Meta Withdraws

Microsoft to Sign EU AI Code of Practice as Meta Rejects It

On July 21, 2025, it was reported that Microsoft is likely to sign the European Union's code of practice aimed at ensuring compliance with the bloc's artificial intelligence regulations. Microsoft President Brad Smith made the announcement in an interview with Reuters.

In contrast, Meta Platforms has publicly rejected the code, citing legal uncertainties and concerns that it may impede the development of artificial intelligence technologies within Europe.

Overview of the Code of Practice

The voluntary code of practice, which requires signatories to disclose information about their training data and to adhere to EU copyright law, accompanies the broader Artificial Intelligence Act, which entered into force in August 2024. The legislation applies to major tech companies including Alphabet, Meta, and OpenAI, subjecting them to stringent compliance obligations.

Smith expressed optimism about Microsoft’s involvement, stating, “I think it’s likely we will sign. We need to read the documents.” He highlighted the necessity of finding a balance between supporting the code and ensuring that the interests of the tech industry are represented.

Meta’s Position

Meta's chief global affairs officer, Joel Kaplan, articulated the company's stance against signing the code, arguing that it introduces numerous legal uncertainties for model developers. Kaplan described the measures outlined in the code as excessive and as going beyond the scope of the AI Act itself.

Implications for AI Development

The code of practice was developed by a panel of 13 independent experts and aims to provide legal certainty to signatories. Companies that sign will be required to publish summaries of the content used to train their general-purpose AI models and implement policies to comply with EU copyright law. This regulatory framework is designed to foster transparency and accountability in the AI sector.

As the AI landscape continues to evolve, the decisions made by these tech giants will likely have significant implications for the development and deployment of AI technologies in Europe.

Conclusion

The contrasting positions of Microsoft and Meta highlight the complexities tech companies face in navigating regulatory frameworks. As the EU's AI regulations take shape, the outcomes of these discussions will be pivotal in shaping the future of AI development in Europe.
