Microsoft Embraces EU AI Code While Meta Withdraws

Microsoft to Sign EU AI Code of Practice as Meta Rejects It

On July 21, 2025, Reuters reported that Microsoft is likely to sign the European Union’s code of practice, which is intended to help companies comply with the bloc’s artificial intelligence regulations. Microsoft President Brad Smith indicated the company’s intent in an interview with Reuters journalists.

In contrast, Meta Platforms has publicly rejected the code, citing legal uncertainties and concerns that it may impede the development of artificial intelligence technologies within Europe.

Overview of the Code of Practice

The voluntary code of practice, which calls on companies to disclose their training data and adhere to EU copyright law, was drawn up to support compliance with the broader Artificial Intelligence Act, which entered into force in August 2024. The legislation affects major technology companies, including Alphabet, Meta, and OpenAI, requiring them to meet stringent transparency and compliance obligations.

Smith expressed optimism about Microsoft’s involvement, stating, “I think it’s likely we will sign. We need to read the documents.” He highlighted the necessity of finding a balance between supporting the code and ensuring that the interests of the tech industry are represented.

Meta’s Position

Meta’s chief global affairs officer, Joel Kaplan, explained the company’s decision not to sign, arguing that the code introduces considerable legal uncertainty for model developers and that its measures go well beyond the scope of the AI Act.

Implications for AI Development

The code of practice was developed by a panel of 13 independent experts and aims to provide legal certainty to signatories. Companies that sign will be required to publish summaries of the content used to train their general-purpose AI models and implement policies to comply with EU copyright law. This regulatory framework is designed to foster transparency and accountability in the AI sector.

As the AI landscape continues to evolve, the decisions made by these tech giants will likely have significant implications for the development and deployment of AI technologies in Europe.

Conclusion

In summary, the contrasting positions of Microsoft and Meta highlight the complexities and challenges tech companies face in navigating regulatory frameworks. As the EU’s AI rules move into application, the choices these companies make will be pivotal in shaping the future of AI development in Europe.

More Insights

Responsible AI Strategies for Enterprise Success

In this post, Joseph Jude discusses the complexities of implementing Responsible AI in enterprise applications, emphasizing the conflict between ideal principles and real-world business pressures. He...

EU Guidelines on AI Models: Preparing for Systemic Risk Compliance

The European Commission has issued guidelines to assist AI models identified as having systemic risks in complying with the EU's artificial intelligence regulation, known as the AI Act. Companies face...

Governance in the Age of AI: Balancing Opportunity and Risk

Artificial intelligence (AI) is rapidly transforming business operations and decision-making processes in the Philippines, with the domestic AI market projected to reach nearly $950 million by 2025...

Colorado’s Groundbreaking AI Law Sets New Compliance Standards

Analysts note that Colorado's upcoming AI law, which takes effect on February 1, 2026, is notable for its comprehensive requirements, mandating businesses to adopt risk management programs for...

Strengthening Ethical AI: Malaysia’s Action Plan for 2026-2030

Malaysia's upcoming AI Technology Action Plan 2026–2030 aims to enhance ethical safeguards and governance frameworks for artificial intelligence, as announced by Digital Minister Gobind Singh Deo. The...

Simultaneous Strategies for AI Governance

The development of responsible Artificial Intelligence (AI) policies and overall AI strategies must occur simultaneously to ensure alignment with intended purposes and core values. Bhutan's unique...

Guidelines for AI Models with Systemic Risks Under EU Regulations

The European Commission has issued guidelines to assist AI models deemed to have systemic risks in complying with the EU's AI Act, which will take effect on August 2. These guidelines aim to clarify...