The Limited Global Impact of the AI Act

Burying the Brussels Effect? AI Act Inspires Few Copycats

The AI Act, a landmark piece of European legislation, was heralded as an exemplar of the Brussels Effect, the phenomenon whereby strict regulatory frameworks from the European Union (EU) shape policy worldwide. The effect was most visible with the GDPR, which inspired numerous countries to adopt similar privacy regulations. Despite the EU's ambitions for the AI Act, however, it appears to be inspiring few imitators abroad.

While the EU's AI legislation aims to enforce algorithmic transparency, risk-based scrutiny, and stringent compliance requirements, its global impact has so far been minimal. Only Canada and Brazil are drafting frameworks similar to the AI Act, and both bills are stalled in their legislative processes. In contrast, countries such as the UK, Australia, New Zealand, Norway, Switzerland, Singapore, and Japan are pursuing lighter-touch paths to AI regulation, prioritizing innovation over compliance.

The Stakes of European Regulation

Europe faces a critical question: will the AI Act retain its relevance and influence if other major powers do not adopt comparable regulations? Scholars point to several reasons for the Act's limited global uptake. The complexity of the legislation, described by experts as a “patchwork effect,” has made its provisions difficult for regulators outside Europe to adopt. Even EU lawyers find the rules hard to navigate, which further weakens the Act's potential to serve as a model for other jurisdictions.

Moreover, the AI Act's compliance regime burdens start-ups with hefty costs for the audits and technical documentation that must be produced before any AI system can be launched. This stands in stark contrast to the regulatory approaches of the UK and Japan, which allow AI systems to be released iteratively and monitored once in use.

Global Perspectives on AI Regulation

Although the AI Act was formally adopted in mid-2024, its detailed implementing rules are still under development. Incidents involving AI technologies, such as autonomous vehicle accidents, could yet push governments to reconsider the European model. A significant failure might prompt politicians to reach for a ready-made template, transforming the AI Act from an outlier into a leading standard.

Currently, Canada and Brazil are the countries moving most visibly towards the EU's risk-based model. Canada has been working on an Artificial Intelligence and Data Act that closely mirrors the EU's approach, but ongoing parliamentary debates have left the bill's future uncertain. Brazil, similarly, is considering a legislative proposal that categorizes AI systems by risk level, though industry resistance has already diluted some of its provisions.

South Korea's AI Basic Act, passed in December 2024, also reflects a cautious approach: it borrows terminology from the EU but avoids stringent pre-launch inspection requirements, relying instead on post-market oversight that intervenes only once a system is operational. This contrasts sharply with the EU's rules, which include outright bans on certain AI techniques and significant penalties for non-compliance.

The Global Landscape of AI Legislation

Other countries, such as the UK, Australia, and New Zealand, have opted for sector-specific guidance rather than a comprehensive legislative overhaul. Japan is focusing on voluntary frameworks, while Singapore supports flexible governance models. Even European neighbors like Norway and Switzerland have avoided the AI Act's heavy-handed approach, with Switzerland planning targeted updates to existing laws rather than wholesale replication.

The lack of widespread acceptance of the AI Act raises significant concerns. Should Europe continue to impose strict regulations while other regions adopt more lenient approaches, investment may flee Europe for more permissive markets. Consequently, the AI Act risks becoming a distinctly European measure—ambitious in theory but limited in practical influence.

More Insights

New Code of Practice for AI Compliance Set for 2025

The European Commission has announced that a code of practice to help companies comply with the EU's artificial intelligence rules may not be in place until the end of 2025. This delay follows calls from...

AI Governance: The Key to Successful Enterprise Implementation

Artificial intelligence is at a critical juncture, with many enterprise AI initiatives failing to reach production and exposing organizations to significant risks. Effective AI governance is essential...

AI Code Compliance: Companies May Get a Grace Period

The commission is considering providing a grace period for companies that agree to comply with the new AI Code. This initiative aims to facilitate a smoother transition for businesses adapting to the...

Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas enacted the Responsible Artificial Intelligence Governance Act, making it the second state to implement comprehensive AI legislation. The act establishes a framework for the...

Laws in Europe Combatting Deepfakes

Denmark has introduced a law that grants individuals copyright over their likenesses to combat deepfakes, making it illegal to share such content without consent. Other European countries are also...

A Strategic Approach to Ethical AI Implementation

The federal government aims to enhance productivity by implementing artificial intelligence (AI) across various sectors, but emphasizes the importance of thoughtful deployment to avoid wasting public...

Navigating AI Regulation: A New Era for Insurance Compliance

On July 1, 2025, the U.S. Senate voted to reject a proposed ten-year moratorium on state-level AI regulation, allowing individual states to legislate independently. This decision creates a fragmented...
