The Limited Global Impact of the AI Act

Burying the Brussels Effect? AI Act Inspires Few Copycats

The AI Act, a landmark piece of European legislation, was heralded as a prime candidate for the Brussels Effect, the phenomenon in which strict European Union (EU) regulatory frameworks end up shaping policy worldwide. The effect was most visible with the GDPR, which inspired numerous countries to adopt similar privacy regulations. Despite the EU’s ambitions for the AI Act, however, it appears to be inspiring few global imitators.

While the EU’s AI legislation mandates algorithmic transparency, risk-based scrutiny, and stringent compliance requirements, its global impact has so far been minimal. Only Canada and Brazil are drafting frameworks similar to the AI Act, and both bills are stalled in their legislative processes. In contrast, countries like the UK, Australia, New Zealand, Norway, Switzerland, Singapore, and Japan are pursuing less restrictive paths to AI regulation, prioritizing innovation over compliance.

The Stakes of European Regulation

Europe faces a critical question: will the AI Act retain its relevance and influence if no other major power adopts comparable rules? Scholars point to several reasons for the Act’s limited global uptake. The legislation’s complexity, described by experts as a “patchwork effect,” has made its provisions difficult for regulators outside Europe to adopt. Even EU lawyers struggle to navigate the rules, which further weakens the Act’s potential to serve as a model for other jurisdictions.

Moreover, the AI Act imposes heavy compliance costs on start-ups, which must fund audits and produce technical documentation before an AI system can be launched. This stands in stark contrast to the regulatory approaches of the UK and Japan, which allow AI systems to be released iteratively and monitored once in use.

Global Perspectives on AI Regulation

Although the AI Act was formally adopted in June 2024, its detailed implementing rules are still under development. Incidents involving AI technologies, such as autonomous-vehicle accidents, could yet push governments to reconsider the European model: a significant failure might prompt politicians to reach for a ready-made template, transforming the AI Act from an outlier into a leading standard.

Canada and Brazil are the countries moving most decisively toward the EU’s risk-based model. Canada has been working on an Artificial Intelligence and Data Act that closely mirrors the EU’s efforts, but ongoing parliamentary debates have left the bill’s future uncertain. Brazil, similarly, is considering a legislative proposal that categorizes AI systems by risk level, though industry resistance has already diluted some of its provisions.

South Korea’s AI Basic Act, passed in December 2024, also reflects a cautious approach, borrowing terminology from the EU but avoiding the stringent pre-launch inspection requirements. Instead, it emphasizes post-market oversight, allowing for adjustments only after a system is operational. This approach contrasts sharply with the EU’s strict rules, which include outright bans on certain AI techniques and significant penalties for non-compliance.

The Global Landscape of AI Legislation

Other countries, such as the UK, Australia, and New Zealand, have opted for sector-specific guidance rather than a comprehensive legislative overhaul. Japan is focusing on voluntary frameworks, while Singapore supports flexible governance models. Even European neighbors like Norway and Switzerland have steered clear of the AI Act’s heavy-handed approach, with Switzerland planning targeted updates to existing laws rather than wholesale replication.

The AI Act’s lack of widespread uptake raises a significant concern: if Europe continues to impose strict regulations while other regions adopt more lenient approaches, investment may flee Europe for more permissive markets. The AI Act would then risk becoming a distinctly European measure, ambitious in theory but limited in practical influence.
