The Limited Global Impact of the AI Act

Burying the Brussels Effect? AI Act Inspires Few Copycats

The AI Act, the European Union's landmark AI legislation, was heralded as an exemplar of the Brussels Effect, the phenomenon whereby strict EU regulatory frameworks shape policy around the world. The effect was most visible with the GDPR, which inspired numerous countries to adopt similar privacy regulations. Despite the EU's ambitions for the AI Act, however, it appears to be inspiring few global imitators.

While the EU's AI legislation aims to enforce algorithmic transparency, risk-based scrutiny, and stringent compliance requirements, its global impact has so far been minimal. Only Canada and Brazil are drafting frameworks similar to the AI Act, and both proposals have stalled in their legislative processes. In contrast, countries like the UK, Australia, New Zealand, Norway, Switzerland, Singapore, and Japan are pursuing less restrictive paths for AI regulation, prioritizing innovation over compliance.

The Stakes of European Regulation

Europe faces a critical question: will the AI Act retain its relevance and influence in the absence of similar regulations from other major powers? Scholars point to several reasons for the lack of global uptake. The legislation's complexity, which experts describe as producing a “patchwork effect,” has made its provisions difficult for regulators outside Europe to adopt. Even EU lawyers struggle to navigate the rules, which further undermines the AI Act's potential to serve as a model for other jurisdictions.

Moreover, the AI Act imposes heavy compliance costs: start-ups must pay for audits and produce technical documentation before launching any AI system. This stands in stark contrast to the regulatory approaches of the UK and Japan, which allow AI systems to be released iteratively and monitored once in use.

Global Perspectives on AI Regulation

Although the AI Act was formally adopted in June 2024, its detailed implementing rules are still being developed. Incidents involving AI technologies, such as autonomous vehicle accidents, could yet push governments to reconsider the European model. A significant failure might prompt politicians to reach for a ready-made template, transforming the AI Act from an outlier into a leading standard.

Currently, Canada and Brazil are the countries moving most decisively towards the EU's risk-based model. Canada has been working on an Artificial Intelligence and Data Act that closely mirrors the EU's efforts, but ongoing parliamentary debates have left the bill's future uncertain. Similarly, Brazil is considering a legislative proposal that categorizes AI systems by risk level, though industry resistance has already diluted some of its provisions.

South Korea’s AI Basic Act, passed in December 2024, also reflects a cautious approach, borrowing terminology from the EU but avoiding the stringent pre-launch inspection requirements. Instead, it emphasizes post-market oversight, allowing for adjustments only after a system is operational. This approach contrasts sharply with the EU’s strict rules, which include outright bans on certain AI techniques and significant penalties for non-compliance.

The Global Landscape of AI Legislation

Other countries, such as the UK, Australia, and New Zealand, have opted for sector-specific guidance rather than a comprehensive legislative overhaul. Japan is focusing on voluntary frameworks, while Singapore supports flexible governance models. Even European neighbors like Norway and Switzerland have avoided the AI Act's heavy-handed approach, with Switzerland planning targeted updates to existing laws rather than a wholesale replication of the European framework.

The AI Act's lack of widespread uptake raises significant concerns. If Europe continues to impose strict regulations while other regions take more lenient approaches, investment may flee to more permissive markets. The AI Act then risks becoming a distinctly European measure: ambitious in theory but limited in practical influence.
