The EU’s AI Act: Paving the Way for Global Digital Governance

Doubting a “Brussels Effect 2.0”: Can the European Union’s AI Act Foster Legitimacy?

The European Union (EU) is setting the stage for a new era in artificial intelligence (AI) regulation with its AI Act. This legislation has sparked discussion about its potential to create a “Brussels Effect 2.0”, in which EU standards shape international norms. However, significant challenges remain in establishing a robust framework that can effectively govern generative AI (GenAI) and its implications for international law.

Understanding the AI Act

The EU’s AI Act, proposed in 2021 and in force since August 2024, aims to regulate AI systems across sectors, including GenAI. It sorts AI systems into four risk tiers: unacceptable risk (prohibited outright), high risk, limited risk, and minimal risk. Generative and other general-purpose AI models carry dedicated transparency obligations, with stricter requirements for models deemed to pose systemic risk, and GenAI deployed in high-risk applications falls under the Act’s most stringent oversight.
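The tiered structure described above can be sketched as a simple lookup. This is a purely illustrative sketch: the tier names follow the Act, but the example practices and one-line obligation summaries are simplified assumptions for exposition, not legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, simplified for illustration."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. AI used in hiring or credit decisions
    LIMITED = "limited"            # e.g. chatbots, AI-generated media
    MINIMAL = "minimal"            # e.g. spam filters


# Illustrative one-line summaries of the obligations each tier triggers.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "conformity assessment, documentation, human oversight",
    RiskTier.LIMITED: "transparency duties (disclose AI interaction or AI-generated content)",
    RiskTier.MINIMAL: "no additional requirements",
}


def obligation_for(tier: RiskTier) -> str:
    """Return the (simplified) obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is that the Act attaches obligations to the tier a system is assigned to, which is why the classification step itself (discussed below under implementation gaps) carries so much weight.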

The Need for Standardization

Despite the introduction of the AI Act, there are still no universal norms specifically governing GenAI, which complicates its legal status. The absence of a cohesive framework raises questions about intellectual property rights, as ownership of AI-generated works remains contentious. For instance, the 2018 Christie’s sale of the algorithm-generated painting Portrait of Edmond de Belamy highlighted the difficulty of attributing copyright to non-human creators.

Challenges Faced by the AI Act

Three primary challenges hinder the effectiveness of the AI Act in fostering legitimate digital governance:

  • Obscurity of Intellectual Property Laws: The lack of clear guidelines on intellectual property rights for AI-generated content undermines the Act’s legitimacy. Questions about who owns the rights to AI-generated works—the algorithm’s developer, the user, or the commercializer—remain unresolved.
  • Ethical Concerns: The use of GenAI raises ethical issues, particularly in cases of misinformation and data manipulation. The potential for AI to produce misleading content, as seen during the Russo-Ukrainian War, underscores the ethical implications of unregulated AI use.
  • Implementation Gaps: The AI Act’s risk-based categorization requires subjective judgment about which tier a given system falls into. That subjectivity can produce inconsistent classifications across jurisdictions and regulators, complicating compliance and enforcement.

The Brussels Effect: A Double-Edged Sword

The Brussels Effect refers to the phenomenon where EU regulations influence global standards, often seen as a form of regulatory export. While the AI Act aims to establish a framework that could set international norms, its effectiveness is contingent upon widespread acceptance and implementation by other countries.

Future Directions

To enhance the legitimacy of the AI Act and foster a global standard for AI governance, several steps should be considered:

  • Clarification of Intellectual Property Rights: Developing clear and enforceable guidelines on the ownership of AI-generated content is crucial for fostering compliance and protecting creators’ rights.
  • Strengthening Ethical Standards: Establishing ethical guidelines that address the potential misuse of GenAI can help mitigate risks associated with misinformation and content manipulation.
  • Encouraging International Collaboration: Collaborating with international organizations and stakeholders can facilitate the sharing of best practices, enhancing the legitimacy and acceptance of the AI Act globally.

Conclusion

The EU’s AI Act represents a significant advance in regulating AI technologies. However, for it to truly produce a Brussels Effect 2.0, it must address the challenges above and work towards a cohesive, widely accepted framework. By doing so, the EU can position itself as a leader in fostering legitimate and effective governance of artificial intelligence.
