Doubting a “Brussels Effect 2.0”: Can the European Union’s AI Act Foster Legitimacy?
The European Union (EU) is setting the stage for a new era of artificial intelligence (AI) regulation with its AI Act. The legislation has sparked debate over its potential to create a “Brussels Effect 2.0”, in which EU standards shape international norms. Significant challenges remain, however, in establishing a framework robust enough to govern generative AI (GenAI) and its implications for international law.
Understanding the AI Act
The EU’s AI Act, proposed in 2021 and entered into force in 2024, aims to regulate various forms of AI, including GenAI. It categorizes AI systems by risk level: unacceptable risk, high risk, limited risk, and minimal risk. Many GenAI applications fall under the high-risk tier or under the Act’s separate obligations for general-purpose AI models, necessitating stringent oversight and transparency in their deployment.
The Need for Standardization
Despite the introduction of the AI Act, there are still no universal norms specifically governing GenAI, which leaves its legal status unsettled. The absence of a cohesive framework raises questions about intellectual property rights, as ownership of AI-generated works remains contentious. For instance, a 2018 case involving paintings generated by AI algorithms highlighted the difficulty of attributing copyright to non-human creators.
Challenges Faced by the AI Act
Three primary challenges hinder the effectiveness of the AI Act in fostering legitimate digital governance:
- Ambiguity of Intellectual Property Law: The lack of clear guidelines on intellectual property rights for AI-generated content undermines the Act’s legitimacy. Questions about who owns the rights to AI-generated works—the algorithm’s creator, the user, or the commercializer—remain unresolved.
- Ethical Concerns: The use of GenAI raises ethical issues, particularly in cases of misinformation and data manipulation. The potential for AI to produce misleading content, as seen during the Russo-Ukrainian War, underscores the ethical implications of unregulated AI use.
- Implementation Gaps: The AI Act’s risk-based categorization leaves considerable room for subjective interpretation. This subjectivity can lead to inconsistencies in how AI is regulated across jurisdictions, complicating compliance and enforcement.
The Brussels Effect: A Double-Edged Sword
The Brussels Effect refers to the phenomenon whereby EU regulations shape global standards, often described as a form of regulatory export. While the AI Act aims to establish a framework that could set international norms, its effectiveness depends on widespread acceptance and implementation by other countries.
Future Directions
To enhance the legitimacy of the AI Act and foster a global standard for AI governance, several steps should be considered:
- Clarification of Intellectual Property Rights: Developing clear and enforceable guidelines on the ownership of AI-generated content is crucial for fostering compliance and protecting creators’ rights.
- Strengthening Ethical Standards: Establishing ethical guidelines that address the potential misuse of GenAI can help mitigate risks associated with misinformation and content manipulation.
- Encouraging International Collaboration: Collaborating with international organizations and stakeholders can facilitate the sharing of best practices, enhancing the legitimacy and acceptance of the AI Act globally.
Conclusion
The EU’s AI Act represents a significant advance in regulating AI technologies. For it to truly produce a Brussels Effect 2.0, however, it must address these challenges and work towards a cohesive, universally accepted framework. By doing so, the EU can position itself as a leader in fostering legitimate and effective governance of artificial intelligence.