Europe Is Looking To Water Down AI Protections. It Should Reinforce Them.
The EU is reevaluating the AI Act amid sustained warnings from civil society. The European Commission’s AI Act Omnibus proposal suggests changes that could significantly weaken existing safeguards against AI systems that pose risks to health, safety, and fundamental rights. The move has drawn widespread criticism and has opened the door to even more sweeping changes proposed by various groups within the European Parliament.
Current State of the AI Act
Even as discussions progress and evidence of AI-related harms continues to mount, the proposals on the table do not strengthen pathways to redress, which remain a core weakness of the AI Act. Aside from a right to obtain an explanation and a complaints mechanism, the Act offers individuals few tools to assert their rights when those rights are infringed or when they are harmed.
While these mechanisms are important first steps toward access to information and accountability, they are not sufficient. Scholars argue that the right to an explanation may be interpreted narrowly, limiting its effectiveness in practice.
Complaints Mechanism and Its Shortcomings
The complaints mechanism within the AI Act is open to any individual or entity, whether or not they have been affected by the system in question. However, it lacks essential procedural safeguards: it guarantees neither a response nor an investigation, and it provides no judicial oversight. These omissions are deliberate, as the Act assumes that EU law already provides sufficient remedies for individuals adversely affected by AI systems. The Center for Democracy & Technology Europe (CDT Europe) has argued that this assumption is misguided.
The Role of GDPR
The General Data Protection Regulation (GDPR), long the principal legal instrument for ensuring transparency and accountability in automated processing of personal data, including by AI systems, is also facing potential deregulation. As a technology-neutral, rights-based law, the GDPR has established vital guardrails and actionable mechanisms for individuals whose personal data is processed by AI systems.
A significant contribution of the GDPR to the redress landscape is that it grants individuals enforceable rights against entities that process their data, as well as remedies against regulators who fail to enforce the law. However, the complexity of AI development chains often obscures who is responsible for what, complicating enforcement.
Challenges in Redress Mechanisms
Other legal frameworks, such as equality and non-discrimination law, address power imbalances between parties but have their own limitations. They focus primarily on individual redress without tackling underlying structural inequalities. Moreover, because individuals must typically establish prima facie facts before the burden of proof shifts to the other party, the opacity of algorithms makes it difficult to trigger that reversal, leaving people ill-equipped to contest algorithmic decisions.
Potential Improvements to the AI Act
The AI Act’s requirements for documentation and registration could enhance transparency and accountability if implemented robustly. However, current proposals risk simplifying compliance for companies to the point of omitting critical information about high-risk systems.
Collective Redress Frameworks
The Representative Actions Directive in EU consumer protection law, whose scope the AI Act extends to cover itself, has the potential to improve collective redress. Yet the high costs of bringing such actions burden representative entities, and the effectiveness of these actions in AI Act cases remains untested.
Conclusion: The Need for Stronger Protections
In light of these challenges, it is crucial to fill the gaps left by the AI Act and to adapt existing legislation to AI-specific problems. That means strengthening collective action mechanisms, adopting procedural safeguards such as a reversal of the burden of proof, and ensuring robust enforcement of compliance.
As the omnibus proposal moves towards inter-institutional negotiations, it is vital for decision-makers to focus on reinforcing the AI Act’s protections rather than weakening them. Safeguards for high-risk AI systems must be prioritized, and loopholes that allow dangerous AI systems to evade scrutiny should be closed. The time for action is now; we should not wait for a scandal to arise to strengthen individual protections.