EU Lacks Guidance on AI Prohibitions as Rules Start to Apply
The AI Act is set to bring significant changes to the regulation of artificial intelligence systems in the European Union, with rules on banned AI systems, including facial recognition, coming into effect on February 2. However, concerns are mounting regarding the lack of guidance from the European Commission on these prohibitions as the implementation date approaches.
Concerns from Civil Society
As the date for the first provisions of the AI Act to apply nears, civil society groups are expressing their worries over the absence of clear guidelines on prohibited AI technologies. While companies have until mid-next year to align their policies with most of the AI Act’s provisions, the bans on systems such as social scoring and certain forms of profiling apply from February 2. This raises questions about whether organizations are prepared to comply with the new regulations.
Guidelines Development
The AI Office, a unit within the European Commission responsible for overseeing compliance, has announced plans to develop guidelines to assist providers. These documents were expected to be published by early 2025, following a consultation on prohibited practices held last November. A Commission spokesperson underlined the urgency of the matter, saying the goal is to have the guidelines ready “in time for the entry into application of these provisions on February 2.”
However, the lack of published documents as the deadline approaches has raised alarms. Ella Jakubowska, head of policy at advocacy group EDRi, has voiced her concerns, stating that the absence of interpretive guidelines could foreshadow future enforcement issues regarding the AI Act.
Loopholes in the AI Act
The AI Act includes prohibitions on systems deemed to pose risks due to their potential negative societal impacts. Nevertheless, it also allows for exceptions where the public interest may outweigh potential risks, particularly in contexts such as law enforcement. This raises significant ethical questions about how such exceptions will be applied.
According to Caterina Rodelli, an EU policy analyst at global human rights organization Access Now, “If a prohibition contains exceptions, it is not a prohibition anymore.” She points out that these exceptions predominantly benefit law enforcement and migration authorities, potentially allowing the use of unreliable and dangerous systems like lie-detectors and predictive policing technologies.
Jakubowska from EDRi echoes these concerns, fearing that companies and governments might exploit loopholes to continue the development and deployment of harmful AI systems. This issue was a major point of contention during the negotiations of the AI Act, where lawmakers called for strict bans on facial recognition technologies.
Enforcement and National Oversight
The AI Act is designed with an extra-territorial scope, meaning that companies outside of the EU can still be subject to its provisions. Non-compliance can lead to fines of up to 7% of a company’s global annual turnover, emphasizing the Act’s reach and implications for international businesses.
Most provisions of the AI Act will come into effect next year, prompting the need for standards and guidance to ensure compliance. Meanwhile, member states are tasked with establishing their national regulatory bodies by August of this year to oversee the implementation of the Act. Some countries have already begun preparations, assigning oversight responsibilities to data protection or telecom authorities.
Jakubowska notes, “This seems to be a bit of a patchwork, with little to nothing known in several countries about either the market surveillance authorities or the notified bodies that will oversee the rules nationally.” This highlights the ongoing challenges and uncertainties surrounding the enforcement of the AI Act across the EU.