EU’s AI Act Faces Guidance Gaps as Prohibitions Loom

The AI Act is set to significantly change how artificial intelligence systems are regulated in the European Union, with its bans on certain AI systems, including some uses of facial recognition, taking effect on February 2. As that date approaches, however, concerns are mounting over the lack of guidance from the European Commission on how the prohibitions should be interpreted.

Concerns from Civil Society

As the first provisions of the AI Act start to apply, civil society groups are voicing concern over the absence of clear guidelines on prohibited AI technologies. While companies have until the middle of next year to align with most of the AI Act's provisions, the bans on systems such as social scoring and profiling take effect much sooner, raising questions about how prepared organizations are to comply with the new rules.

Guidelines Development

The AI Office, the unit within the European Commission responsible for overseeing compliance with the Act, has announced plans to develop guidelines to assist providers. The documents are expected in early 2025, following a consultation on prohibited practices held last November. A Commission spokesperson has underlined the urgency, saying the goal is to have the guidelines ready “in time for the entry into application of these provisions on February 2.”

However, with the deadline approaching and no guidelines yet published, alarm is growing. Ella Jakubowska, head of policy at the advocacy group EDRi, has warned that the absence of interpretive guidelines could foreshadow broader enforcement problems with the AI Act.

Loopholes in the AI Act

The AI Act prohibits systems deemed to pose unacceptable risks because of their potential negative impact on society. It nonetheless allows exceptions where the public interest is judged to outweigh those risks, particularly in contexts such as law enforcement, which raises significant ethical questions about how the exceptions will be applied.

According to Caterina Rodelli, an EU policy analyst at global human rights organization Access Now, “If a prohibition contains exceptions, it is not a prohibition anymore.” She points out that these exceptions predominantly benefit law enforcement and migration authorities, potentially allowing the use of unreliable and dangerous systems like lie-detectors and predictive policing technologies.

Jakubowska from EDRi echoes these concerns, fearing that companies and governments will exploit the loopholes to keep developing and deploying harmful AI systems. The exceptions were a major point of contention during negotiations over the AI Act, in which lawmakers had pushed for strict bans on facial recognition technologies.

National Regulatory Framework

The AI Act is designed with an extra-territorial scope, meaning that companies outside of the EU can still be subject to its provisions. Non-compliance can lead to fines of up to 7% of a company’s global annual turnover, emphasizing the Act’s reach and implications for international businesses.

Most provisions of the AI Act will come into effect next year, which is driving the development of standards and guidance to support compliance. Meanwhile, member states must establish their national regulatory bodies by August of this year to oversee implementation of the Act. Some countries have already begun preparations, assigning oversight responsibilities to data protection or telecom authorities.

Jakubowska notes, “This seems to be a bit of a patchwork, with little to nothing known in several countries about either the market surveillance authorities or the notified bodies that will oversee the rules nationally.” This highlights the ongoing challenges and uncertainties surrounding the enforcement of the AI Act across the EU.
