EU’s AI Act Faces Guidance Gaps as Prohibitions Loom

The AI Act is set to significantly change how artificial intelligence systems are regulated in the European Union, with prohibitions on certain AI systems, including some uses of facial recognition, taking effect on February 2. However, concerns are mounting over the European Commission’s lack of guidance on these prohibitions as the implementation date approaches.

Concerns from Civil Society

As the start date for these provisions nears, civil society groups are voicing concern over the absence of clear guidelines on prohibited AI practices. While companies have until mid-next year to align with most of the AI Act’s provisions, the bans on practices such as social scoring and certain forms of profiling apply much sooner, raising questions about whether organizations are prepared to comply with the new rules.

Guidelines Development

The AI Office, the unit within the European Commission responsible for overseeing compliance, has announced plans to develop guidelines to assist providers. The documents are expected in early 2025, following a consultation on prohibited practices held last November. A spokesperson underlined the urgency, indicating that the goal is to have the guidelines ready “in time for the entry into application of these provisions on February 2.”

With the deadline approaching and no documents yet published, however, alarm is growing. Ella Jakubowska, head of policy at the advocacy group EDRi, has warned that the absence of interpretive guidelines could foreshadow future enforcement problems with the AI Act.

Loopholes in the AI Act

The AI Act prohibits systems deemed to pose unacceptable risks because of their potential negative impact on society. It nevertheless allows exceptions where the public interest is judged to outweigh those risks, particularly in contexts such as law enforcement, which raises significant ethical questions about how such exceptions will be applied.

According to Caterina Rodelli, an EU policy analyst at global human rights organization Access Now, “If a prohibition contains exceptions, it is not a prohibition anymore.” She points out that these exceptions predominantly benefit law enforcement and migration authorities, potentially allowing the use of unreliable and dangerous systems like lie-detectors and predictive policing technologies.

Jakubowska from EDRi echoes these concerns, fearing that companies and governments might exploit these loopholes to continue developing and deploying harmful AI systems. The issue was a major point of contention during negotiations over the AI Act, when lawmakers called for strict bans on facial recognition technologies.

National Regulatory Framework

The AI Act is designed with an extra-territorial scope, meaning that companies outside of the EU can still be subject to its provisions. Non-compliance can lead to fines of up to 7% of a company’s global annual turnover, emphasizing the Act’s reach and implications for international businesses.

Most provisions of the AI Act will come into effect next year, prompting the need for standards and guidance to support compliance. Meanwhile, member states must establish their national regulatory bodies by August of this year to oversee the Act’s implementation. Some countries have already begun preparations, assigning oversight responsibilities to their data protection or telecom authorities.

Jakubowska notes, “This seems to be a bit of a patchwork, with little to nothing known in several countries about either the market surveillance authorities or the notified bodies that will oversee the rules nationally.” This highlights the ongoing challenges and uncertainties surrounding the enforcement of the AI Act across the EU.
