Decoding the EU’s AI Act: Key Guidelines and Prohibitions

The EU’s AI Act, published in June 2024, is a significant step toward regulating artificial intelligence within the European Union. It will, however, take years before interpretive case law from the Court of Justice of the European Union (CJEU) becomes available. In the interim, the European Commission is tasked with publishing guidelines to assist developers and deployers of AI technologies. Although these guidelines are not legally binding, they matter: the national authorities in EU countries will lean on them as they enforce the law and impose penalties on AI developers and deployers.

Current Status of the AI Act

As of February 2, 2025, the first five articles of the AI Act, which include its prohibitions, are in effect. The newly published guidelines should be read against this timeline of application: they detail the definition of an AI system and the practices prohibited under the Act, with the aim of improving legal clarity.
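
For orientation, the sketch below collects the Act’s main application milestones as a simple lookup table. Only the February 2, 2025 milestone is discussed in the guidelines covered here; the later dates come from the Act’s published application schedule and are included purely for context.

    from datetime import date

    # Application milestones of the EU AI Act (Regulation (EU) 2024/1689).
    # Only the February 2, 2025 milestone is discussed in this article; the
    # other dates come from the Act's published application schedule.
    AI_ACT_MILESTONES = {
        date(2024, 8, 1): "Entry into force",
        date(2025, 2, 2): "Definitions, AI literacy, and prohibitions apply",
        date(2025, 8, 2): "General-purpose AI model obligations and governance rules apply",
        date(2026, 8, 2): "Most remaining provisions, including Annex III high-risk rules, apply",
        date(2027, 8, 2): "Rules for high-risk AI embedded in regulated products apply",
    }

    def milestones_in_effect(today: date) -> list[str]:
        """Return the milestone descriptions already applicable on a given date."""
        return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= today]

    # Example: on February 2, 2025 the prohibitions are already applicable.
    print(milestones_in_effect(date(2025, 2, 2)))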

Definition of an AI System

The guidelines clarify that spreadsheets are not considered AI systems. The definition comprises seven key elements:

  1. A machine-based system
  2. Designed to operate with varying levels of autonomy
  3. May exhibit adaptiveness after deployment
  4. For explicit or implicit objectives
  5. Infers from the input received how to generate outputs
  6. Outputs may include predictions, content, recommendations, or decisions
  7. Can influence physical or virtual environments

Among these, the capability to infer is highlighted as a critical characteristic of AI systems. The guidelines, however, create confusion by excluding certain methods like linear or logistic regression from the definition of AI systems, despite their capacity to infer.
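
To see why that exclusion can be confusing, consider the minimal sketch below (illustrative only, with invented data): a logistic regression model fitted with scikit-learn plainly infers outputs for inputs it has never seen, which is the very capability the guidelines single out as characteristic of AI systems.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy, invented data: two numeric features and a binary label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    # Fit a plain logistic regression -- a method the guidelines place
    # outside the AI system definition.
    model = LogisticRegression().fit(X, y)

    # The fitted model still "infers from the input received how to
    # generate outputs": it produces predictions and probabilities for
    # inputs it has never seen before.
    new_inputs = np.array([[0.8, -0.2], [-1.1, 0.4]])
    print(model.predict(new_inputs))        # predicted classes
    print(model.predict_proba(new_inputs))  # class probabilities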

Prohibitions Under the AI Act

The AI Act outlines eight prohibitions that address significant ethical concerns:

  1. Manipulation and deception
  2. Exploitation of vulnerabilities
  3. Social scoring
  4. Individual criminal offense risk assessment and prediction
  5. Untargeted scraping to develop facial recognition databases
  6. Emotion recognition in workplaces and educational institutions
  7. Biometric categorization
  8. Real-time remote biometric identification (RBI)

These prohibitions attach to the use of AI systems, covering both intended use and foreseeable misuse. Their scope also varies: the prohibition on real-time RBI specifically targets deployers, while the other prohibitions reach providers and operators as well. The guidelines also illustrate how broadly the prohibitions can apply: the ban on untargeted scraping does not require that facial recognition be the database’s sole purpose, only that the database can be used for that purpose.
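
For teams screening systems against these rules, the prohibitions can be kept as a small structured record. The sketch below is an illustrative simplification, not the Act’s wording: it lists the eight practices and encodes only the scoping point made above, namely that the real-time RBI prohibition is aimed at deployers while the others also reach providers and operators.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProhibitedPractice:
        name: str
        applies_to: tuple[str, ...]  # simplified role labels, not the Act's wording

    # Illustrative summary of the eight prohibited practices discussed above.
    PROHIBITED_PRACTICES = [
        ProhibitedPractice("Manipulation and deception",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Exploitation of vulnerabilities",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Social scoring",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Individual criminal offense risk assessment and prediction",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Untargeted scraping to develop facial recognition databases",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Emotion recognition in workplaces and educational institutions",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Biometric categorization",
                           ("providers", "operators", "deployers")),
        ProhibitedPractice("Real-time remote biometric identification (RBI)",
                           ("deployers",)),
    ]

    def prohibitions_for(role: str) -> list[str]:
        """List the prohibited practices relevant to a given role label."""
        return [p.name for p in PROHIBITED_PRACTICES if role in p.applies_to]

    # Example: everything on the list is relevant to a deployer.
    print(prohibitions_for("deployers"))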

Addressing Manipulation and Deception

The AI Act prohibits AI systems that are manipulative or deceptive, even if such effects are not intended by the developers. For instance, an AI system that tailors persuasive messages based on user data could fall under this prohibition if it leads to significant harm.

Biometric Regulations

The prohibition on emotion recognition is explicitly limited to workplaces and educational contexts. Accordingly, the guidelines indicate that using monitoring systems to assess customer emotions in call centers or supermarkets is not prohibited, despite potential ethical concerns regarding privacy and consent.

Conclusion

The EU’s AI Act represents a formidable effort to regulate AI technologies, addressing critical issues related to ethics and human rights. As the guidelines evolve, stakeholders in the AI community must remain vigilant and adaptable, ensuring compliance while fostering innovation. The prohibitions set forth by the Act underscore the importance of maintaining ethical standards in AI development and deployment.
