EU AI Act: Ban on Prohibited AI Practices and AI Literacy Rules Now in Force

The EU AI Act has officially come into effect, marking a significant milestone in the regulation of artificial intelligence within the European Union. With the first major compliance deadline passing on February 2, 2025, the Act’s ban on AI practices deemed to pose an unacceptable risk is now applicable. Companies developing or deploying AI systems must also ensure that their staff possess a sufficient level of AI literacy.

Prohibited AI Practices

According to Article 5 of the EU AI Act, specific AI applications that pose unacceptable risks to fundamental rights and core Union values are banned. The law prohibits the placing on the market, putting into service, or use of any AI system that:

  • Uses subliminal, manipulative, or deceptive techniques that distort a person’s behavior or impair their decision-making;
  • Exploits the vulnerabilities of individuals based on age, disability, or socio-economic status;
  • Evaluates individuals based on their social behavior or personality traits, leading to unfair treatment;
  • Predicts the risk of criminal offenses based solely on profiling;
  • Creates facial recognition databases through untargeted scraping of facial images;
  • Infers emotions in individuals at work or in educational settings;
  • Uses biometric categorization to infer sensitive characteristics such as race, political opinions, religious beliefs, or sexual orientation;
  • Performs “real-time” remote biometric identification in publicly accessible spaces for law enforcement purposes, subject only to narrow exceptions.

An exemption allows AI systems that infer emotions in the workplace to be used for medical or safety purposes, a carve-out that reflects the Act’s attempt to balance AI innovation against the protection of individuals’ rights.

Guidance and Clarifications

The European Commission has released Guidelines that clarify the scope of the ban and provide practical examples to aid understanding of the Act’s applicability. Notably, ‘placing on the market’ covers the first making available of an AI system on the Union market, regardless of the means of supply, including through APIs, cloud services, or direct downloads.

‘Putting into service’ covers the supply of an AI system for first use, whether directly to a deployer or for the provider’s own use, while ‘use’ covers any deployment of the system at any point after it has been placed on the market or put into service. Importantly, misuse of an AI system can also fall within the prohibited practices.

For instance, a manipulative practice is prohibited only if several cumulative conditions are met: the AI system must be placed on the market, put into service, or used; it must deploy subliminal, manipulative, or deceptive techniques; those techniques must materially distort a person’s behavior; and that distortion must cause, or be reasonably likely to cause, significant harm.

AI Literacy Requirements

The AI literacy provisions under Article 4 also took effect on February 2, 2025, requiring all providers and deployers of AI systems to ensure that their staff, and other persons operating or using AI systems on their behalf, possess a sufficient level of AI literacy. This encompasses the skills, knowledge, and understanding needed to make informed decisions about the deployment of AI and to recognize its opportunities and risks.

The European Artificial Intelligence Office has established a Living Repository of AI Literacy Practices to promote awareness and education regarding AI literacy.

Next Steps for Organizations

Organizations must first assess whether they are using any AI applications prohibited under Article 5. If so, they should engage stakeholders to phase out these AI systems and discontinue their use. Establishing procedures for identifying future AI initiatives that may intersect with the ban is advisable, alongside implementing employee training to ensure compliance across teams.
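As a purely illustrative sketch, one way to operationalize such a screening procedure is to keep an inventory of AI use cases and check each against the Article 5 categories listed above. Everything below is hypothetical: the category labels, the UseCase structure, and the flag_for_review helper are assumptions made for illustration, not terms drawn from the Act or the Commission’s Guidelines.

    # Hypothetical Article 5 screening sketch (Python). The categories are
    # shorthand labels for the prohibited practices listed earlier, not
    # official terms from the Act.
    from dataclasses import dataclass, field

    ARTICLE_5_CATEGORIES = [
        "subliminal or manipulative techniques",
        "exploitation of vulnerabilities",
        "social scoring",
        "crime prediction based solely on profiling",
        "untargeted facial image scraping",
        "emotion inference at work or in education",
        "biometric categorization of sensitive traits",
        "real-time remote biometric identification for law enforcement",
    ]

    @dataclass
    class UseCase:
        """One entry in an internal AI inventory (hypothetical structure)."""
        name: str
        description: str
        flagged_categories: list[str] = field(default_factory=list)

    def flag_for_review(use_case: UseCase, category: str) -> None:
        """Record a potential Article 5 concern for legal/compliance review."""
        if category not in ARTICLE_5_CATEGORIES:
            raise ValueError(f"Unknown screening category: {category}")
        use_case.flagged_categories.append(category)

    # Example: a tool that infers employee emotions gets flagged for review.
    hr_tool = UseCase("call-sentiment", "Infers employee emotions from support calls")
    flag_for_review(hr_tool, "emotion inference at work or in education")
    print(hr_tool.flagged_categories)

Flagged entries would then go to legal review and, where the ban applies, into the phase-out process described above.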

Even if specific prohibitions do not apply, the AI literacy requirements span a broader range of AI activities, offering an opportunity to develop comprehensive AI governance, training programs, and safeguards against potential misuse.

This regulatory framework aims to ensure that while AI technology advances, it does so in a way that prioritizes ethical considerations and consumer protection.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...