EU AI Act: Key Changes and Compliance Steps

On February 2, 2025, the first chapters of the European Union’s AI Act officially came into force. This legislation introduces significant provisions on prohibited AI practices and AI literacy, affecting both providers (developers) and deployers (users acting outside a purely personal, non-professional capacity) of AI systems. This article outlines the essential implications of the Act and provides practical steps for compliance.

Prohibited AI Practices Under the EU AI Act

Article 5 of the EU AI Act enumerates certain AI practices that are deemed unacceptable due to their potential risks to EU values and fundamental rights. These prohibitions are critical for understanding the boundaries of AI deployment within the EU.

The legislation specifically prohibits:

  • Harmful manipulation or deception: AI systems are banned from using subliminal or manipulative techniques that distort human behavior, impairing informed decision-making. For instance, using AI-powered rapid image flashes to influence purchasing decisions is considered a banned practice.
  • Exploitation of individuals: Targeting vulnerable individuals based on age, disability, or socioeconomic status in ways that could harm them is prohibited. An example includes using AI to target older individuals with unnecessary medical treatments.
  • Social scoring: Utilizing AI to classify individuals based on their social behavior, resulting in unjustifiable detrimental treatment, is banned. A relevant example is a social welfare agency using AI to estimate the likelihood of benefit fraud in a way that unfairly impacts individuals.
  • Predictive policing: The Act prohibits assessing or predicting the risk of a person committing a criminal offence based solely on profiling or personality assessment, unless the AI is used to support a human assessment already grounded in objective, verifiable facts.
  • Facial image scraping: AI systems cannot create or expand facial recognition databases through untargeted scraping of images from the internet.
  • Emotion recognition: Inferring emotions in workplace or educational settings is prohibited unless for specific medical or safety purposes.
  • Biometric categorization: Using biometric data to infer sensitive information, such as race or political beliefs, is not permitted.
  • Real-time biometric identification for law enforcement: Such use is only permitted in strictly necessary circumstances, such as searching for missing persons or preventing imminent threats.

These prohibitions apply regardless of whether the harmful effect was intended. Stakeholders must be aware of the potential consequences of AI systems, considering both intended and unintended outcomes.

Understanding AI Literacy Requirements

Article 4 of the Act mandates that providers and deployers take measures to ensure that their staff possess a sufficient level of AI literacy. This requirement applies to all AI systems, not just those categorized as ‘high risk.’

The concept of AI literacy remains somewhat ambiguous. Recital 20 emphasizes the need for stakeholders to make informed decisions regarding the development and operational use of AI systems. However, the absence of a clear methodology for determining what constitutes sufficient AI literacy complicates compliance.

Businesses are encouraged to consult resources and practices that promote AI literacy while awaiting further guidance from the European Artificial Intelligence Board and EU Member States.

Enforcement and Practical Steps for Compliance

The EU AI Act outlines significant penalties for non-compliance, particularly concerning prohibited AI practices, which can result in fines of up to €35 million or 7% of annual global turnover, whichever is higher. Enforcement mechanisms will come into effect on August 2, 2025, giving organizations time to prepare.

To ensure compliance with Articles 4 (AI Literacy) and 5 (Prohibited AI Practices), organizations should consider the following steps:

  • Inventory of AI Systems: Businesses should assess all AI systems’ risks and benefits, ensuring their usage does not violate prohibited practices.
  • AI Literacy Resources: Develop training programs and policies focused on responsible AI usage to bolster compliance efforts.
  • Tailored Training Programs: Create base-level education for all staff, with specialized training for heavy users of AI systems.
  • AI Governance Policies: Implement governance frameworks to regulate AI development and deployment.
  • Contractual Requirements: Ensure that vendors warrant compliance with relevant legal standards concerning AI systems.
  • Transparency and Accountability: Document the purpose, data sources, and decision-making processes associated with AI systems in use.
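
The inventory and documentation steps above could be supported by a simple internal register. The following is a minimal sketch in Python, assuming a hypothetical in-house record format; the class name, fields, and category labels are illustrative paraphrases of the Article 5 practices, not an official taxonomy.

```python
from dataclasses import dataclass, field

# Illustrative screening categories paraphrasing the Article 5 prohibitions.
PROHIBITED_PRACTICES = [
    "harmful_manipulation",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "predictive_policing",
    "facial_image_scraping",
    "emotion_recognition_workplace_education",
    "biometric_categorisation",
    "realtime_biometric_id_law_enforcement",
]

@dataclass
class AISystemRecord:
    """One inventory entry: purpose, data sources, and screening flags."""
    name: str
    purpose: str
    data_sources: list
    flagged_practices: list = field(default_factory=list)

    def flag(self, practice: str) -> None:
        """Flag the system against one of the screening categories."""
        if practice not in PROHIBITED_PRACTICES:
            raise ValueError(f"Unknown practice category: {practice}")
        self.flagged_practices.append(practice)

    @property
    def needs_legal_review(self) -> bool:
        # Any flagged category should trigger review before deployment.
        return bool(self.flagged_practices)

# Example: a video-interview tool that infers candidate emotions
system = AISystemRecord(
    name="video-interview-analyser",
    purpose="Rank candidates from recorded interviews",
    data_sources=["interview recordings"],
)
system.flag("emotion_recognition_workplace_education")
print(system.needs_legal_review)  # True
```

A register like this is no substitute for legal analysis, but it makes the transparency step concrete: each system's purpose and data sources are documented in one place, and any flagged category surfaces before deployment.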

Organizations that proactively engage in compliance efforts now will likely be better positioned to navigate the evolving landscape of international AI regulation.
