EU AI Act: Key Changes and Compliance Steps

EU AI Act Begins to Take Effect: Key Insights and Preparations

On February 2, 2025, the first chapters of the European Union’s AI Act officially came into force. This legislation introduces significant provisions on prohibited AI practices and AI literacy, affecting both providers (developers) and deployers (users, other than individuals using AI in a personal, non-professional capacity) of AI systems. This article outlines the essential implications of the Act and sets out practical steps for compliance.

Prohibited AI Practices Under the EU AI Act

Article 5 of the EU AI Act enumerates certain AI practices that are deemed unacceptable due to their potential risks to EU values and fundamental rights. These prohibitions are critical for understanding the boundaries of AI deployment within the EU.

The legislation specifically prohibits:

  • Harmful manipulation or deception: AI systems are banned from using subliminal or manipulative techniques that distort human behavior, impairing informed decision-making. For instance, using AI-powered rapid image flashes to influence purchasing decisions is considered a banned practice.
  • Exploitation of individuals: Targeting vulnerable individuals based on age, disability, or socioeconomic status in ways that could harm them is prohibited. An example includes using AI to target older individuals with unnecessary medical treatments.
  • Social scoring: Utilizing AI to classify individuals based on their social behavior, resulting in unjustifiable detrimental treatment, is banned. A relevant example is a social welfare agency using AI to estimate the likelihood of benefit fraud in a way that unfairly impacts individuals.
  • Predictive policing: The Act prohibits assessing or predicting the risk that an individual will commit a criminal offence based solely on profiling or personality traits, unless the AI system supports a human assessment already grounded in objective, verifiable facts.
  • Facial image scraping: AI systems cannot create or expand facial recognition databases through untargeted scraping of images from the internet.
  • Emotion recognition: Inferring emotions in the workplace or in educational institutions is prohibited, except for specific medical or safety purposes.
  • Biometric categorization: Using biometric data to infer sensitive information, such as race or political beliefs, is not permitted.
  • Real-time biometric identification for law enforcement: Such use is only permitted in strictly necessary circumstances, such as searching for missing persons or preventing imminent threats.

These prohibitions apply regardless of whether the harmful effect was intended, so stakeholders must consider both the intended and the unintended consequences of the AI systems they build or use.
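
To make these categories easier to act on, some organizations translate them into an internal screening checklist that every AI system must pass before deployment. The sketch below is one minimal way to do that in Python; the ProhibitedPractice categories are paraphrases of Article 5 rather than the Act’s legal wording, and the screen_system helper and its field names are illustrative assumptions, not part of any official guidance.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ProhibitedPractice(Enum):
    """Article 5 prohibition categories, paraphrased for internal screening."""
    HARMFUL_MANIPULATION = auto()           # subliminal or manipulative techniques
    EXPLOITATION_OF_VULNERABILITY = auto()  # targeting by age, disability, socioeconomic status
    SOCIAL_SCORING = auto()
    PREDICTIVE_POLICING = auto()            # profiling-only crime prediction
    FACIAL_IMAGE_SCRAPING = auto()
    EMOTION_RECOGNITION = auto()            # workplace or education, outside medical/safety use
    BIOMETRIC_CATEGORISATION = auto()       # inferring sensitive attributes
    REALTIME_BIOMETRIC_ID = auto()          # law-enforcement use outside narrow exceptions


@dataclass
class ScreeningResult:
    system_name: str
    flagged: list[ProhibitedPractice] = field(default_factory=list)

    @property
    def needs_legal_review(self) -> bool:
        return bool(self.flagged)


def screen_system(system_name: str,
                  answers: dict[ProhibitedPractice, bool]) -> ScreeningResult:
    """Collect yes/no answers from system owners and flag any category answered 'yes'."""
    flagged = [practice for practice, applies in answers.items() if applies]
    return ScreeningResult(system_name=system_name, flagged=flagged)


if __name__ == "__main__":
    # Hypothetical example: an engagement tool that infers customer emotions.
    result = screen_system(
        "customer-engagement-recommender",
        {practice: False for practice in ProhibitedPractice}
        | {ProhibitedPractice.EMOTION_RECOGNITION: True},
    )
    print(result.needs_legal_review, [p.name for p in result.flagged])
```

A checklist like this does not replace legal analysis; it simply ensures that any system touching one of the prohibited categories is escalated for review before it goes live.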

Understanding AI Literacy Requirements

Article 4 of the Act mandates that providers and deployers take measures to ensure that their staff possess a sufficient level of AI literacy. This requirement applies to all AI systems, not just those categorized as ‘high risk.’

The concept of AI literacy remains somewhat ambiguous. Recital 20 emphasizes the need for stakeholders to make informed decisions regarding the development and operational use of AI systems. However, the lack of a clear methodology for determining what constitutes sufficient AI literacy complicates compliance.

Businesses are encouraged to consult resources and practices that promote AI literacy while awaiting further guidance from the European Artificial Intelligence Board and EU Member States.

Enforcement and Practical Steps for Compliance

The EU AI Act outlines significant penalties for non-compliance, particularly concerning prohibited AI practices, which can result in fines of up to €35 million or 7% of annual global turnover, whichever is higher. Enforcement mechanisms will come into effect on August 2, 2025, giving organizations time to prepare.

To ensure compliance with Articles 4 (AI Literacy) and 5 (Prohibited AI Practices), organizations should consider the following steps:

  • Inventory of AI Systems: Businesses should catalogue every AI system in use, assess its risks and benefits, and confirm that its usage does not fall within the prohibited practices (a minimal inventory sketch follows this list).
  • AI Literacy Resources: Develop training programs and policies focused on responsible AI usage to bolster compliance efforts.
  • Tailored Training Programs: Create base-level education for all staff, with specialized training for heavy users of AI systems.
  • AI Governance Policies: Implement governance frameworks to regulate AI development and deployment.
  • Contractual Requirements: Ensure that vendors warrant compliance with relevant legal standards concerning AI systems.
  • Transparency and Accountability: Document the purpose, data sources, and decision-making processes associated with AI systems in use.
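
As a rough illustration of how several of these steps (inventory, vendor warranties, literacy training, and documentation) might be tracked in one place, the sketch below defines a minimal inventory record in Python. The AISystemRecord fields, the RiskLevel tiers, and the overdue_reviews helper are illustrative assumptions, not terminology or requirements taken from the Act.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    """Illustrative risk tiers; the Act itself defines the applicable categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory."""
    name: str
    provider: str                    # vendor or internal team
    purpose: str                     # documented intended use
    data_sources: list[str]          # inputs the system relies on
    risk_level: RiskLevel
    article5_screened: bool          # prohibited-practice check completed
    vendor_warranty_in_place: bool   # contractual compliance warranty obtained
    staff_trained: bool              # relevant AI literacy training delivered
    last_reviewed: date
    notes: str = ""


def overdue_reviews(inventory: list[AISystemRecord],
                    today: date,
                    max_age_days: int = 365) -> list[AISystemRecord]:
    """Return inventory entries whose last review exceeds the chosen review cycle."""
    return [r for r in inventory if (today - r.last_reviewed).days > max_age_days]
```

In practice such a register would typically live in a governance tool or spreadsheet; the point is simply that each system’s purpose, data sources, screening status, and training status are recorded and reviewed on a regular cycle.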

Organizations that proactively engage in compliance efforts now will likely be better positioned to navigate the evolving landscape of international AI regulation.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...