Essential Steps for Compliance with the EU AI Act

First Steps to Compliance: Meeting Early Obligations Under the EU AI Act

The EU Artificial Intelligence (AI) Act, which entered into force on 1 August 2024, is the world’s first comprehensive regulatory framework for AI. While most provisions apply from 2 August 2026, some key requirements, including AI literacy obligations, the definition of AI systems, and bans on prohibited practices, took effect on 2 February 2025. These early milestones mark the beginning of a new regulatory era for AI across Europe.

To help businesses navigate these early compliance obligations, the European Commission released two sets of Guidelines in February 2025, covering the definition of AI systems and prohibited AI practices. While these Guidelines are not binding, they help businesses assess how the rules affect their AI operations and prepare for compliance.

AI Literacy: A Foundational Compliance Requirement

The EU AI Act mandates AI literacy as a fundamental compliance requirement. Organisations deploying AI must ensure that their employees, contractors, and relevant third parties have the necessary skills and knowledge to deploy AI responsibly and manage associated risks.

What AI Literacy Means in Practice

AI literacy is not simply about delivering training programmes. Under the EU AI Act, it is a demonstrable compliance requirement: organisations must ensure that all personnel involved in the deployment and oversight of AI systems understand both the technology and its risks. This places the onus on businesses to deliver meaningful education programmes that produce demonstrable comprehension and application, rather than one-off training sessions.

One of the biggest challenges businesses face is the fast-evolving nature of AI technology. AI literacy programmes should be tailored to sector-specific risks and updated regularly to keep pace with rapid technological change. Smaller organisations may also struggle to allocate the resources that comprehensive AI training requires.

Governance Integration and Regulator Expectations

Rather than treating AI literacy as a standalone obligation, businesses should integrate AI literacy into their existing governance and risk management frameworks. This approach helps organisations build a culture of responsible AI use, enhances AI oversight, improves decision-making, and strengthens stakeholder trust. While failure to implement AI literacy does not attract direct penalties, regulators may consider it when determining fines for broader breaches of the AI Act.

Scope and Prohibited AI Practices: Understanding the Boundaries

The AI system definition is a key pillar of the AI Act, determining which technologies fall under its scope.

Defining an AI System Under the AI Act: What Businesses Need to Know

The EU AI Act provides a lifecycle-based definition of AI, encompassing both the development (building) phase and the deployment (use) phase. The Guidelines on AI Systems confirm that, given the wide variety of AI applications, it is not possible to provide a definitive list of AI systems. Instead, an AI system is defined through seven key elements:

  1. A machine-based system
  2. Designed to operate with varying levels of autonomy
  3. Exhibiting adaptiveness after deployment
  4. Operating for explicit or implicit objectives
  5. Inferring from inputs to generate outputs
  6. Producing predictions, content, recommendations, or decisions
  7. Influencing physical or virtual environments

However, not all seven elements need to be present for a system to qualify as AI under the Act. The definition reflects the complexity and diversity of AI systems while ensuring alignment with the AI Act’s objectives.

Organisations should note that this definition should not be applied mechanically. Each system must be assessed individually based on its specific characteristics. While many AI systems will meet the definition set out in the AI Act, not all will be subject to regulation. Ultimately, the Court of Justice of the European Union will be responsible for authoritative interpretations of AI system classification.
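The seven definitional elements above can be treated as a first-pass screening checklist. The sketch below is purely illustrative: the class, field names, and the example system are assumptions for this article, not anything prescribed by the Act or the Guidelines, and such a checklist is no substitute for a case-by-case legal assessment, not least because not every element needs to be present.

```python
from dataclasses import dataclass, fields

# Illustrative only: the seven elements from the Commission's Guidelines,
# modelled as a simple screening checklist. This is not a legal test;
# each system must still be assessed individually.
@dataclass
class AISystemScreen:
    machine_based: bool                    # 1. a machine-based system
    varying_autonomy: bool                 # 2. varying levels of autonomy
    adaptive_after_deployment: bool        # 3. adaptiveness after deployment
    explicit_or_implicit_objectives: bool  # 4. explicit or implicit objectives
    infers_from_inputs: bool               # 5. infers from inputs to generate outputs
    produces_outputs: bool                 # 6. predictions, content, recommendations, decisions
    influences_environment: bool           # 7. influences physical or virtual environments

def elements_present(screen: AISystemScreen) -> list[str]:
    """Return the names of the definitional elements a system exhibits."""
    return [f.name for f in fields(screen) if getattr(screen, f.name)]

# Hypothetical example: a static recommender service ticks six of seven boxes
# (no post-deployment learning), yet may still qualify as an AI system.
recommender = AISystemScreen(
    machine_based=True,
    varying_autonomy=True,
    adaptive_after_deployment=False,  # static model, no learning after release
    explicit_or_implicit_objectives=True,
    infers_from_inputs=True,
    produces_outputs=True,
    influences_environment=True,
)
present = elements_present(recommender)
print(f"{len(present)} of 7 elements present; individual legal assessment still required")
```

A screen like this is useful for triaging an AI inventory, but the outcome of any individual screen only tells you where to look harder, not whether the Act applies.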

Prohibited AI Practices: What’s Off-Limits?

Article 5 of the AI Act outlines AI practices that pose unacceptable risks to fundamental rights, public safety, and democratic values. These prohibitions will be reviewed annually by the European Commission, allowing the list to evolve alongside technological developments.

While some prohibitions mainly target governments and law enforcement, others have direct implications for businesses. Two of the most significant restrictions affecting commercial AI applications are the exploitation of vulnerabilities and social scoring.

Exploiting Vulnerabilities (Article 5(1)(b))

AI systems that exploit individuals’ vulnerabilities attributable to age, disability, or socioeconomic situation, particularly those of children or other at-risk individuals, with the objective or effect of materially distorting their behaviour in a way that causes or is reasonably likely to cause significant harm, are strictly prohibited. The Guidelines on Prohibited AI define vulnerabilities broadly, covering cognitive, emotional, and physical susceptibilities.

A key example is AI-powered toys designed to manipulate children into engaging in risky behaviour, such as spending excessive time online or making unsafe decisions. Another example includes addictive AI-driven mechanisms, such as reinforcement schedules that exploit dopamine loops to increase user engagement.

Social Scoring (Article 5(1)(c))

Social scoring refers to AI systems that evaluate or classify individuals based on their social behaviour, personal characteristics, or inferred traits, leading to detrimental or disproportionate treatment.

This prohibition applies in two cases: (1) when social scoring results in negative consequences in an unrelated context, such as using an individual’s financial spending habits to determine their employability; and (2) when the consequences of social scoring are disproportionate to the behaviour being assessed.

The Guidelines on Prohibited AI and recent case law illustrate how these prohibitions will be applied in practice. Profiling individuals using AI-driven evaluation systems may amount to prohibited social scoring if all of the necessary conditions are met. However, although most commercial AI-powered scoring and evaluation models will fall outside the prohibition’s scope, organisations should still scrutinise their data use and scoring criteria to avoid unintentional breaches.

For example, a company specialising in business-related data analytics, credit scoring, and risk assessment is unlikely to be engaging in prohibited social scoring under the AI Act: its models assess the financial health of companies and individuals for legitimate commercial purposes, rather than evaluating people on the basis of social behaviour or personality traits. By contrast, an insurance company that collects banking transaction data unrelated to life insurance eligibility and uses it to adjust premium pricing could be engaging in prohibited social scoring. The Guidelines recognise that many AI-enabled evaluation models do not meet the cumulative criteria for social scoring, meaning most business-oriented scoring practices will fall outside this prohibition.

Conclusion: Preparing for Compliance

Businesses should assess whether their AI systems fall within the scope of the AI Act, evaluate their AI literacy programmes, and review their AI-powered tools for potential risks related to exploitation of vulnerabilities or social scoring. Given the annual updates to the list of prohibited practices, businesses will also need to monitor regulatory developments closely to stay compliant.

While the AI Act presents new regulatory challenges, it also offers a framework for responsible AI governance. Businesses that take a proactive approach to compliance—by integrating AI literacy into governance frameworks, evaluating AI risk, and ensuring responsible deployment—will not only mitigate legal exposure but also achieve AI governance maturity, strengthen consumer trust, and enhance their competitive positioning in an AI-driven economy.
