Understanding the Impact of the EU AI Act on Your Business

When Does the EU AI Act Come Into Force, and What Does This Mean for Your Business?

The EU’s AI Act became official EU law on August 1st, 2024, but the first measures around prohibited uses of AI and AI literacy programs for in-scope businesses come into effect on February 2nd, 2025.

In this article, we’ll help you figure out if and how your business is in scope for the AI Act. By the end, you’ll understand how the AI Act applies to your business, and what you need to do, and when, to be compliant. The penalties for non-compliance can be significant, with maximum fines of up to 7% of a business’s global annual revenue or 35 million EUR, whichever is larger.

What is AI?

Artificial intelligence isn’t one thing; it’s many things. The AI Act takes the same view, referring to AI as a “family of technologies.”

An AI system is defined as (emphasis added):

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The autonomy characteristic by itself does not distinguish AI from other software. That computers perform tasks without human intervention is precisely why businesses have invested in software and hardware for decades. Nevertheless, AI systems are capable of higher degrees of autonomy than regular software, because they “infer” their outputs.

That AI systems “infer” outputs from inputs is the key characteristic separating AI from normal software. Normal software can also be extremely complicated, but unlike AI, each of its operations can be broken into explicit steps of input and output, with explicit rules for creating the output from the input.
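To make the distinction concrete, here is a minimal illustrative sketch (our own example, not taken from the AI Act): a rule-based function whose every step was written down by a human, next to a tiny statistical model that infers its input-to-output mapping from example data. The function names and numbers are invented for illustration.

```python
def shipping_cost(weight_kg: float) -> float:
    """Rule-based software: every step is an explicit, human-written rule."""
    if weight_kg <= 1.0:
        return 5.0
    return 5.0 + (weight_kg - 1.0) * 2.0


def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """AI-style inference: learn slope and intercept from example data
    (ordinary least squares with a single feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    return slope, mean_y - slope * mean_x


# The rule-based function behaves exactly as its author wrote it:
print(shipping_cost(3.0))  # 9.0

# The fitted model's behavior comes from the data it saw, not from
# explicit rules a programmer spelled out:
slope, intercept = fit_line([1, 2, 3, 4], [5, 7, 9, 11])
print(round(slope, 2), round(intercept, 2))  # 2.0 3.0
```

In the first case an auditor can point to the exact rule behind any output; in the second, the mapping was inferred from data, which is the behavior the AI Act’s definition singles out.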

What Businesses are In Scope for the AI Act?

If your business is neither providing nor using AI in any sense, then you can safely ignore the AI Act. Natural persons (as opposed to legal persons such as corporations) who use AI in a non-professional capacity are also excluded.

Pure research institutions are out of scope for the AI Act. The same exclusion applies to businesses developing AI, up until the point they place their AI on the external market or put it into internal service.

Providers of AI are companies that develop and make available AI tools. It doesn’t matter whether the AI system provided is used internally within a company or by external customers.

Deployers of AI systems are companies that use AI within their business’s processes or products. If a deployer of AI makes material changes to the way the provided AI functions, it becomes responsible for meeting provider requirements for those changes.

Your business is geographically in scope for the AI Act if it is established in the EU, or if its AI systems or their outputs are used in the EU.

My Business is In Scope for the AI Act. Now What?

The AI Act’s requirements for businesses and governmental agencies go into effect in batches, with the next batch coming on February 2nd, 2025, covering prohibited AI systems and a deliberately broad AI literacy requirement.

The AI Literacy Requirement Casts a Wide Net

The AI Literacy requirement applies to all businesses providing or deploying AI. Any usage of AI in a professional capacity makes your business a “deployer,” thus requiring AI Literacy training for employees.

The AI Act’s Risk-Based Approach

The risk classifications of the AI Act fall into two main areas, which can be loosely called the “what” risks and the “how” risks. The “what” risks relate to the task or tasks the AI system is set to do, while the “how” risks focus on how the AI system interacts with its end-users.

The “What” Risk Buckets of the EU AI Act

The “what” risk categories are distinguished by how much damage a malfunctioning AI system could do to health, safety, or fundamental rights. The categories are “prohibited,” “high-risk,” and “minimal-risk.”

Prohibited AI systems are deemed incompatible with the health, safety, or fundamental rights of EU residents. Examples include:

  • Social scoring outside of the context in which the data was gathered.
  • Manipulative AI, such as generating fake reviews or creating subliminal messages.
  • Predictive crime AI systems.
  • Remote identification of people except under certain narrow, law-enforcement situations.

High-risk AI systems perform tasks with recognized potential to improve life for EU residents, but which also present risks to health, safety, or fundamental rights. Examples include:

  • AI bank loan decisions.
  • AI decisions on access to health or life insurance.
  • Using AI for employment decisions such as hiring or promotions.
  • AI used in product safety or infrastructure.

The final category based on what the AI system does is minimal-risk: the potential downsides of using the system are small enough that regulating it would stifle innovation without providing meaningful additional protection to EU residents.

The “How” Risk Buckets of the AI Act: Transparency and General-Purpose AI

The “how” risks fall into two groups: transparency risk and systemic risk from general-purpose AI. The transparency risk focuses on whether users could confuse AI-produced outputs for human-generated outputs.

General-purpose AI can solve numerous different tasks without task-specific adaptation. That versatility, however, presents a systemic risk: failures of a single model can propagate across many domains.

What Your Business Needs to Do When: A Summary

Now that you have an overview of the most important definitions and concepts from the EU’s AI Act, here’s a summary of what your business has to do when:

  • February 2nd, 2025: AI Literacy and prohibited AI systems requirements go into effect.
  • August 2nd, 2025: Requirements for general-purpose AI enter into force.
  • August 2nd, 2026: All remaining requirements, including those for high-risk and transparency-risk AI systems, go into effect.
  • February 2nd, 2027: Requirements for high-risk AI systems that are embedded in already regulated products enter into force.
