Understanding the Impact of the EU AI Act on Your Business

When Does the EU AI Act Come Into Force, and What Does This Mean for Your Business?

The EU AI Act entered into force on August 1st, 2024, but the first measures, covering prohibited uses of AI and AI literacy programs for in-scope businesses, take effect on February 2nd, 2025.

In this article, we’ll help you figure out whether and how your business is in scope for the AI Act. By the end, you’ll understand how the AI Act is relevant to your business and what you need to do, and by when, to be compliant. The penalties for non-compliance can be significant, with maximum fines of up to 7% of a business’s global annual revenue or 35 million EUR, whichever is larger.
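
To make the scale of those fines concrete, here is a minimal sketch of the “whichever is larger” calculation, in Python. It is purely illustrative: the function name and the revenue figure are made up, and actual fines depend on the specific infringement and are set by regulators.

    # Illustrative only: for the most serious violations, the AI Act caps
    # fines at the larger of 7% of global annual revenue and 35 million EUR.
    def max_fine_eur(global_annual_revenue_eur: float) -> float:
        return max(0.07 * global_annual_revenue_eur, 35_000_000.0)

    # A hypothetical business with 2 billion EUR in global annual revenue:
    print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR

For a smaller business with, say, 100 million EUR in revenue, 7% would only be 7 million EUR, so the 35 million EUR floor would apply instead.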

What is AI?

Artificial intelligence isn’t one thing; it’s many things. The AI Act takes the same approach, referring to AI as a “family of technologies.”

An AI system is defined as:

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The autonomy characteristic by itself does not distinguish AI from other software: computers performing tasks without human intervention is precisely why businesses have invested in software and hardware for decades. Nevertheless, AI systems are capable of higher degrees of autonomy than regular software, because they “infer” their outputs.

This “inferring” of outputs from inputs is the key characteristic separating AI from normal software. Normal software can also be extremely complicated, but unlike with AI, each of its operations can be broken into explicit steps of input and output, with explicit rules for how to create the output from the input.
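
To illustrate the distinction, here is a minimal sketch in Python, with made-up numbers, contrasting a conventional rule-based function, where every input-to-output rule is written out explicitly, with a system that infers its rule from example data:

    import numpy as np

    # Conventional software: the input-to-output rule is written explicitly
    # by a programmer, step by step.
    def shipping_cost_rules(weight_kg: float) -> float:
        if weight_kg <= 1.0:
            return 4.99
        if weight_kg <= 5.0:
            return 9.99
        return 19.99

    # AI-style system: no rule is written down. Instead, a rule is inferred
    # from example data (here, a least-squares line fit to past shipments).
    past_weights = np.array([0.5, 1.2, 3.0, 4.8, 7.5])
    past_costs = np.array([5.0, 9.5, 10.5, 11.0, 20.0])
    slope, intercept = np.polyfit(past_weights, past_costs, deg=1)

    def shipping_cost_learned(weight_kg: float) -> float:
        return slope * weight_kg + intercept

Both functions map a weight to a cost, but only the first can be audited rule by rule; the second’s behavior depends entirely on the data it was fitted to, which is what the Act’s definition gestures at with “infers.”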

What Businesses Are In Scope for the AI Act?

If your business is neither providing nor using AI in any sense, then you can safely ignore the AI Act. Natural persons (as opposed to legal persons such as corporations) who use AI in a non-professional capacity are also excluded.

Pure research institutions are out of scope for the AI Act. This exclusion also covers businesses developing AI, up until the point they place the AI on the external market or put it into internal service.

Providers of AI are companies that develop and make available AI tools. It doesn’t matter whether the AI system provided is used internally within a company or by external customers.

Deployers of AI systems are companies that use AI within their business’s processes or products. If a deployer of AI makes material changes to the way the provided AI functions, it becomes responsible for meeting provider requirements for those changes.

Your business is geographically in scope for the AI Act if it is established in the EU, or if its AI systems or their outputs are used in the EU.

My Business Is In Scope for the AI Act. Now What?

The AI Act’s requirements for businesses and governmental agencies go into effect in batches, with the next batch, effective February 2nd, 2025, covering prohibited AI systems and broad AI literacy requirements.

The AI Literacy Requirement Casts a Wide Net

The AI Literacy requirement applies to all businesses providing or deploying AI. Any use of AI in a professional capacity makes your business a “deployer,” thus requiring AI literacy training for employees.

The AI Act’s Risk-Based Approach

The risk classifications of the AI Act fall into two main areas, which can be loosely called the “what” risks and the “how” risks. The “what” risks relate to the task or tasks the AI system is set to perform, while the “how” risks focus on how the AI system interacts with its end-users.

The “What” Risk Buckets of the EU AI Act

The “what” risk categories are divided according to how harmful a malfunctioning or misused AI system would be to health, safety, or fundamental rights. The categories are “prohibited,” “high-risk,” and “minimal-risk.”

Prohibited AI systems are deemed incompatible with the health, safety, or fundamental rights of EU residents. Examples include:

  • Social scoring outside of the context in which the data was gathered.
  • Manipulative AI, such as generating fake reviews or creating subliminal messages.
  • AI systems that predict criminal behavior based solely on profiling.
  • Real-time remote biometric identification of people in public spaces, except in certain narrow law-enforcement situations.

High-risk AI systems are those whose tasks have recognized potential to improve life for EU residents, but which also present risks to health, safety, or fundamental rights. Examples include:

  • AI bank loan decisions.
  • AI decisions on access to health or life insurance.
  • Using AI for employment decisions such as hiring or promotions.
  • AI used as a safety component in products or critical infrastructure.

The final category based on what the AI system is doing is minimal-risk: the potential downsides of using such AI systems are small enough that regulating them would stifle innovation without providing meaningful additional protection to EU residents.
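
As a rough mental model of how the three “what” tiers fit together, here is a hypothetical triage sketch in Python. The tier names mirror the Act, but the use-case strings and the lookup logic are entirely made up; classifying a real system requires legal analysis of the Act’s actual provisions.

    # Hypothetical triage helper for the "what" risk tiers. The keyword
    # sets below are illustrative, not a legal classification.
    PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                       "crime prediction", "remote biometric identification"}
    HIGH_RISK_USES = {"loan decisions", "insurance access",
                      "hiring decisions", "safety components"}

    def what_risk_tier(use_case: str) -> str:
        if use_case in PROHIBITED_USES:
            return "prohibited"
        if use_case in HIGH_RISK_USES:
            return "high-risk"
        return "minimal-risk"

    print(what_risk_tier("hiring decisions"))  # high-risk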

The “How” Risk Buckets of the AI Act: Transparency and General-Purpose AI

The “how” risks fall into two groups: transparency risk and systemic risk from general-purpose AI. The transparency risk focuses on whether users could mistake AI-produced outputs for human-generated ones.

General-purpose AI can solve numerous different tasks without requiring task-specific adaptations. However, this breadth presents a systemic risk: failures of a single widely used model can propagate across many domains.

What Your Business Needs to Do When: A Summary

Now that you have an overview of the most important definitions and concepts from the EU’s AI Act, here’s a summary of what your business has to do when:

  • February 2nd, 2025: AI Literacy and prohibited AI systems requirements go into effect.
  • August 2nd, 2025: Requirements for general-purpose AI enter into force.
  • August 2nd, 2026: All remaining requirements, including those for high-risk and transparency-risk AI systems, go into effect.
  • February 2nd, 2027: Requirements for high-risk AI systems that are embedded in already regulated products enter into force.
