Understanding the Implications of the AI Act for Businesses

Risk & Governance Legal Newsletter (February 2025)

On July 12, 2024, the EU officially published Regulation (EU) 2024/1689, known as the Artificial Intelligence Act (the “AI Act”). This regulation marks the world’s first comprehensive legal framework governing AI.

Discussions on the AI Act had been ongoing since the European Commission initially proposed the legislation on April 21, 2021. The rapid advancement of generative AI and other technologies necessitated continuous updates to the proposal. Ultimately, the AI Act was formally adopted after receiving approval from the Council of the EU on May 21, 2024.

The AI Act entered into force on August 1, 2024, with most of its provisions applying from August 2, 2026. Certain provisions, however, apply on earlier or later dates. The AI Act applies not only to companies within the EU but also broadly to businesses providing AI systems and related services within the EU. As a result, businesses outside the EU cannot overlook the implications of this regulation.

1. Step 1: Identification of AI System

The AI Act primarily regulates AI Systems and general-purpose AI models. Their respective definitions are as follows:

AI System (Article 3(1)): A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

General-purpose AI model (Article 3(63)): An AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, typically trained with a large amount of data using self-supervision at scale. AI models used for research, development, or prototyping activities before they are placed on the market are excluded.

2. Step 2: Identification of Regulated Persons

The AI Act applies to various stakeholders involved with AI systems. The primary regulated parties are:

  • Providers: Entities placing AI systems on the market or putting them into service in the EU, regardless of their location.
  • Deployers: Entities using AI systems within the EU.

Businesses must identify which category of regulated parties they fall under and verify where the AI systems are being developed, placed on the market, or used.

3. Step 3: Exemption Eligibility

The AI Act provides exemption provisions for certain AI systems. If a system qualifies for an exemption, it will not be subject to regulation. Exemptions primarily include:

  • AI systems used exclusively for military, defense, or national security purposes.
  • AI systems developed and put into service for the sole purpose of scientific research and development.
  • Deployers who are natural persons using AI systems in a personal non-professional capacity.
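As a rough illustration only, the exemption screening in this step could be expressed as a short checklist function. The class and field names below are hypothetical paraphrases of the grounds listed above, not terms from the Regulation, and a real assessment would of course require legal analysis.

```python
from dataclasses import dataclass

# Hypothetical profile of a system being screened; field names are
# illustrative paraphrases of the exemption grounds, not AI Act terminology.
@dataclass
class AISystemProfile:
    military_or_national_security_only: bool   # exclusively military/defence/national security use
    scientific_research_only: bool             # sole purpose of scientific R&D
    personal_non_professional_use: bool        # natural person, personal non-professional capacity

def is_exempt(profile: AISystemProfile) -> bool:
    """Rough screening against the exemption grounds summarised above."""
    return (
        profile.military_or_national_security_only
        or profile.scientific_research_only
        or profile.personal_non_professional_use
    )
```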

4. Step 4: Categorization of AI Systems

The AI Act adopts a risk-based approach, categorizing AI systems based on the risks they pose:

  • Prohibited AI Practice: Practices posing an unacceptable risk to individuals' safety or fundamental rights, which are banned outright.
  • High-risk AI System: Systems permitted on the market only if they meet extensive compliance obligations.
  • AI System with Limited Risk: Systems subject to transparency obligations.
  • Other AI Systems: Not subject to regulation under the AI Act.

5. Step 5: Identification of Applicable Regulations

Operators need to take different approaches based on the categories classified in Step 4:

  • Prohibited AI Practice: Banned from being placed on the market, subject to penalties.
  • High-risk AI System: Must comply with extensive obligations, including risk management, data governance, technical documentation, and human oversight.
  • AI System with Limited Risk: Subject to transparency obligations.
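To illustrate how the outcome of Step 4 drives Step 5, the mapping from risk category to obligations can be sketched as a simple lookup. The enum values and obligation strings below are illustrative simplifications for compliance triage, not the Regulation's own terminology or an exhaustive list.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative summary mapping, paraphrasing Steps 4 and 5 above; not exhaustive.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["Must not be placed on the EU market; penalties apply"],
    RiskTier.HIGH: ["Risk management system", "Data governance"],
    RiskTier.LIMITED: ["Transparency obligations"],
    RiskTier.MINIMAL: ["No specific obligations under the AI Act"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the summarised obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```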

6. Step 6: Preparations for Implementation

The AI Act came into effect on August 1, 2024, with its provisions taking effect in stages. Key dates include:

  • Chapters I and II (including prohibited AI practices): February 2, 2025.
  • General-purpose AI models: August 2, 2025.
  • Most remaining provisions: August 2, 2026.
  • High-risk AI systems covered by Article 6(1) (products under Annex I): August 2, 2027.
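For compliance planning, the staggered application dates set out in Article 113 of the AI Act can be kept in a simple date lookup that answers which provisions apply as of a given day. The label strings below are illustrative shorthand, not the Regulation's wording.

```python
from datetime import date

# Staggered application dates under Article 113 of the AI Act.
# Labels are illustrative shorthand for this sketch.
APPLICATION_DATES = {
    "Chapters I and II (incl. prohibited practices)": date(2025, 2, 2),
    "General-purpose AI model obligations": date(2025, 8, 2),
    "Most remaining provisions": date(2026, 8, 2),
    "High-risk systems under Article 6(1)": date(2027, 8, 2),
}

def provisions_in_force(as_of: date) -> list[str]:
    """Return the labels of provisions already applicable on the given date."""
    return [name for name, d in APPLICATION_DATES.items() if as_of >= d]
```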

In conclusion, the AI Act establishes a framework intended to ensure that AI technologies are developed and used safely and responsibly within the EU, while also impacting businesses worldwide. It is crucial for companies to understand their obligations under this new regulation to ensure compliance and safeguard their operations.
