AI Regulation: Paving the Way for Trustworthy Technology

The EU AI Act

The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework addressing the risks of artificial intelligence (AI), and it positions Europe as a leading player on the global stage.

The legislation establishes clear requirements and obligations for developers and deployers of AI with respect to specific uses of the technology. At the same time, it aims to reduce the administrative and financial burden on businesses, particularly small and medium-sized enterprises (SMEs).

Part of a broader set of measures to promote trustworthy AI, the legislation also encompasses initiatives supporting innovation in the AI sector. Collectively, these measures ensure the safety and fundamental rights of individuals and businesses regarding AI, while also enhancing adoption, investment, and innovation across the EU.

Why Do We Need AI Rules?

The AI legislation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited or no risk and can help address numerous societal challenges, some applications create risks that must be managed to avoid undesirable outcomes.

For instance, the decision-making process of an AI system can often be opaque, making it hard to assess whether an individual has been unfairly disadvantaged in situations like hiring or public service applications.

Existing legislation offers some protection but is insufficient to tackle the specific challenges posed by AI systems.

Key Objectives of the New Rules:

  • Address specific risks created by AI applications.
  • Prohibit AI practices that present unacceptable risks.
  • Establish a list of high-risk applications.
  • Set clear requirements for high-risk AI systems.
  • Define specific obligations for deployers and providers of high-risk AI applications.
  • Require conformity assessments before an AI system is put into service or placed on the market.
  • Provide for enforcement after an AI system is placed on the market.
  • Establish a governance structure at the European and national levels.

A Risk-Based Approach

The regulatory framework defines four levels of risk for AI systems: unacceptable, high, limited, and minimal or no risk.

Unacceptable Risk

All AI systems considered a clear threat to the safety, livelihoods, and rights of individuals are prohibited. This includes social scoring by governments and toys using voice assistance that encourages dangerous behavior.

High Risk

AI systems categorized as high risk include technology used in:

  • Critical infrastructures (e.g., transportation) that could endanger citizens’ lives and health.
  • Educational and vocational training, determining access to education and career paths (e.g., exam grading).
  • Safety components of products (e.g., AI applications in robotic surgery).
  • Employment and worker management (e.g., resume screening software).
  • Essential public and private services (e.g., credit scoring preventing citizens from obtaining loans).
  • Law enforcement uses that may interfere with fundamental rights (e.g., evaluating the reliability of evidence).
  • Migration management (e.g., automated visa application assessments).
  • Judicial administration and democratic processes (e.g., AI solutions for searching court rulings).

High-risk AI systems are subject to strict obligations before they can be placed on the market:

  • Adequate risk assessment and mitigation systems.
  • High-quality datasets to minimize risks and discriminatory outcomes.
  • Activity logging to ensure traceability of results (see the sketch after this list).
  • Detailed documentation providing the information needed to assess compliance.
  • Clear and adequate information provided to the deployer.
  • Appropriate human oversight measures to minimize risks.
  • High levels of robustness, security, and accuracy.
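
As an illustration of the logging obligation above, the following sketch shows one way a provider might record each automated decision for later traceability. The schema, field names, and the audit.log path are hypothetical choices made for this example, not requirements drawn from the legislation.

```python
import json
import time
import uuid

def log_decision(model_id: str, model_version: str, inputs: dict, output,
                 path: str = "audit.log") -> str:
    """Append one traceable record per automated decision (hypothetical schema)."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique ID so an individual decision can be audited later
        "timestamp": time.time(),        # when the decision was made
        "model_id": model_id,
        "model_version": model_version,  # ties the result to a specific model build
        "inputs": inputs,                # in practice, avoid logging sensitive personal data directly
        "output": output,                # what the system decided
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

# Example: a resume-screening system records the basis of a score it produced.
event_id = log_decision("resume-screener", "2.1.0", {"applicant_id": "A-123"}, {"score": 0.73})
```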

All remote biometric identification systems are considered high risk and are subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited, with narrowly defined exceptions.

Limited Risk

Limited risk refers to the risks associated with a lack of transparency in AI use. The legislation introduces specific transparency obligations to ensure that individuals are informed when necessary, fostering trust. For example, when interacting with AI systems such as chatbots, individuals must be informed they are engaging with a machine, allowing them to make informed decisions. Additionally, AI-generated content must be identifiable, and AI-generated texts published to inform the public must be labeled as artificially generated.
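
As a minimal sketch of how a deployer might meet the chatbot disclosure obligation, the example below wraps each reply with a disclosure notice and a machine-readable label. The generate_reply function, the disclosure wording, and the ai_generated field are all hypothetical, not terms taken from the legislation.

```python
DISCLOSURE = "You are chatting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    # Hypothetical stand-in for a real chatbot backend.
    return f"Echo: {user_message}"

def chat_turn(user_message: str, first_turn: bool) -> dict:
    """Attach transparency signals to each chatbot reply."""
    return {
        "disclosure": DISCLOSURE if first_turn else None,  # tell users they are talking to a machine
        "reply": generate_reply(user_message),
        "ai_generated": True,  # machine-readable label marking the content as AI-generated
    }

print(chat_turn("Hello!", first_turn=True))
```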

Minimal or No Risk

The AI legislation permits the free use of AI with minimal risk, including applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently in use within the EU fall into this category.
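
To make the tiered structure concrete, here is a minimal sketch of how a compliance team might encode the four risk levels and their headline consequences. The tier names follow the legislation; the mapping and its wording are illustrative simplifications, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels described above."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., government social scoring)
    HIGH = "high"                  # permitted only under strict obligations
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # free use (e.g., spam filters, AI-enabled games)

# Hypothetical mapping from tier to its headline regulatory consequence.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "conformity assessment before market placement, plus ongoing monitoring",
    RiskTier.LIMITED: "users must be informed that AI is involved",
    RiskTier.MINIMAL: "no additional obligations under the legislation",
}

for tier in RiskTier:
    print(f"{tier.value}: {CONSEQUENCES[tier]}")
```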

Operational Framework for High-Risk AI System Providers

Once an AI system is on the market, authorities are responsible for market surveillance, deployers ensure human oversight and monitoring, and providers maintain a post-market monitoring system. Providers and deployers are also required to report serious incidents and malfunctions.

A Solution for Reliable Use of Large AI Models

General-purpose AI models are increasingly used as components of AI systems; they can perform a wide range of tasks and adapt to countless applications.

While general-purpose AI models can enable better and more powerful AI solutions, overseeing all of their capabilities is challenging. The legislation therefore introduces transparency obligations for all general-purpose AI models, to improve understanding of how they work, and additional risk-management requirements for the most capable and impactful models.

A Forward-Looking Legislation

Given the rapidly evolving nature of AI technology, the regulation adopts a future-proof approach, allowing rules to adapt to technological advancements. AI applications should remain reliable even post-market, necessitating ongoing quality and risk management by providers.

Application and Implementation

The European AI Office, established in February 2024 within the Commission, oversees the application and implementation of AI legislation alongside member states. Its goal is to create an environment where AI technologies uphold human dignity, rights, and trust while fostering collaboration, innovation, and AI research among stakeholders. Additionally, it engages in international dialogue on AI governance, recognizing the need for global alignment.

Next Steps

The AI legislation entered into force on 1 August 2024 and will be fully applicable two years later, with a few exceptions: prohibitions take effect after six months, governance rules and obligations for general-purpose AI models apply after 12 months, and rules for AI systems embedded in regulated products apply after 36 months. To ease the transition to the new regulatory framework, the Commission launched the voluntary AI Pact, inviting AI developers in Europe and beyond to comply with the key obligations of the legislation ahead of the legal deadlines.
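
Assuming the 1 August 2024 entry into force, the offsets above translate into roughly the following calendar milestones. This is a back-of-the-envelope sketch; the regulation's own transitional provisions fix the exact application dates.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the legislation took effect on 1 August 2024

# Offsets, in months, from entry into force (taken from the timeline above).
MILESTONES = {
    "prohibitions apply": 6,
    "general-purpose AI rules apply": 12,
    "fully applicable": 24,
    "rules for AI in regulated products apply": 36,
}

for label, months in MILESTONES.items():
    # Approximate dates only; the Act's transitional provisions are authoritative.
    print(f"{label}: ~{ENTRY_INTO_FORCE + relativedelta(months=months)}")
```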
