Understanding the Role of Authorised Representatives Under the EU AI Act

The duty for market participants established outside the EU to designate an official “representative” on EU territory is a cornerstone of EU digital and product safety regulation. The requirement appears across multiple frameworks, including the GDPR, the Digital Services Act, the NIS2 Directive, the Data Governance Act, and the Medical Devices Regulation, and it obliges non-EU businesses operating in the EU to establish a readily accessible point of contact for European stakeholders.

Following this established pattern, the European Artificial Intelligence Act (AI Act) likewise requires certain operators based outside the EU to appoint an ‘authorised representative.’

Operators Concerned: Providers of High-Risk AI Systems and General-Purpose AI Models

Under the AI Act, two types of operators based outside the EU must appoint “authorised representatives”: providers of high-risk AI systems and providers of general-purpose AI models (GPAI models). The first category covers all entities that develop and place on the EU market AI systems that are either:

  1. Part of a product covered by certain EU legislation, such as the Machinery Directive or the Medical Devices Regulation (the complete list of covered legislation can be found in the Act’s Annex I).
  2. Intended to be used in one of eight designated “high-risk” areas, such as “education and vocational training” or “employment and workers’ management” (the complete list of covered areas can be found in the Act’s Annex III).

GPAI models are defined as AI models that are “trained with a large amount of data using self-supervision at scale, that display significant generality and are capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream systems or applications” (Article 3(63)). Large Language Models are prime examples of GPAI models. However, providers that release a GPAI model under a free and open-source licence are exempt from the obligation to appoint a representative, unless the model presents a systemic risk (Article 54(6)).

The Set of Duties

Both types of AI providers are subject to a largely identical set of duties (Article 22 and Article 54, respectively). Centrally, they must appoint, by written mandate, an authorised representative established in the Union before making their system or model available on the EU market. This mandate must empower the representative to perform (at least) the following four tasks:

  1. Verify that the provider has drawn up the necessary technical documentation. According to Article 11, before placing a high-risk system on the market, providers must prepare technical documentation demonstrating compliance, covering, among other things, the system’s training data, architecture, and testing procedures (Article 11, Annex IV). Providers of GPAI models must likewise prepare technical documentation (Article 53(1)(a), Annex XI).
  2. Keep at the disposal of the competent authorities the provider’s contact details, the technical documentation, a copy of the EU declaration of conformity (only for high-risk systems), and, if applicable, an official certificate. These documents must be available for at least 10 years.
  3. Provide competent authorities, upon a reasoned request, with all the information and documentation necessary to demonstrate the system’s or model’s conformity with the AI Act’s requirements, including the technical documentation and, where applicable, automatically generated logs (Article 22(3)(c); Article 54(3)(c) for GPAI models).
  4. Cooperate with competent authorities in any action they take in relation to the AI system or GPAI model. For GPAI model providers, this duty extends to situations where authorities seek to take action against downstream AI systems integrated with the GPAI model.

In the case of high-risk systems, the representative must also ensure that the system is registered in the respective EU database pursuant to Article 49(1).

Moreover, the mandate must empower the representative to be addressed by the competent authorities on all issues relating to the Act’s enforcement. Upon request, the representative must be able to provide a copy of the mandate in one of the EU’s official languages, as indicated by the requesting authority. If the representative considers that the provider is acting contrary to its obligations under the AI Act, it must terminate the mandate and immediately inform the relevant market surveillance authority.

Sanctions in the Case of Non-Compliance

The failure to appoint a representative constitutes “formal non-compliance” under Article 83(1). If left unaddressed, it can lead competent authorities to restrict or prohibit the making available of the AI system concerned (Article 83(1)) or GPAI model (Article 93(1)). It may also trigger administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher (Article 99(4)(b)). Notably, representatives themselves may also be subject to fines, as they fall under the term “operator” pursuant to Article 3(8).
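To make the fine ceiling concrete: Article 99(4) caps the fine at whichever is higher, the flat EUR 15 million amount or 3% of worldwide annual turnover. A minimal sketch of that calculation in Python (the function name and the example turnover figure are illustrative assumptions, not taken from the Act):

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        # Article 99(4): the fine ceiling is the higher of a flat
        # EUR 15 million and 3% of total worldwide annual turnover.
        FLAT_CAP_EUR = 15_000_000
        TURNOVER_SHARE = 0.03
        return max(FLAT_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

    # Hypothetical provider with EUR 1 billion in worldwide annual turnover:
    # 3% of turnover (EUR 30 million) exceeds the flat cap, so it applies.
    print(max_fine_eur(1_000_000_000))  # 30000000.0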

Summary and Advice

The obligation to appoint an authorised representative under the AI Act represents a significant compliance requirement for non-EU providers of high-risk AI systems and GPAI models. With severe penalties for non-compliance and the Act’s broad territorial scope, companies providing AI systems or models that may affect EU users should assess their obligations and establish proper representation. The obligation applies to GPAI model providers from 2 August 2025 and to providers of high-risk AI systems from 2 August 2026 (2 August 2027 for systems covered by Annex I legislation).
