Ireland Publishes Blueprint for National AI Enforcement
The Irish Government has published the General Scheme of the Regulation of Artificial Intelligence Bill 2026, a significant development in the national regulation of AI.
Transformation of the EU AI Act
This legislative blueprint transforms the EU Artificial Intelligence Act into an operational Irish enforcement system capable of imposing penalties of up to 7% of total worldwide annual turnover for the most serious violations. For businesses operating in Ireland as providers, deployers, distributors, or importers of AI systems, the publication provides the first detailed view of the legislative and regulatory architecture that will govern AI compliance.
Legislative Process
The publication of the Scheme is the first step in Ireland’s legislative process. The Scheme will now undergo pre-legislative scrutiny before the Government drafts the formal Bill for introduction to the Oireachtas. The statutory establishment day for the AI Office must occur on or before 1 August 2026, driven by the EU AI Act’s implementation timeline. The period between now and August 2026 represents a window for building compliance capability before enforcement actions commence.
Distributed Enforcement with Central Coordination
Ireland has chosen a distinctive regulatory model that empowers thirteen existing sectoral authorities to supervise AI systems within their domains while establishing a new central body to coordinate the national approach. This distributed model reflects how AI touches virtually every regulated sector. The sectoral regulators already supervising these industries possess the domain expertise necessary to understand how AI systems function within their specific contexts.
Market Surveillance Authorities
The Scheme identifies the market surveillance authorities responsible for different sectors, including:
- The Central Bank of Ireland will supervise AI in regulated financial services.
- Coimisiún na Meán will oversee AI in audiovisual media services.
- The Commission for Regulation of Utilities will handle energy sector applications.
- The Workplace Relations Commission will supervise AI systems used in employment contexts.
- The Data Protection Commission continues its role in protecting fundamental rights related to personal data.
- The HSE will have market surveillance responsibility for certain high-risk AI uses in essential public health services and emergency triage.
Companies operating across multiple sectors may face supervision from several different authorities, each with its own institutional culture and supervisory approach.
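As a rough illustration of that jurisdictional triage, the sketch below maps simplified sector labels to the authorities listed above and collects the authorities a multi-sector business might answer to. The sector keys and helper function are purely illustrative; the definitive allocation is the one set out in the Scheme.

```python
# Illustrative lookup based on the sectoral allocations listed above.
# The sector labels are simplified; the Scheme contains the authoritative mapping.
MARKET_SURVEILLANCE_AUTHORITIES = {
    "financial_services": "Central Bank of Ireland",
    "audiovisual_media": "Coimisiún na Meán",
    "energy": "Commission for Regulation of Utilities",
    "employment": "Workplace Relations Commission",
    "personal_data": "Data Protection Commission",
    "public_health_and_triage": "HSE",
}

def supervising_authorities(sectors: list[str]) -> set[str]:
    """A company active in several sectors may answer to several authorities."""
    return {
        MARKET_SURVEILLANCE_AUTHORITIES[s]
        for s in sectors
        if s in MARKET_SURVEILLANCE_AUTHORITIES
    }

print(supervising_authorities(["financial_services", "employment"]))
# e.g. {'Central Bank of Ireland', 'Workplace Relations Commission'}
```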
Oifig Intleachta Shaorga na hÉireann – The AI Office of Ireland
To prevent fragmentation and ensure consistency across this distributed landscape, the Scheme proposes establishing a new statutory body called the AI Office of Ireland. This body corporate will have independent statutory powers and will be governed by a Chief Executive Officer and a seven-member board appointed by the Minister for Enterprise, Tourism, and Employment.
Statutory Functions
The Office will serve as Ireland’s Single Point of Contact under Article 70(2) of the EU AI Act, becoming the primary interface between Irish-based businesses and the European Commission on AI regulatory matters. Its statutory functions include:
- Facilitating consistent enforcement across the thirteen sectoral authorities.
- Maintaining a centralized pool of technical experts for assessing complex AI systems.
- Compiling and sharing data on AI incidents and compliance issues.
- Representing Ireland at EU AI Board meetings.
The Office will also establish and operate a national AI Regulatory Sandbox, providing businesses, particularly SMEs and startups, with a controlled environment to test innovative AI systems under regulatory supervision before full market deployment.
Enforcement Powers and Classification Challenges
The enforcement toolkit provided to market surveillance authorities is extensive, mirroring the powers set out in the EU's Market Surveillance Regulation. Authorities can:
- Require documentation relevant to demonstrating conformity with the AI Act.
- Conduct announced and unannounced on-site inspections.
- Obtain product samples through “cover identity” operations.
- Test AI systems and require access to embedded software.
For online distribution, authorities can require content removal or restriction of access where AI systems present risks or violate regulatory requirements. Most concerning for technology providers is the power of the market surveillance authorities to require access to source code, although it remains unclear whether this includes model parameters, weights, and system prompts.
The Sanctions Regime
The administrative sanctions regime proposed in Part 5 of the Scheme creates financial exposure at a scale that places AI compliance in the same risk tier as GDPR enforcement. For prohibited AI practices under Article 5 of the EU AI Act, the maximum administrative fine reaches either 35 million euros or 7% of total worldwide annual turnover, whichever sum is higher.
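To illustrate the scale of that exposure, the short sketch below computes the fine ceiling for a prohibited-practice violation as the higher of the two maxima stated in the Scheme. The function name and example turnover figure are illustrative only.

```python
def article_5_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for a prohibited-practice fine under Article 5:
    the higher of EUR 35 million or 7% of total worldwide annual turnover."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_CAP_RATE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_CAP_RATE * worldwide_annual_turnover_eur)

# Example: a group with EUR 2 billion in worldwide annual turnover faces a
# ceiling of EUR 140 million, well above the EUR 35 million fixed figure.
print(f"{article_5_fine_ceiling(2_000_000_000):,.0f}")  # 140,000,000
```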
The sanctions process provides substantial procedural safeguards. Enforcement proceedings begin with a notice of suspected non-compliance, followed by a notice period for written representations. Matters proceeding to formal adjudication are heard by independent adjudicators nominated by the AI Office and appointed by the Minister. Administrative sanctions do not take effect until confirmed by the High Court.
Practical Implications for Business
The distributed regulatory model means the first compliance question is jurisdictional, requiring businesses to understand which sectoral authority will supervise their AI use case. The power for authorities to challenge risk classifications makes defensible documentation essential. Companies should maintain clear records demonstrating why their AI systems do or do not fall within high-risk categories under Annex III of the EU AI Act.
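One way to keep such records defensible is to capture each classification decision in a structured, dated form. The sketch below is a hypothetical record layout; the field names are not drawn from the Scheme or the EU AI Act and would need to be adapted to a company's own documentation practice.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskClassificationRecord:
    """Hypothetical structure for recording why an AI system does or does
    not fall within a high-risk category under Annex III of the EU AI Act."""
    system_name: str
    intended_purpose: str
    annex_iii_category: str | None      # e.g. "employment", or None if out of scope
    classified_high_risk: bool
    rationale: str                      # reasoning behind the classification
    supervising_authority: str          # e.g. "Workplace Relations Commission"
    assessed_on: date
    reviewers: list[str] = field(default_factory=list)

record = RiskClassificationRecord(
    system_name="CV screening assistant",
    intended_purpose="Rank job applications for recruiter review",
    annex_iii_category="employment",
    classified_high_risk=True,
    rationale="Filters and ranks applications, within the Annex III employment category.",
    supervising_authority="Workplace Relations Commission",
    assessed_on=date(2026, 3, 1),
    reviewers=["Legal", "Data Protection Officer"],
)
```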
Post-market monitoring requirements and serious incident reporting obligations create ongoing compliance responsibilities that extend well beyond initial system deployment. Early investment in building compliance capability before enforcement actions commence is considerably more cost-effective than reactive responses following regulatory intervention.