Ireland’s New AI Regulatory Framework: Key Competent Authorities Designated

Ireland Appoints Its AI Act Competent Authorities

On 4 March 2025, the Irish government approved a recommendation from the Minister for Enterprise, Tourism and Employment, Peter Burke, to implement a distributed regulatory model for the enforcement of the EU Artificial Intelligence (AI) Act. This decision marks a significant step in the development of Ireland’s AI governance framework and aligns with the country’s ambition to position itself as a key center of expertise for digital and data regulation within the EU.

Key Regulatory Designations Under Article 70

The government has designated eight public bodies as national competent authorities under Article 70 of the AI Act. Article 70 requires each EU Member State to establish or designate at least one notifying authority and one market surveillance authority for AI oversight. These competent authorities will be responsible for supervising the implementation of the AI Act within their respective sectors:

  • Central Bank of Ireland (CBI) – Oversight of AI applications in financial services, including algorithmic trading and credit risk assessments.
  • Commission for Communications Regulation (ComReg) – Regulation of AI-driven telecommunications infrastructure and digital communications.
  • Commission for Railway Regulation (CRR) – Supervision of AI use in railway systems, ensuring safety and compliance.
  • Competition and Consumer Protection Commission (CCPC) – Ensuring fair competition and consumer rights in AI-driven markets.
  • Data Protection Commission (DPC) – Overseeing AI applications involving personal data, in alignment with GDPR obligations.
  • Health and Safety Authority (HSA) – Regulating AI applications related to occupational health and workplace safety.
  • Health Products Regulatory Authority (HPRA) – Supervision of AI use in medical devices and pharmaceuticals.
  • Marine Survey Office (Department of Transport) – Oversight of AI applications in maritime transport and safety.

These authorities will be responsible for enforcing AI Act provisions applicable to their respective sectors, ensuring compliance with AI risk classifications, and coordinating with other regulatory bodies to address cross-sectoral challenges. A lead regulator will be designated at a later stage to coordinate enforcement and oversee centralized functions, ensuring coherence in Ireland’s AI regulatory approach.

How the Article 70 Designations Differ from Article 77 Fundamental Rights Authorities

This designation process under Article 70 is distinct from Ireland’s previous appointment of national public authorities under Article 77 of the AI Act, which focuses specifically on the protection of fundamental rights. In November 2024, the government designated nine authorities under Article 77 to supervise and enforce AI’s impact on fundamental rights, including non-discrimination, electoral integrity, and consumer protection.

These authorities, such as the Data Protection Commission (DPC), Coimisiún na Meán, and the Irish Human Rights and Equality Commission (IHREC), have specific powers to access AI documentation, conduct technical assessments, and request compliance testing of high-risk AI systems where fundamental rights concerns arise.

In contrast, the Article 70 authorities are tasked with the broader implementation and enforcement of the AI Act, ensuring compliance across all regulated AI systems, including market surveillance and technical conformity assessments.

Regulatory Obligations and Next Steps

Under Article 70, Member States must:

  • Communicate the identity of their notifying and market surveillance authorities to the European Commission by 2 August 2025.
  • Designate a market surveillance authority to act as the single point of contact for the AI Act, providing centralized coordination of implementation.
  • Ensure that authorities have adequate technical, financial, and human resources, including expertise in AI technologies, data protection, cybersecurity, and fundamental rights.
  • Provide publicly available contact details for competent authorities to facilitate communication with AI providers, deployers, and stakeholders.
  • Report to the European Commission every two years on the adequacy of financial and human resources allocated to national competent authorities.

Additionally, these authorities may provide guidance and advice to businesses, particularly SMEs and startups, to support compliance with the AI Act. This will be essential in ensuring that Irish businesses understand their obligations under the new framework, particularly for high-risk AI applications.

Legal and Compliance Implications for Businesses

For businesses operating AI systems in Ireland, the appointment of these authorities marks the beginning of an active regulatory enforcement phase under the AI Act. Companies developing or deploying AI in regulated sectors should immediately assess their compliance obligations, particularly those whose AI systems are classified as high-risk under the Act.

Key steps for businesses include:

  • Mapping AI Deployments – Identifying whether their AI systems fall under high-risk categories (e.g., employment, financial services, healthcare, law enforcement); see the illustrative sketch after this list.
  • Engaging with Competent Authorities – Understanding sector-specific regulatory expectations and guidance provided by the relevant authority.
  • Implementing Compliance Measures – Ensuring technical and procedural compliance with AI Act requirements, including risk assessments, data governance, and transparency obligations.
  • Preparing for Market Surveillance – Ensuring AI systems are well-documented and capable of meeting scrutiny from designated regulatory bodies.
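
As a starting point for the mapping step, some organizations maintain a simple internal inventory that records, for each AI system, a candidate risk flag and the Irish authorities it is most likely to engage with. The sketch below is purely illustrative: the `IRISH_SECTOR_AUTHORITIES` mapping, the `HIGH_RISK_AREAS` set, and the `AISystem` class are assumptions for demonstration, not legal classifications under the AI Act, and any real assessment should follow the authorities’ published remits and guidance.

```python
from dataclasses import dataclass

# Illustrative only: simplified sector-to-authority mapping based on the
# designations listed earlier in this article; actual supervisory scope will
# depend on the authorities' published remits and later guidance.
IRISH_SECTOR_AUTHORITIES = {
    "financial_services": "Central Bank of Ireland (CBI)",
    "telecommunications": "Commission for Communications Regulation (ComReg)",
    "rail": "Commission for Railway Regulation (CRR)",
    "consumer_markets": "Competition and Consumer Protection Commission (CCPC)",
    "personal_data": "Data Protection Commission (DPC)",
    "workplace_safety": "Health and Safety Authority (HSA)",
    "medical_devices": "Health Products Regulatory Authority (HPRA)",
    "maritime": "Marine Survey Office (Department of Transport)",
}

# Assumed, non-exhaustive set of high-risk use-case areas; a real
# classification requires legal analysis against the AI Act's annexes.
HIGH_RISK_AREAS = {"employment", "credit_scoring", "healthcare", "law_enforcement"}

@dataclass
class AISystem:
    name: str
    sector: str            # key into IRISH_SECTOR_AUTHORITIES
    use_case_area: str     # e.g. "credit_scoring", "marketing"
    processes_personal_data: bool = False

    def candidate_risk_flag(self) -> str:
        """Rough internal screening flag, not a legal determination."""
        if self.use_case_area in HIGH_RISK_AREAS:
            return "review-as-high-risk"
        return "screen-further"

    def likely_authorities(self) -> list[str]:
        """Authorities a deployer might engage with first (illustrative)."""
        authorities = [IRISH_SECTOR_AUTHORITIES.get(self.sector, "to be confirmed")]
        if self.processes_personal_data:
            authorities.append(IRISH_SECTOR_AUTHORITIES["personal_data"])
        return authorities

# Example: a credit-scoring model deployed by an Irish financial institution.
model = AISystem("credit-scoring-model", sector="financial_services",
                 use_case_area="credit_scoring", processes_personal_data=True)
print(model.candidate_risk_flag())   # review-as-high-risk
print(model.likely_authorities())    # ['Central Bank of Ireland (CBI)', 'Data Protection Commission (DPC)']
```

In practice, such an inventory would be kept in sync with the publicly available contact details that competent authorities must provide under Article 70, so that each entry points to a real supervisory contact rather than a hard-coded string.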

Ireland’s Approach to AI Governance

Ireland’s decision to adopt a distributed model for AI regulation reflects its sectoral regulatory landscape and aims to provide businesses with a clear and familiar compliance pathway. The government has positioned this model as one that will enable effective oversight while fostering innovation. Minister Burke emphasized that Ireland’s approach will ensure AI adoption occurs in a manner that is trustworthy, safe, and aligned with economic growth objectives.

With further designations, including the appointment of a lead regulator, expected in the coming months, businesses operating AI systems in Ireland should stay closely engaged with regulatory developments to ensure compliance with the evolving framework.
