Ireland’s Uncertainty on AI Act Enforcement

Introduction

As the deadline approaches for Ireland to tell the European Commission which regulators will be responsible for enforcing the EU AI Act, significant uncertainty looms. Despite the impending requirement, the State has yet to designate these authorities, raising concerns among regulators, citizens, civil society, and businesses alike.

The Urgency of Designating Regulators

In less than three weeks, by July 2, 2025, Ireland is expected to identify the “market surveillance authorities” (MSAs) tasked with enforcing the prohibitions under the AI Act. This designation is critical not only for compliance with EU law but also for ensuring that AI technologies in the country are safely regulated.

Opportunity for Reform

For Ireland, which will soon assume the Presidency of the Council of the European Union, the AI Act represents a pivotal opportunity to move past earlier enforcement scandals over tax, technology, and data. The State is poised to focus on AI during its upcoming EU Presidency, yet its inadequate enforcement record raises concerns about future compliance unless immediate action is taken.

Concerns About AI Regulation

On July 3, the Irish Council for Civil Liberties (ICCL) wrote to the Minister for Enterprise, Tourism and Employment, urging rapid designation of the responsible MSAs. With enforcement responsibilities set to commence on August 2, 2025, the urgency of this matter cannot be overstated. Without designated regulators, it is unclear who will monitor AI systems such as TikTok’s recommender systems, which have been shown to cause significant harm to children.

High-Risk AI Uses

Ireland is also required to appoint regulators for “high-risk” AI applications by August 2. To date, the State has identified only one regulator, the Data Protection Commission, which covers three of the eight high-risk AI categories. For the remaining areas, including education, critical infrastructure, and access to essential services, the Minister’s response has been vague, stating only that “arrangements are to be finalized.”

Legal Obligations and Resources

EU law requires the State to provide these regulators with adequate technical, financial, and human resources, along with the necessary infrastructure, by the designated deadline. Until the regulators are identified, that obligation cannot be fulfilled, jeopardizing the overall effectiveness of AI regulation in Ireland.

Conclusion

The current state of uncertainty regarding the enforcement of AI Act prohibitions in Ireland poses a significant risk to both regulatory compliance and public safety. As the deadline approaches, the need for decisive action and clear communication from the State has never been more critical. Stakeholders across the spectrum—regulators, businesses, and citizens—are looking for clarity and accountability in the realm of AI governance.
