Strengthening AI Regulations: Ireland’s Call for Action on Deepfake Laws

Ireland’s Push for Stricter AI Regulations on Deepfakes

Ireland has been urged to use its upcoming EU presidency to push for more stringent laws addressing the use of artificial intelligence (AI) to generate and disseminate non-consensual intimate images. The recommendation comes from the AI Advisory Council, which recently published its findings on the alarming proliferation of AI-generated intimate images, including child sexual abuse material (CSAM).

The Context of the Recommendations

The council’s recommendations come in the wake of a recent international scandal involving images created using Elon Musk’s AI chatbot Grok and circulated on his social media platform, X. While the report does not delve into the specifics of this incident, it underscores the urgent need for regulatory measures.

Current Legal Framework

According to the AI Advisory Council, existing Irish law is “sufficiently robust” to tackle the issues of non-consensual sharing of AI-generated intimate images and the creation and distribution of AI-generated CSAM. However, the council advocates for a more coordinated response across the European Union.

The Call for EU-Wide Harmonization

The report emphasizes that the most effective way to combat the rapid advancement of technologies enabling the large-scale production and distribution of intimate images and CSAM is a harmonized EU-wide approach. The EU AI Act currently contains no provisions addressing this kind of misuse, a gap the council says should be closed by amending the Act.

The council specifically recommends that the Irish government utilize its EU Presidency in the latter half of 2026 to collaborate with other member states. This collaboration should focus on amending Article 5 of the AI Act under the Article 112(1) mechanism, aiming to prohibit AI practices that facilitate the generation of non-consensual intimate images and CSAM.

Additional Recommendations

In addition to advocating for legislative changes, the council proposes the establishment of a national taxonomy of online harms. This taxonomy should include a distinct layer for AI-enabled and automated harm, which would help standardize reporting and policy measures across the board.

The report also outlines two further recommendations:

  • Enhancing support for victim reporting and evidence preservation.
  • Launching a public information campaign to raise awareness about these issues.

Conclusion

The AI Advisory Council, established by the government in January 2024, comprises prominent legal and technological experts in the field of AI. Its recommendations mark a critical step toward addressing the ethical and legal challenges posed by AI technologies, particularly in safeguarding individual rights against the misuse of AI-generated content.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...