Ireland’s Push for Stricter AI Regulations on Deepfakes
Ireland has been urged to leverage its upcoming EU presidency to advocate for more stringent laws addressing the use of artificial intelligence (AI) in generating and disseminating non-consensual intimate images. This recommendation comes from the AI Advisory Council, which has recently published its findings regarding the alarming proliferation of AI-generated intimate images, including child sexual abuse material (CSAM).
The Context of the Recommendations
The council’s recommendations come in the wake of a recent international scandal involving intimate images created with Elon Musk’s AI chatbot Grok and circulated on X, the social media platform he owns. While the report does not address the specifics of that incident, it underscores the urgent need for regulatory measures.
Current Legal Framework
According to the AI Advisory Council, existing Irish law is “sufficiently robust” to tackle the issues of non-consensual sharing of AI-generated intimate images and the creation and distribution of AI-generated CSAM. However, the council advocates for a more coordinated response across the European Union.
The Call for EU-Wide Harmonization
The report emphasizes that a harmonized EU-wide approach is the most effective way to counter technologies that enable the large-scale production and distribution of non-consensual intimate images and CSAM. The EU AI Act currently contains no provisions addressing this form of misuse, a gap the council says should be closed by amending the Act.
The council specifically recommends that the Irish government use its EU Presidency in the second half of 2026 to work with other member states on amending Article 5 of the AI Act via the review mechanism in Article 112(1), with the aim of prohibiting AI practices that facilitate the generation of non-consensual intimate images and CSAM.
Additional Recommendations
Beyond legislative change, the council proposes establishing a national taxonomy of online harms with a distinct layer for AI-enabled and automated harm, which would help standardize reporting and policy measures across agencies.
The report also outlines two further recommendations:
- Enhancing support for victim reporting and evidence preservation.
- Launching a public information campaign to raise awareness about these issues.
Conclusion
The AI Advisory Council, established by the government in January 2024, comprises prominent experts in law and technology with a focus on AI. Its recommendations mark a significant step toward addressing the ethical and legal challenges posed by AI, particularly in protecting individuals against the misuse of AI-generated content.