Regulating AI-Generated Explicit Images of Children in Maine
Child sexual abuse material has been illegal under both state and federal law for decades. Advances in artificial intelligence, however, have opened loopholes that Maine lawmakers are now moving to close.
Recent Case Highlighting the Issue
In the summer of 2025, a man in Maine used AI to manipulate pictures of children playing soccer into sexually explicit images. Although law enforcement identified him, existing Maine law left them unable to charge him with a crime, as reported by the Bangor Daily News.
Proposed Legislative Changes
To close that loophole, a proposal known as LD 524, sponsored by Rep. Amy Kuhn (D-Falmouth), would amend the criminal code, expanding the definition of disseminating child sexual abuse material to include materials created or modified through generative AI or machine learning.
Rep. Kuhn emphasized the urgency of updating laws, stating, “It’s not a hypothetical anymore. This technology has created new ways to victimize children, and our laws haven’t been updated to adequately protect against that.”
Background on Legislative Efforts
Last year, lawmakers passed a narrower measure, LD 1944, which added morphed images as a form of harassment under the state's "revenge porn" law. The more comprehensive proposal is now gaining bipartisan support: the Judiciary Committee unanimously advanced the bill during the first week of the legislative session, and the Criminal Justice and Public Safety Committee concluded that the changes would have only a moderate impact on Maine's criminal justice system.
Statewide and National Context
As of now, 28 states have banned the creation of AI-generated child sexual abuse material, including neighboring New Hampshire, where such actions became a Class B felony on January 1, 2025.
Despite a federal executive order from President Donald Trump aimed at preventing states from enacting their own AI regulations, states continue to take action. The order marks a significant departure from the typical federalist approach in the U.S. and will likely face legal challenges.
Law Enforcement Perspectives
Maine State Police Lt. Jason Richards, who has spent more than 21 years investigating child sexual abuse, said AI has dramatically expanded the ways children can be victimized. "AI has been used to exploit children in ways never seen before," he said. Richards also noted how difficult it is to remove explicit images once they reach the internet: "You can't really put the genie back in the bottle."
Statistics on AI-Generated Abuse Material
In 2024, the National Center for Missing and Exploited Children received 67,000 reports of AI-generated child sexual abuse material. That number skyrocketed to 485,000 reports in the first half of 2025 alone, a 624% increase.
Controversial Technologies
The AI chatbot Grok, developed by Elon Musk's company xAI, has faced backlash for generating sexualized deepfakes of both adults and children. Despite the controversy, U.S. officials continue to endorse its use: Defense Secretary Pete Hegseth recently announced that Grok will operate within Pentagon networks.
Support and Opposition to the Bill
The proposed bill has garnered support from various organizations, including the Maine Chiefs of Police Association and the Maine Coalition Against Sexual Assault. However, the Maine Association of Criminal Defense Lawyers opposes the bill, arguing that its broad definition of "child sexual abuse material" risks violating First Amendment protections. The group cites the U.S. Supreme Court's 2002 ruling in Ashcroft v. Free Speech Coalition, which struck down parts of a federal child pornography law, holding that graphic manipulations of images, even those implying child exploitation, may be protected speech.
As Maine moves forward with legislative efforts to regulate AI-generated explicit images of children, the implications of such actions will be closely monitored, both at the state and national levels.