New Mexico Proposes Groundbreaking Law to Combat Harmful AI Imagery

Leaders Propose Legislation to Regulate Harmful AI-Generated Images

The emerging threat posed by deepfakes and pornographic images created with new artificial intelligence tools has prompted two state leaders to push for a comprehensive law to protect New Mexico residents. New Mexico Attorney General Raúl Torrez and state Representative Linda Serrato, D-Santa Fe, outlined the proposal during a news conference in Albuquerque.

Overview of the Proposed Legislation

The proposed bill, titled the Artificial Intelligence Accountability Act, would establish the state’s first legal framework for regulating AI-generated images and strengthen both civil and criminal penalties for violators. Key elements of the legislation include:

  • Setting technical standards for AI developers.
  • Authorizing the New Mexico Department of Justice to investigate compliance by large tech companies.

Digital Markers and Civil Actions

One significant provision of the legislation would require AI companies to embed digital markers in the content their tools generate. These markers would help identify the creators of harmful deepfakes and give individuals the ability to take civil action against violators.
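The bill does not spell out what form these digital markers must take. As a purely illustrative sketch, the snippet below shows one common approach an AI image generator might use: embedding a machine-readable provenance record (generator name, timestamp, content hash) in a PNG’s metadata with the Pillow library. The field names, such as "ai-provenance", are assumptions made for illustration, not anything prescribed by the proposed law.

    # Illustrative only: the proposed law does not prescribe a marker format.
    # This sketch embeds a hypothetical provenance record in a PNG text chunk
    # using Pillow; the key name "ai-provenance" is an assumption.
    import hashlib
    import json
    from datetime import datetime, timezone

    from PIL import Image, PngImagePlugin


    def embed_provenance_marker(src_path: str, dst_path: str, generator_id: str) -> None:
        """Copy an image, attaching a provenance record as PNG metadata."""
        img = Image.open(src_path)

        # Hash the pixel data so the record is tied to this specific image.
        content_hash = hashlib.sha256(img.tobytes()).hexdigest()

        record = {
            "generator": generator_id,  # which AI system produced the image
            "created": datetime.now(timezone.utc).isoformat(),
            "content_sha256": content_hash,
        }

        info = PngImagePlugin.PngInfo()
        info.add_text("ai-provenance", json.dumps(record))  # hypothetical key name
        img.save(dst_path, pnginfo=info)


    def read_provenance_marker(path: str) -> dict | None:
        """Return the embedded provenance record, if one is present."""
        text_chunks = getattr(Image.open(path), "text", {})
        raw = text_chunks.get("ai-provenance")
        return json.loads(raw) if raw else None

Metadata of this kind can be stripped simply by re-saving the file, which is why provenance standards such as C2PA content credentials and invisible watermarking are generally considered more robust; the sketch is meant only to convey the basic idea of a traceable marker.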

Torrez emphasized the anonymity that this technology affords, enabling individuals to cause real harm without accountability. The bill would allow victims of malicious video or audio content to pursue civil litigation against the creators, establishing a private right of action for those harmed by unlawful production and dissemination of such materials. Victims could seek actual damages or a penalty of $1,000 per view of the harmful content.

Legislative Timeline and Context

The lawmakers plan to introduce the bill in the 2026 regular session of the Legislature, which begins Tuesday. They pointed to a recent case in which an Albuquerque man allegedly used AI tools to create thousands of images depicting the sexual exploitation of children from photos drawn from social media platforms, an incident that underscores growing concern about the misuse of AI technology.

Richard Gallagher, 68, was arraigned on 12 felony charges, including manufacturing and distributing a visual medium depicting the sexual exploitation of children. The case illustrates the pressing need for a legal framework to address the ethical implications of AI image generation.

Stricter Sentences for Offenders

The proposed legislation would also increase prison sentences by one year for those convicted of using AI to create harmful deepfake images, a provision intended to deter such activity.

Community Engagement and Future Directions

Serrato said she led an AI summit in December that drew 140 participants interested in using AI technology for legitimate purposes. She also acknowledged the urgent need to address bad actors who misuse the technology, stressing the importance of building a safe and ethical AI ecosystem in New Mexico.

The proposed legislation represents a significant step in addressing the challenges posed by harmful AI-generated content, aiming to protect individuals and establish clear boundaries for the ethical use of technology.
