New Mexico’s Landmark Artificial Intelligence Accountability Act to Combat Deepfakes and Synthetic Media

Alamogordo, NM — New Mexico Attorney General Raúl Torrez and State Representative Linda Serrato (D-Santa Fe) have unveiled a groundbreaking legislative proposal aimed at addressing the growing threats posed by generative artificial intelligence (GenAI) and synthetic media, including deepfakes.

The Artificial Intelligence Accountability Act (AI2A)

The AI2A seeks to establish the state’s first comprehensive framework for overseeing GenAI and synthetic content. The bill would mandate digital markers in AI-generated images, audio, and video to ensure transparency, require free tools for verifying content authenticity, and empower the Attorney General to enforce penalties for violations.

“This legislation addresses a rapidly evolving technology that holds immense promise but also poses real dangers when misused to deceive or harm individuals,” Torrez stated. “New Mexicans deserve protections that promote accountability without stifling innovation or free speech.”

Key Provisions of the AI2A

  • Mandatory Latent Digital Markers — Providers of GenAI services, large online platforms, and device manufacturers must embed imperceptible markers in synthetic content to enable tracking.
  • Provenance Detection Tools — Companies must offer free tools to detect AI-generated content and trace its origin.
  • Civil Enforcement — The Attorney General can investigate violations and impose fines up to $15,000 per violation.
  • Enhanced Criminal Penalties — Using GenAI to commit a felony would carry an additional year of imprisonment.
  • Civil Liability — Individuals who knowingly disseminate malicious synthetic content could face lawsuits.

The proposal has drawn support from groups like Mothers Against Media Addiction (MAMA), which highlighted the risks to children and families. Executive Director Julie Scelfo called it “a crucial step to protect young people from the dangers of AI.”

Ties to Similar Legislation Across the U.S.

New Mexico’s AI2A aligns with a nationwide push to regulate deepfakes and synthetic media, as federal action has lagged. Several states have already enacted laws targeting similar issues:

  • California has passed multiple bills, including those requiring disclosures for AI-generated political content, banning sexually explicit deepfakes without consent, and mandating watermarking for synthetic content.
  • Tennessee’s ELVIS Act protects against unauthorized AI-generated replicas of voices and likenesses.
  • Michigan prohibits undisclosed deepfakes in political ads near elections.
  • New York and others have focused on posthumous protections for likeness rights and mandatory disclosures in advertising.

Trackers from organizations like the National Conference of State Legislatures (NCSL) show that nearly all states introduced AI-related bills in recent sessions, with dozens enacted specifically addressing deepfakes. New Mexico’s approach, emphasizing watermarking and enforcement, mirrors emerging trends in states like California and Illinois.

Local Relevance for Alamogordo and Otero County

In southern New Mexico, communities like Alamogordo have voiced concerns about deepfakes, particularly as accessible AI tools enable widespread misuse. 2nd Life Media was a victim of such abuse last year, resulting in an investigation. Local discussions have highlighted risks to elections, personal privacy, and individuals and businesses alike.

Otero County legislators have previously opposed related measures, whether from misunderstanding the risks or because they have benefited from AI-generated content in their own political campaigns, which underscores the need for legislation. The AI2A could give local law enforcement and residents tools to combat deceptive content, especially in a region with limited federal oversight.

As the New Mexico Legislature convenes, this bill represents a proactive step to safeguard residents while fostering responsible AI development.

For more details, see the official press release from the New Mexico Department of Justice.
