New York’s AI Regulation: Key Developments and Impacts

New York Laws “RAISE” the Bar in Addressing AI Safety: The RAISE Act and AI Companion Models

The state of New York was a leader in artificial intelligence (AI) regulation in 2025. Among its most significant actions, the state legislature enacted an omnibus budget law implementing safeguards for AI companions, effective November 5, 2025, and the Governor signed an amended version of the Responsible Artificial Intelligence Safety and Education (RAISE) Act on December 19, 2025. New York now joins states such as Colorado, California, and Utah in adopting both comprehensive and targeted AI legislation.

As states push forward with their regulatory efforts, federal activity has raised questions about preemption and the future of state-level AI legislation. For instance, an executive order (EO) issued in December directed the creation of an AI Litigation Task Force within the U.S. Department of Justice to challenge state AI laws. The EO has sparked debate over the scope of federal authority relative to state regulation and introduced a period of legal uncertainty that could influence states’ regulatory timelines.

Artificial Intelligence Companion Models

The AI Companion Model law requires operators and providers of AI companions to implement safety measures addressing users’ expressions of suicidal ideation or self-harm. It also requires regular notifications reminding users that they are not interacting with a human being.

Scope

This law applies to all operators of AI companions with users located in New York. AI companions are defined as systems that use AI, generative AI, and/or emotional recognition algorithms and are designed to foster human-like relationships with users. Key features include retaining the history of user interactions, asking unprompted emotion-based questions, and maintaining an ongoing dialogue about personal matters.

Key Requirements

Operators must clearly notify users, either verbally or in writing, that they are not interacting with a human being. This notification must be provided at least once every three hours during ongoing interactions. Operators are also required to implement protocols to identify and address expressions of suicidal ideation or self-harm and to refer affected users to appropriate crisis services. Violations of these provisions may result in civil penalties of up to $15,000 per day, enforced by the state attorney general.

Next Steps

Operators covered by this law will need to determine how to notify users effectively and how to satisfy the clarity and frequency requirements for those notifications. They will also need to develop methods for detecting and responding to users’ expressions of suicidal ideation or self-harm, as sketched below.
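
For teams scoping this work, the following Python sketch illustrates, purely hypothetically, one way an operator might wire a recurring non-human disclosure and a crisis-referral check into a chat loop. The three-hour constant, keyword screen, class and variable names, and 988 referral text are assumptions for illustration only and are not drawn from the statute; any real implementation would require clinically reviewed detection and escalation protocols.

```python
import time

# Hypothetical compliance sketch only. The constant names, three-hour timer,
# keyword list, and referral text are illustrative assumptions, not language
# drawn from the New York statute.
DISCLOSURE_INTERVAL_SECONDS = 3 * 60 * 60  # "at least every three hours"
AI_DISCLOSURE = "Reminder: you are chatting with an AI companion, not a human being."
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, help is available: "
    "call or text 988 to reach the Suicide & Crisis Lifeline."
)

# Deliberately simplistic keyword screen. A production system would rely on a
# vetted classifier and clinically reviewed escalation protocols.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "end my life")


class CompanionSession:
    """Tracks one user's chat session and when the last disclosure was shown."""

    def __init__(self) -> None:
        self.last_disclosure: float = float("-inf")

    def respond(self, user_message: str) -> list[str]:
        outputs: list[str] = []
        now = time.time()

        # Re-issue the non-human disclosure whenever the interval has elapsed
        # (which also covers the very first message of the session).
        if now - self.last_disclosure >= DISCLOSURE_INTERVAL_SECONDS:
            outputs.append(AI_DISCLOSURE)
            self.last_disclosure = now

        # Screen the message for expressions of suicidal ideation or self-harm
        # and attach a crisis referral when a term matches.
        lowered = user_message.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            outputs.append(CRISIS_REFERRAL)

        outputs.append("[model-generated companion reply would go here]")
        return outputs


if __name__ == "__main__":
    session = CompanionSession()
    for line in session.respond("Lately I feel like I want to end my life."):
        print(line)
```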

New York is not alone in regulating AI companions: California has enacted a similar law, effective January 1, 2026, that shares many of New York’s requirements, particularly regarding user notifications and protections for minors.

Responsible Artificial Intelligence Safety and Education (RAISE) Act

Alongside the AI Companion Model law, New York addressed broader AI safety concerns through the RAISE Act, which takes effect January 1, 2027. The act imposes transparency and disclosure requirements on developers of frontier models, including an obligation to make their safety protocols available to relevant authorities.

Key Requirements

The RAISE Act applies to frontier models developed, deployed, or operated in New York. These models are defined, in broad terms, as large-scale AI models whose training requires extensive computational resources and significant financial investment. Key requirements for developers include:

  • Conducting annual safety reviews and independent third-party audits.
  • Publishing safety protocol information while allowing some redactions.
  • Reporting safety incidents within 72 hours.
  • Determining whether models could cause “critical harm.”
  • Creating detailed safety and security protocols to prevent critical harms.

The current version of the RAISE Act prohibits deploying models that pose an unreasonable risk of critical harm. Violations may result in civil penalties of up to $1,000,000 for a first violation and up to $3,000,000 for each subsequent violation.

Conclusion

New York enacted several other AI-related bills in 2025, addressing topics such as personalized algorithmic pricing and digital replicas. With more than a dozen states introducing AI-specific legislation, the landscape is evolving rapidly. Despite the uncertainty created by recent federal actions, further state-level proposals are expected in 2026.

Stay tuned for forthcoming recommendations for compliance with new AI laws set to take effect in 2026.
