New York’s RAISE Act: Advancing AI Regulation and Safety

NY Enacts RAISE Act Amid Federal AI Security Push

On February 3, the Information Technology Industry Council (ITI) convened industry leaders, senior White House officials, and congressional lawmakers in Washington, DC for its technology and policy summit. During the event, White House National Cyber Director Sean Cairncross previewed an upcoming AI security policy framework aimed at embedding cybersecurity protections into US-led AI technology stacks.

Director Cairncross indicated that the AI security framework is being developed in coordination with the Office of Science and Technology Policy, though no timeline for release was announced. He emphasized that the framework aims to build security into AI systems from the outset rather than treat it as a friction point for innovation, reiterating the administration’s pro-industry stance on AI.

New York’s RAISE Act

On December 19, New York became the second state to enact a targeted regulatory framework for large frontier AI developers, following California’s Transparency in Frontier Artificial Intelligence Act (SB 53). Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law, imposing new transparency, safety, and incident-reporting obligations on developers of frontier AI models, along with civil penalties enforceable by the New York attorney general.

The signing of the RAISE Act came just over a week after the December 11 White House Executive Order, which endorses a “minimally burdensome” federal AI regulatory regime intended to preempt a patchwork of state AI laws. That Executive Order called on the Department of Justice (DOJ) to challenge state laws that conflict with federal policy.

Key Elements of the RAISE Act

The RAISE Act mandates safety protocols and incident reporting for frontier models, with civil fines of up to $30 million enforceable by the attorney general. Developers must write, maintain, and publicly disclose safety and security protocols addressing catastrophic risks, defined as incidents involving at least 100 deaths or serious injuries or at least $1 billion in damages.
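
For illustration only, that two-part threshold can be reduced to a simple predicate. The sketch below is a toy under stated assumptions, not a reading of the statute: the parameter names are hypothetical, and the Act’s actual definition of catastrophic harm carries qualifications this check ignores.

```python
# Illustrative sketch only (not legal advice): the RAISE Act's headline
# catastrophic-risk figures, reduced to a single predicate. The statutory
# definition carries qualifications this toy check does not model.

def meets_catastrophic_threshold(deaths_or_serious_injuries: int,
                                 damages_usd: float) -> bool:
    """True if an incident meets either headline threshold: at least
    100 deaths or serious injuries, or at least $1 billion in damages."""
    return deaths_or_serious_injuries >= 100 or damages_usd >= 1_000_000_000
```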

Additionally, New York’s incident reporting regime is notably stringent, requiring developers to report qualifying safety incidents within 72 hours of discovery. In comparison, California allows a 15-day window for reporting.
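
As a rough way to see how the two windows compare, the sketch below computes naive reporting deadlines from a shared discovery timestamp. The function name and the simple calendar arithmetic are assumptions for illustration; neither statute’s notice mechanics or tolling rules are modeled.

```python
# Illustrative sketch only: New York's 72-hour window next to
# California's 15-day window, measured from the same discovery time.
# Real deadline rules (notice mechanics, tolling) are not modeled.
from datetime import datetime, timedelta

def reporting_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """Naive deadline for each regime, counted from discovery."""
    return {
        "NY (RAISE Act, 72 hours)": discovered_at + timedelta(hours=72),
        "CA (SB 53, 15 days)": discovered_at + timedelta(days=15),
    }

for regime, due in reporting_deadlines(datetime(2027, 1, 15, 9, 0)).items():
    print(f"{regime}: report due by {due:%Y-%m-%d %H:%M}")
```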

Federal Preemption and State Consensus

Governor Hochul’s signing of the RAISE Act comes amid ongoing discussions around federal preemption of state AI laws. The Executive Order highlights Colorado’s Artificial Intelligence Act as an example of legislation that departs from the administration’s preferred deregulatory approach to AI oversight. Unlike New York’s RAISE Act and California’s SB 53, which target frontier-model developers, Colorado’s law centers on consumer protection.

Despite the establishment of the DOJ AI Litigation Task Force, there has been no public indication of legal challenges against specific state AI statutes. The RAISE Act is set to take effect on January 1, 2027.

Conclusion

As states like California and New York align around a transparency-driven framework for frontier AI regulation, there are signs of an emerging consensus rather than a chaotic patchwork of rules. With similar proposals surfacing in other states, the US AI regulatory landscape is evolving rapidly, balancing support for innovation against the need for robust safety measures.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...