NY Enacts RAISE Act Amid Federal AI Security Push
On February 3, the Information Technology Industry Council (ITI) convened industry leaders, senior White House officials, and congressional lawmakers in Washington, DC, for its technology and policy summit. During the event, White House National Cyber Director Sean Cairncross previewed an upcoming AI security policy framework aimed at embedding cybersecurity protections into US-led AI technology stacks.
Director Cairncross indicated that the AI security framework is being developed in coordination with the Office of Science and Technology Policy, though no timeline for release was announced. His remarks emphasized that the framework aims to ensure security is not treated as a friction point for innovation but is instead built into the system, reiterating the administration's pro-industry stance on AI.
New York’s RAISE Act
On December 19, New York became the second state to enact a targeted regulatory framework for large frontier AI developers, following California’s Transparency in Frontier Artificial Intelligence Act (SB 53). Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law, imposing new transparency, safety, and incident-reporting obligations on developers of frontier AI models, along with civil penalties enforceable by the New York attorney general.
The signing of the RAISE Act aligns with the December 11 White House Executive Order that supports a “minimally burdensome” federal AI regulatory regime to preempt a patchwork of state AI laws. This Executive Order called on the Department of Justice (DOJ) to challenge state laws that conflict with federal policy.
Key Elements of the RAISE Act
The RAISE Act mandates safety protocols and incident reporting for frontier models, with fines of up to $30 million enforceable by the attorney general. Developers must create, maintain, and publicly disclose safety and security protocols addressing catastrophic risks, defined as incidents involving at least 100 deaths or serious injuries or $1 billion in damages.
Additionally, New York's incident reporting regime is notably stringent, requiring developers to report qualifying safety incidents within 72 hours of discovery. By comparison, California's SB 53 allows a 15-day window for reporting.
Federal Preemption and State Consensus
Governor Hochul’s signing of the RAISE Act comes amid ongoing discussions around federal preemption of state AI laws. The Executive Order highlights Colorado’s Artificial Intelligence Act as an example of legislation that departs from the administration’s preferred deregulatory approach to AI oversight. Unlike New York’s RAISE Act and California’s SB 53, Colorado’s law is more narrowly focused on consumer protection.
Despite the establishment of the DOJ AI Litigation Task Force, there has not yet been any public indication of enforcement actions against specific state AI statutes. The RAISE Act is set to go into effect on January 1, 2027.
Conclusion
As states like California and New York align around a transparency-driven framework for frontier AI regulation, there are signs of an emerging consensus rather than a chaotic patchwork of regulations. With similar proposals surfacing in other states, the landscape of AI regulation in the US is evolving rapidly, reflecting both the pace of innovation and the demand for robust safety measures.