With the RAISE Act, New York Aligns With California on Frontier AI Laws

Amid a national debate over AI laws, New York has become the second state to impose new requirements on advanced AI models and the companies that develop them. Some fear that a patchwork of state laws, on topics ranging from algorithmic discrimination to child safety, may emerge in the coming months. The Responsible AI Safety and Education (RAISE) Act takes a different path: rather than adding to that patchwork, it harmonizes directly with existing rules.

Overview of the RAISE Act

The RAISE Act, which originated with New York Assemblymember Alex Bores and State Senator Andrew Gounardes, adopts the “trust but verify” paradigm set out by SB-53, the California bill that became the first U.S. frontier AI law last September. Both bills establish a core set of transparency requirements, obliging developers to publish frontier AI risk frameworks and to report safety incidents to state officials.

Though the bills differ in minor but meaningful ways, their overwhelming convergence matters. The worst fears of those who argued that the federal government had to step in to preempt a coming flood of state AI laws have not materialized, at least when it comes to frontier AI rules. Even absent a federal framework, the harmonization between California and New York means that developers do not yet face the substantial additional compliance burden in frontier AI policy that preemption advocates anticipated.

How the Act Builds on SB-53

The RAISE Act, which will come into effect on January 1, 2027, draws heavily on the “trust but verify” framework of SB-53; in many cases, it directly copies the text of the California law. Most of the legislative findings—which frame the bill’s objectives and provide interpretive guidance about what the legislature believed and intended—are the same. RAISE borrows many of the key definitions from SB-53, including those for catastrophic risk, critical safety incident, and foundation model.

Both laws apply their strictest requirements to AI models trained using more than 10^26 total floating-point operations (FLOPs) and to companies whose annual gross revenue exceeded $500 million in the previous year. At the heart of both bills is an identical set of transparency requirements for frontier AI development. Like SB-53, RAISE requires companies to publish their approach to safety testing, risk mitigation, incident response, and cybersecurity controls. Companies can choose their own methods and standards but must adhere to whatever commitments they have made.
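
To make the shared applicability test concrete, here is a minimal sketch in Python of the two triggers described above. The function and field names are hypothetical illustrations; neither statute defines anything programmatic.

```python
# Illustrative sketch of the dual threshold shared by SB-53 and the RAISE Act.
# All names here are hypothetical; the laws define legal triggers, not code.

from dataclasses import dataclass

COMPUTE_THRESHOLD_FLOPS = 1e26        # total training compute trigger (FLOPs)
REVENUE_THRESHOLD_USD = 500_000_000   # prior-year annual gross revenue trigger

@dataclass
class FrontierModel:
    training_compute_flops: float  # total floating-point operations used in training

@dataclass
class Developer:
    prior_year_gross_revenue_usd: float

def strictest_requirements_apply(model: FrontierModel, dev: Developer) -> bool:
    """True only when both triggers are met: training compute above 10^26 FLOPs
    and developer revenue above $500 million in the previous year."""
    return (
        model.training_compute_flops > COMPUTE_THRESHOLD_FLOPS
        and dev.prior_year_gross_revenue_usd > REVENUE_THRESHOLD_USD
    )

# Example: a model trained on 3e26 FLOPs by a developer with $2B in revenue
print(strictest_requirements_apply(
    FrontierModel(training_compute_flops=3e26),
    Developer(prior_year_gross_revenue_usd=2_000_000_000),
))  # True
```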

They must also report to government officials severe harms caused by AI, primarily those involving death, bodily injury, or major economic damage, along with deceptive model behavior that materially increases catastrophic risk. Notably, the RAISE Act, like SB-53, requires that even models deployed only internally be covered by the frontier AI framework.
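
As a rough illustration of the reporting duty's scope, the sketch below models those harm categories as a simple data type. The schema and field names are hypothetical, since neither law prescribes a filing format.

```python
# Hypothetical shape of an incident report under the shared reporting duty.
# Categories and fields are illustrative; the statutes prescribe no schema.

from dataclasses import dataclass
from enum import Enum, auto

class HarmCategory(Enum):
    DEATH = auto()
    BODILY_INJURY = auto()
    MAJOR_ECONOMIC_DAMAGE = auto()
    DECEPTIVE_MODEL_BEHAVIOR = auto()  # reportable when it materially increases catastrophic risk

@dataclass
class IncidentReport:
    model_name: str
    category: HarmCategory
    description: str
    internal_deployment_only: bool  # internal-only models are still covered

report = IncidentReport(
    model_name="example-frontier-model",  # hypothetical name
    category=HarmCategory.DECEPTIVE_MODEL_BEHAVIOR,
    description="Model misrepresented the results of a safety evaluation.",
    internal_deployment_only=True,
)
print(report.category.name)  # DECEPTIVE_MODEL_BEHAVIOR
```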

Potential for Federal Preemption

The biggest question raised by the new law is whether it will be overtaken by a federal effort to supersede state regulation of AI, and, if so, whether any such federal rule would closely track the provisions of SB-53 and the RAISE Act or take a different, possibly more laissez-faire approach. The administration has announced renewed efforts to preempt state AI laws, either through federal regulation or congressional action.

Although two attempts to bar state AI regulation failed in Congress last year, some members of Congress have expressed interest in reviving the effort. The White House issued an executive order in December promising federal rules and draft legislation, and parts of SB-53 and RAISE have been incorporated into the president’s AI Action Plan.

Congressional action appears to be the most likely route by which some or all of the RAISE Act could be preempted. Without it, there is likely little the administration can do to block the law. Even without regulatory authority, federal agencies might issue nonbinding guidance on transparency and reporting; such guidance would not preempt state laws, but it could allow companies to satisfy state requirements by complying with federal standards.

The Future of State Frontier AI Laws

Even if the federal government’s pathway toward a unified framework remains unclear, states may be moving toward an initial consensus on frontier AI policy. Governors of both parties have signaled a desire to regulate AI, and transparency-focused bills have been introduced in Michigan and Utah. Many observers feared last year that an explosion of state AI legislation would create a thicket of regulation stifling American AI development, but that has not yet come to pass.

California and New York face significant questions over how the “trust but verify” framework will be implemented and whether they have the capacity to enforce these laws effectively. It remains unclear what governments are meant to do with company risk reports when they receive them. Neither SB-53 nor the RAISE Act offers a framework for analyzing critical safety incidents or internal deployment reports.

Both laws require state agencies to produce expert reports explaining whether the laws should be updated. Ultimately, whether and how these new laws are enforced will determine if “trust but verify” frameworks create genuine transparency and accountability or become merely symbolic efforts to guide the frontier of AI.
