New York’s RAISE Act: A Blueprint for AI Regulation
New York has emerged as a significant player in artificial intelligence regulation with Governor Kathy Hochul's signing of the Responsible AI Safety and Education (RAISE) Act in December 2025. The legislation establishes a disclosure-driven framework for governing the most powerful AI models on the market.
The RAISE Act parallels California’s SB-53, establishing a framework for state-level AI governance that may become the standard across the nation. Both laws reflect a growing trend toward targeted state-level regulation, even as AI policy remains fragmented across states and contested at the federal level.
Political Context and Legal Landscape
While state AI policy remains fragmented, federal intervention has been marked by political contention. An executive order from President Donald Trump instructs the US Department of Justice to scrutinize state AI laws that may infringe on interstate commerce or First Amendment rights. Disclosure requirements imposed on AI companies are explicitly flagged as candidates for legal challenge.
Despite this, New York’s progress indicates a belief that disclosure-based regulation can withstand legal scrutiny. The passage of the RAISE Act has inspired other states, like Utah, to consider similar legislation.
Comparative Analysis: RAISE Act vs. California’s SB-53
The differences between New York’s RAISE Act and California’s SB-53 are modest but notable. For example, California provides a 15-day reporting window for critical safety incidents, while New York mandates a 72-hour notification. Additionally, California caps civil penalties at $1 million, whereas New York allows for penalties of up to $1 million for a first violation and $3 million for subsequent offenses. California’s legislation also includes explicit whistleblower protections, absent in New York’s law.
As more states seek to regulate AI, it is expected that many will adopt these frameworks rather than create entirely new regulations, leading to a standardized approach shaped by early movers like New York and California.
Narrow Focus with Significant Implications
The RAISE Act is focused on “large frontier developers,” defined as companies with more than $500 million in annual gross revenue that train frontier models using more than 10^26 computational operations. This narrow scope ensures that the law targets only a handful of major players, such as OpenAI and Meta Platforms Inc.
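The two-pronged definition above can be sketched as a simple conjunctive test. This is an illustrative sketch only: the function name, parameters, and structure are hypothetical, and the statute's actual definitions control.

```python
# Thresholds as described in the RAISE Act's "large frontier developer"
# definition: both prongs must be met for the law to apply.
REVENUE_THRESHOLD_USD = 500_000_000   # more than $500M annual gross revenue
COMPUTE_THRESHOLD_OPS = 10**26        # training compute exceeding 10^26 operations


def is_large_frontier_developer(annual_gross_revenue_usd: float,
                                max_training_compute_ops: float) -> bool:
    """Return True only if a company clears BOTH the revenue and compute prongs."""
    return (annual_gross_revenue_usd > REVENUE_THRESHOLD_USD
            and max_training_compute_ops > COMPUTE_THRESHOLD_OPS)


# A large lab clearing both thresholds is covered:
print(is_large_frontier_developer(2_000_000_000, 3e26))   # True
# A startup below the revenue threshold is not, regardless of compute:
print(is_large_frontier_developer(50_000_000, 3e26))      # False
```

The conjunctive structure is what keeps the law's scope narrow: a well-funded company training small models, or a small lab training a frontier-scale model, falls outside the definition.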
However, the law takes a strict approach to catastrophic risks, defined as foreseeable risks of severe harm or damage, such as the potential creation of hazardous weaponry. It is aimed not at policing minor misuse but at preventing the most severe outcomes.
Transparency as a Regulatory Mechanism
Unlike traditional regulations that might prescribe specific technical safeguards, the RAISE Act emphasizes transparency. Developers are required to create and publicly disclose a “frontier AI framework” that outlines how they assess and mitigate catastrophic risks. This framework must be updated annually and revised whenever a model is materially modified.
When changes occur, developers must publish the revised framework and justify modifications within 30 days. Additionally, before deploying new or significantly modified models, developers must release a transparency report detailing the model’s intended uses and restrictions.
Incident Reporting Requirements
The RAISE Act mandates stringent incident reporting for frontier model developers. They must notify state regulators of critical safety incidents within 72 hours, or 24 hours if the incident poses an imminent risk of serious injury or death. This places a high operational burden on companies to enhance their internal detection and escalation processes.
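The tiered reporting windows above can be expressed as a small deadline calculation. This is a hypothetical helper for illustration, not legal guidance; the function name and interface are assumptions.

```python
from datetime import datetime, timedelta, timezone


def notification_deadline(discovered_at: datetime,
                          imminent_risk: bool) -> datetime:
    """Deadline for notifying state regulators of a critical safety incident.

    The standard window is 72 hours, tightened to 24 hours when the
    incident poses an imminent risk of serious injury or death.
    """
    window = timedelta(hours=24 if imminent_risk else 72)
    return discovered_at + window


discovered = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(discovered, imminent_risk=False))  # 2026-03-04 09:00:00+00:00
print(notification_deadline(discovered, imminent_risk=True))   # 2026-03-02 09:00:00+00:00
```

In practice, the hard part is not the arithmetic but starting the clock: the short windows reward companies whose internal monitoring detects and escalates incidents quickly.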
Implications for AI Governance Teams
As developers implement their frontier AI frameworks, contractual agreements are likely to evolve, particularly regarding usage restrictions and incident notification requirements. Organizations that deploy frontier models will increasingly be expected to identify and escalate risk signals promptly.
The broader lesson of the RAISE Act extends beyond New York: the law signals where AI regulation is headed. Companies that invest in robust governance processes now will be better positioned as more states adopt similar regulatory frameworks.