New York’s RAISE Act: A Blueprint for AI Regulation

New York has emerged as a significant player in artificial intelligence regulation with Governor Kathy Hochul’s signing of the Responsible AI Safety and Education (RAISE) Act in December 2025. The legislation establishes a disclosure-driven framework for governing the most powerful AI models on the market.

The RAISE Act parallels California’s SB-53, establishing a framework for state-level AI governance that may become the national standard. Both laws reflect a growing trend toward targeted regulation at the state level, even as AI policy remains fragmented and federal regulation remains politically contentious.

Political Context and Legal Landscape

While state AI policy remains fragmented, federal intervention has been politically charged. An executive order from President Donald Trump directs the US Department of Justice to scrutinize state AI laws that may infringe on interstate commerce or First Amendment rights, explicitly flagging disclosure requirements for AI companies as targets for potential legal challenge.

Despite this, New York’s progress signals confidence that disclosure-based regulation can withstand legal scrutiny. The RAISE Act’s passage has inspired other states, such as Utah, to consider similar legislation.

Comparative Analysis: RAISE Act vs. California’s SB-53

The differences between New York’s RAISE Act and California’s SB-53 are modest but notable. For example, California provides a 15-day reporting window for critical safety incidents, while New York mandates a 72-hour notification. Additionally, California caps civil penalties at $1 million, whereas New York allows for penalties of up to $1 million for a first violation and $3 million for subsequent offenses. California’s legislation also includes explicit whistleblower protections, absent in New York’s law.
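
For quick reference, these headline differences can be laid out side by side. The following is a minimal Python sketch with figures drawn from the comparison above; the structure and labels are illustrative, not taken from either statute’s text.

```python
# Side-by-side parameters from the comparison above; structure and
# labels are illustrative, not drawn from either statute's text.
COMPARISON = {
    "incident reporting window": {"CA SB-53": "15 days", "NY RAISE": "72 hours"},
    "civil penalty cap": {"CA SB-53": "$1M", "NY RAISE": "$1M first / $3M repeat"},
    "whistleblower protections": {"CA SB-53": "explicit", "NY RAISE": "not included"},
}

for item, values in COMPARISON.items():
    print(f"{item}: CA = {values['CA SB-53']}; NY = {values['NY RAISE']}")
```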

As more states seek to regulate AI, it is expected that many will adopt these frameworks rather than create entirely new regulations, leading to a standardized approach shaped by early movers like New York and California.

Narrow Focus with Significant Implications

The RAISE Act focuses on “large frontier developers,” defined as companies with more than $500 million in annual gross revenue that train frontier models using more than 10^26 computational operations. This narrow scope ensures that the law targets only a handful of major players, such as OpenAI and Meta Platforms Inc.
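
To make the scope concrete, the two statutory thresholds can be expressed as a simple applicability test. This is a minimal sketch, not legal guidance: the dollar and compute figures come from the Act as described above, while the function and parameter names are illustrative.

```python
# Sketch of the "large frontier developer" test using the thresholds
# described above. Names are illustrative, not statutory.
REVENUE_THRESHOLD_USD = 500_000_000   # annual gross revenue floor
COMPUTE_THRESHOLD_OPS = 10**26        # training compute floor

def is_large_frontier_developer(annual_revenue_usd: float,
                                training_compute_ops: float) -> bool:
    """Return True if both statutory thresholds are exceeded."""
    return (annual_revenue_usd > REVENUE_THRESHOLD_USD
            and training_compute_ops > COMPUTE_THRESHOLD_OPS)

# Example: a developer with $750M revenue training a 3e26-op model is covered.
print(is_large_frontier_developer(750e6, 3e26))  # True
```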

Within that scope, however, the law takes a strict stance on catastrophic risks, defined as foreseeable risks that could result in significant harm or damage, such as aiding the creation of hazardous weaponry. The law is aimed not at policing minor misuse but at preventing severe consequences.

Transparency as a Regulatory Mechanism

Unlike traditional regulations that might prescribe specific technical safeguards, the RAISE Act emphasizes transparency. Developers are required to create and publicly disclose a “frontier AI framework” that outlines how they assess and mitigate catastrophic risks. This framework must be updated annually and revised whenever a model is materially modified.

When changes occur, developers must publish the revised framework and justify modifications within 30 days. Additionally, before deploying new or significantly modified models, developers must release a transparency report detailing the model’s intended uses and restrictions.
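
Governance teams tracking these obligations can reduce the cadence to a few key dates. A minimal sketch follows, assuming a material-modification date as input; the 30-day publication window and annual review cycle are those described above, and the function names are hypothetical.

```python
from datetime import date, timedelta

# Minimal deadline tracker for the disclosure cadence described above.
# The 30-day window and annual review come from the Act; function names
# and inputs are illustrative.
FRAMEWORK_PUBLICATION_WINDOW = timedelta(days=30)

def framework_publication_deadline(modification_date: date) -> date:
    """Latest date to publish a revised framework after a material change."""
    return modification_date + FRAMEWORK_PUBLICATION_WINDOW

def next_annual_review(last_review: date) -> date:
    """Frameworks must also be updated at least annually."""
    return last_review.replace(year=last_review.year + 1)  # naive; Feb 29 aside

print(framework_publication_deadline(date(2026, 3, 1)))  # 2026-03-31
print(next_annual_review(date(2026, 3, 1)))              # 2027-03-01
```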

Incident Reporting Requirements

The RAISE Act mandates stringent incident reporting for frontier model developers. They must notify state regulators of critical safety incidents within 72 hours, or 24 hours if the incident poses an imminent risk of serious injury or death. This places a high operational burden on companies to enhance their internal detection and escalation processes.
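
Operationally, the two reporting clocks can be encoded directly into an escalation workflow. A minimal sketch, assuming a detection timestamp and an imminent-risk flag as inputs; the 72-hour and 24-hour windows are those stated above.

```python
from datetime import datetime, timedelta

# Notification deadlines per the reporting windows described above:
# 72 hours for critical safety incidents, tightened to 24 hours when an
# incident poses an imminent risk of serious injury or death.
STANDARD_WINDOW = timedelta(hours=72)
IMMINENT_RISK_WINDOW = timedelta(hours=24)

def notification_deadline(detected_at: datetime, imminent_risk: bool) -> datetime:
    """Latest time by which state regulators must be notified."""
    window = IMMINENT_RISK_WINDOW if imminent_risk else STANDARD_WINDOW
    return detected_at + window

incident = datetime(2026, 1, 5, 9, 0)
print(notification_deadline(incident, imminent_risk=True))   # 2026-01-06 09:00:00
print(notification_deadline(incident, imminent_risk=False))  # 2026-01-08 09:00:00
```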

Implications for AI Governance Teams

As developers implement their frontier AI frameworks, contractual agreements are likely to evolve, particularly regarding usage restrictions and incident notification requirements. Organizations that deploy frontier models will increasingly be expected to identify and escalate risk signals promptly.

The broader lesson of the RAISE Act extends beyond New York: it signals the likely trajectory of AI regulation elsewhere. Companies that invest in robust governance processes now will be better positioned as more states adopt similar frameworks.
