From Transparency to Oversight: New York’s RAISE Act Raises the Bar for Frontier AI Developers
On December 19, 2025, New York Governor Kathy Hochul signed Assembly Bill A6453-A, known as the Responsible AI Safety and Education Act (RAISE Act), into law. This statute establishes a targeted framework governing the development and deployment of advanced “frontier” artificial intelligence models, focusing on AI safety, transparency, and the prevention and reporting of incidents involving catastrophic harm.
The RAISE Act does not phase in its obligations in stages. Instead, it takes full effect on July 1, 2027, with certain requirements then operating on an ongoing or annual basis.
Key Obligations Imposed by the RAISE Act
The RAISE Act imposes affirmative safety, documentation, audit, and incident-reporting obligations on large developers of frontier AI models, enforced by the New York Attorney General. Notably, it does not regulate AI deployers or ordinary users as a separate class.
Governor Hochul indicated that the legislation aligns with California’s SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). The RAISE Act narrows its focus to the most capable frontier models and concentrates compliance obligations on large developers.
Impact on Developers
The RAISE Act is significant for three primary reasons:
- It requires large frontier developers to implement, document, and audit safety and security protocols as a condition of deployment.
- It establishes a short, mandatory safety-incident reporting clock of 72 hours, significantly stricter than California’s default 15-day reporting period.
- It signals a convergence among large states regarding frontier model governance, despite differing compliance mechanics.
Organizations preparing for California’s SB 53 should note that the RAISE Act demands greater documentation discipline, stronger audit readiness, and faster incident-response escalation, even though it applies to a narrower set of developers.
Applicability of the RAISE Act
The RAISE Act applies to large developers of frontier AI models that are developed, deployed, or operating in part in New York State. A “frontier model” is defined by reference to extremely large-scale training compute or to knowledge distillation from such a model, and obligations attach only to developers that meet specified compute-cost thresholds.
Unlike California’s SB 53, which is triggered primarily by a high technical capability threshold, the RAISE Act combines high technical benchmarks with dollar-denominated training spend thresholds, limiting its reach to the largest frontier AI developers.
Mandatory Compliance Measures
Covered developers must:
- Adopt, implement, and maintain a written AI safety and security protocol addressing risks associated with frontier AI models.
- Publish a redacted version of the protocol while keeping an unredacted version available for state review.
- Document testing procedures, results, and safeguards used to mitigate critical harm risks.
- Undergo an annual independent third-party audit of compliance with safety and security requirements.
Developers are also required to report qualifying safety incidents to the State within 72 hours of learning of such incidents.
Enforcement and Penalties
The RAISE Act prohibits materially false or misleading statements in required disclosures. Enforcement rests with the New York Attorney General, who may seek civil penalties up to $1 million for first violations and up to $3 million for subsequent violations. Whistleblowers can seek judicial relief for retaliation under the statute’s employment-protection provisions.
Comparison with California’s SB 53
Both statutes focus on preventing catastrophic harm but differ in the prescriptiveness of their obligations. The RAISE Act imposes mandatory safety protocols, annual audits, and a compressed safety-incident reporting timeline, while California’s SB 53 emphasizes standardized transparency reports and longer reporting periods.
Furthermore, the RAISE Act will establish a dedicated oversight function within the New York State Department of Financial Services, creating additional compliance infrastructure for developers.
Federal Context and Implications
The RAISE Act operates against the backdrop of a December 11, 2025, White House Executive Order directing federal agencies to evaluate state AI laws and their potential conflicts with federal objectives. This heightens the possibility of federal review or litigation regarding state regulations like the RAISE Act.
Conclusion
For organizations developing or modifying highly capable AI models, the RAISE Act represents a significant escalation beyond California’s SB 53, imposing additional obligations, particularly around audit readiness and incident reporting. Companies should harmonize internal governance to accommodate both state laws while monitoring ongoing federal developments.
While the RAISE Act does not directly regulate AI deployers, deployers should prepare for indirect compliance impacts and ensure that their internal processes align with the heightened obligations imposed on their model developers.