AI Safety or Innovation Freeze? – Analyzing New York’s RAISE Act
The Responsible AI Safety and Education Act (RAISE Act), recently passed by the New York State legislature, aims to address the most severe risks posed by frontier AI models. However, the act imposes obligations solely on the developers of such models, an approach that may chill innovation without meaningfully enhancing safety.
Overview of the RAISE Act
On June 12, 2025, the New York State Legislature passed the RAISE Act, which now awaits the signature of Governor Kathy Hochul. The legislation follows debate over the One Big Beautiful Bill Act, which initially contemplated a ten-year moratorium on state-level AI regulation but ultimately did not adopt such a freeze. The RAISE Act represents a significant step in regulating frontier AI, applying both to frontier models developed in New York and to those made accessible to its residents.
Inspired by the EU’s AI Act and California’s SB 1047 (which was vetoed over concerns that it would stifle innovation), the RAISE Act aims to ensure the safety of “frontier models,” defined as AI models that cost more than $100 million to train and exceed a specified computational threshold. The act is designed to prevent misuse that could result in “critical harm,” such as significant loss of life or severe economic damage.
Obligations Imposed on Developers
Under the RAISE Act, developers must:
- Implement safety and security protocols, including addressing risks from third-party uses beyond the developer’s control.
- Publish these protocols conspicuously and transmit them to the Division of Homeland Security and Emergency Services.
- Establish safeguards to prevent unreasonable risks of critical harm, although the act provides no clear criteria for what constitutes an appropriate safeguard.
If unreasonable risks cannot be mitigated, the act prohibits releasing the model. Enforcement lies with the Attorney General, who may seek civil penalties of up to $10 million for a first violation and up to $30 million for subsequent violations.
Critique of the RAISE Act
The RAISE Act’s most significant flaw is its one-sided allocation of responsibility: all obligations fall on frontier developers, while no duties extend to other actors in the AI value chain. This contrasts with the EU’s AI Act, which distributes responsibilities among multiple categories of actors. By focusing solely on model developers and holding them liable for third-party misuse, the act imposes a standard that developers cannot realistically meet.
Moreover, the concept of “critical harm” is itself problematic. Frontier models are inherently general-purpose, and the risks they pose depend on downstream applications that vary across sectors. Yet the act requires developers to assess and mitigate these risks at the time of release, when they are still largely speculative.
Potential for Unintended Consequences
Some lawmakers in New York are advocating for expanding the act to include third-party audits. However, a perpetual independent audit requirement may impose rigid compliance costs without demonstrable safety gains, as auditors face much the same difficulty as developers in identifying context-dependent risks.
Recommendations for Balanced Regulation
While there is a clear need for regulation of frontier AI models, the RAISE Act’s current framework is flawed. A more balanced approach should include:
- Imposing baseline obligations such as red teaming and prompt moderation to prevent unlawful activities.
- Requiring developers to share information with downstream actors, enabling them to implement their own safeguards.
- Extending obligations to downstream providers, including those who fine-tune, deploy, or use the models, as these actors can better assess context-specific risks.
Conclusion
The RAISE Act fails to establish a comprehensive and balanced framework for the governance of AI. By focusing solely on frontier developers and relying on vague concepts of “critical harm,” the act risks becoming a cautionary tale of how well-intentioned regulations can hinder innovation. A more effective path would involve regulating the entire AI value chain or delaying prescriptive rules until the risks associated with frontier AI models can be clearly identified and legally addressed.