New York’s RAISE Act: A Misguided Approach to AI Regulation

The Responsible AI Safety and Education Act (RAISE Act), recently passed by the New York State legislature, aims to address the catastrophic risks posed by frontier AI models. However, the act imposes its obligations solely on the developers of those models, an approach that may chill innovation without meaningfully enhancing safety.

Overview of the RAISE Act

On June 12, 2025, the New York State Legislature passed the RAISE Act, which now awaits the signature of Governor Kathy Hochul. The bill arrives against the backdrop of the federal One Big Beautiful Bill Act, an early draft of which contemplated a ten-year moratorium on state-level AI regulation; that freeze was ultimately dropped, leaving states free to legislate. The RAISE Act is a significant step in regulating frontier AI, applying to frontier models developed in New York as well as to models made available to its residents.

Inspired by the EU’s AI Act and by California’s SB 1047 (which was vetoed over concerns that it would stifle innovation), the RAISE Act aims to ensure the safety of “frontier models,” defined as AI models trained using more than 10^26 computational operations at a training cost exceeding $100 million. The act is designed to prevent misuse that could result in “critical harm,” such as the death or serious injury of one hundred or more people or at least $1 billion in damage.

Obligations Imposed on Developers

Under the RAISE Act, developers must:

  1. Implement safety and security protocols that address, among other things, risks arising from third-party uses beyond the developer’s control.
  2. Publish those protocols conspicuously and transmit them to the Division of Homeland Security and Emergency Services.
  3. Establish safeguards to prevent unreasonable risks of critical harm, even though the act offers no clear criteria for what counts as an adequate safeguard.

If an unreasonable risk of critical harm cannot be mitigated, the act prohibits releasing the model at all. Enforcement lies with the New York Attorney General, who may seek civil penalties of up to $10 million for a first violation and up to $30 million for subsequent violations.

Critique of the RAISE Act

The RAISE Act’s central flaw is its one-sided allocation of responsibility: it places every obligation on frontier developers and extends no duties to other actors in the AI value chain. This contrasts with the EU’s AI Act, which distributes responsibilities among providers, deployers, importers, and distributors. By focusing solely on model developers and holding them liable for third-party misuse, the act imposes a standard that developers cannot realistically meet.

Moreover, the concept of “critical harm” is itself problematic. Frontier models are inherently general-purpose: the risks they pose depend on downstream applications that vary widely across sectors. Yet the act requires developers to assess and mitigate those risks at the time of release, when they are still largely speculative.

Potential for Unintended Consequences

Some New York lawmakers are now advocating expanding the act to require third-party audits. A perpetual independent-audit requirement, however, would likely impose rigid compliance costs without demonstrable safety gains, since auditors face the same difficulty developers do in identifying speculative, context-dependent risks.

Recommendations for Balanced Regulation

While there is a clear need for regulation of frontier AI models, the RAISE Act’s current framework is flawed. A more balanced approach should include:

  • Imposing baseline obligations such as red teaming and prompt moderation to prevent unlawful activities.
  • Requiring developers to share information with downstream actors, enabling them to implement their own safeguards.
  • Extending obligations to downstream providers, including those who fine-tune, deploy, or use the models, as these actors can better assess context-specific risks.

Conclusion

The RAISE Act fails to establish a comprehensive and balanced framework for AI governance. By concentrating solely on frontier developers and relying on the vague concept of “critical harm,” it risks becoming a cautionary tale of how well-intentioned regulation can hinder innovation. A more effective path would regulate the entire AI value chain, or would defer prescriptive rules until the risks of frontier AI models can be clearly identified and meaningfully addressed in law.
