New York’s Bold Move to Regulate AI Safety

New York State’s AI Safety Bill: The RAISE Act

On June 17, 2025, New York state lawmakers passed the RAISE Act, a significant piece of legislation aimed at preventing disasters caused by frontier AI models developed by companies such as OpenAI, Google, and Anthropic. The bill targets risks of catastrophic outcomes, defined as incidents causing the deaths of more than 100 people or damages exceeding $1 billion.

Context and Motivations

The passage of the RAISE Act marks a notable victory for the AI safety movement, which has faced challenges in recent years as the tech industry has prioritized rapid innovation over regulatory frameworks. Advocates for AI safety, including notable figures like Geoffrey Hinton and Yoshua Bengio, have championed this legislative effort. If enacted, the RAISE Act would establish the first legally mandated transparency standards for frontier AI laboratories in the United States.

Key Provisions of the RAISE Act

While the RAISE Act shares some similarities with California’s controversial AI safety legislation, SB 1047, it was designed to address criticisms leveled at its predecessor. According to State Senator Andrew Gounardes, the bill’s co-sponsor, the RAISE Act aims to avoid stifling innovation among startups and academic researchers, a concern often raised about SB 1047.

Key provisions of the RAISE Act include:

  • The requirement for the world’s largest AI laboratories to publish detailed safety and security reports on their frontier AI models.
  • A mandate for AI labs to report safety incidents, which may include concerning behaviors of AI models or any breaches involving theft of AI technology.
  • Empowering New York’s attorney general to impose civil penalties of up to $30 million for failures to comply with these standards.

Impact on AI Companies

The RAISE Act is aimed squarely at the largest AI companies, whether based in the U.S. or abroad. It applies to companies that have spent more than $100 million in computing resources to train their AI models, a threshold that focuses the compliance requirements on the most powerful players in the AI landscape.

The implications of the bill are significant. It is designed to make safety and accountability priorities as AI technology presents increasingly complex challenges. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” remarked Senator Gounardes, emphasizing the urgency of proactive measures.

Industry Reactions

Despite its safety-focused intent, the RAISE Act has faced pushback from the tech industry. Critics argue that such regulations could hinder innovation and competitiveness in a field where rapid advancement is crucial. Anjney Midha, a general partner at Andreessen Horowitz, called the RAISE Act harmful, arguing that it could set back the U.S. in the global AI race.

Notably, the legislation was deliberately designed to avoid imposing overly burdensome requirements on smaller companies. Nonetheless, concerns remain that the law may deter AI developers from releasing their most advanced models in New York, a scenario that has already played out in Europe under stringent regulations.

Future Prospects

The RAISE Act is currently awaiting the decision of New York Governor Kathy Hochul, who has the option to sign it into law, propose amendments, or veto it entirely. If enacted, it could set a precedent for AI regulation in the U.S., potentially influencing future legislation at both state and federal levels.

In conclusion, the RAISE Act represents a pivotal moment in the ongoing dialogue surrounding AI safety and regulation. As technology continues to advance, the balance between fostering innovation and ensuring public safety remains a critical challenge for lawmakers and industry leaders alike.
