New York’s Bold Move to Regulate AI Safety

New York State’s AI Safety Bill: The RAISE Act

On June 17, 2025, New York state lawmakers passed the RAISE Act (the Responsible AI Safety and Education Act), a significant piece of legislation aimed at preventing disasters stemming from frontier AI models developed by companies such as OpenAI, Google, and Anthropic. The bill targets risks that could produce catastrophic outcomes, defined as incidents causing the deaths of more than 100 people or more than $1 billion in damages.

Context and Motivations

The passage of the RAISE Act marks a notable victory for the AI safety movement, which has faced challenges in recent years as the tech industry has prioritized rapid innovation over regulatory frameworks. Advocates for AI safety, including notable figures like Geoffrey Hinton and Yoshua Bengio, have championed this legislative effort. If enacted, the RAISE Act would establish the first legally mandated transparency standards for frontier AI laboratories in the United States.

Key Provisions of the RAISE Act

While the RAISE Act shares some similarities with California's controversial AI safety bill, SB 1047, which was ultimately vetoed, it was designed to address the criticisms leveled at its predecessor. According to state Senator Andrew Gounardes, the bill's co-sponsor, the RAISE Act is written to avoid stifling innovation among startups and academic researchers, a concern frequently raised about SB 1047.

Key provisions of the RAISE Act include:

  • A requirement that the world’s largest AI laboratories publish detailed safety and security reports on their frontier AI models.
  • A mandate that AI labs report safety incidents, such as concerning model behavior or the theft of an AI model.
  • Authority for New York’s attorney general to impose civil penalties of up to $30 million on companies that fail to comply with these standards.

Impact on AI Companies

The RAISE Act specifically targets the largest AI companies, whether they are based in the U.S. or abroad. The legislation applies to companies that have used more than $100 million in computing resources to train their AI models, a threshold that keeps the compliance requirements focused on the most powerful players in the AI landscape.

The implications of the bill are significant: it aims to create a framework in which safety and accountability are prioritized even as AI technology evolves and presents increasingly complex challenges. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” remarked Senator Gounardes, emphasizing the urgency of proactive measures.

Industry Reactions

Despite its intent to foster safety, the RAISE Act has faced pushback from the tech industry. Critics argue that such regulations could hinder innovation and competitiveness in an industry where rapid advancement is crucial. Anjney Midha, a general partner at Andreessen Horowitz, called the RAISE Act detrimental, arguing that it could set the U.S. back in the global AI race.

Notably, the legislation was deliberately designed to avoid imposing burdensome requirements on smaller companies. Nonetheless, concerns remain that the law may deter AI developers from releasing their most advanced models in New York, a scenario that has already played out in Europe under stringent regulations.

Future Prospects

The RAISE Act is currently awaiting the decision of New York Governor Kathy Hochul, who has the option to sign it into law, propose amendments, or veto it entirely. If enacted, it could set a precedent for AI regulation in the U.S., potentially influencing future legislation at both state and federal levels.

In conclusion, the RAISE Act represents a pivotal moment in the ongoing dialogue surrounding AI safety and regulation. As technology continues to advance, the balance between fostering innovation and ensuring public safety remains a critical challenge for lawmakers and industry leaders alike.
