New York’s Bold Move to Regulate AI Safety

New York State’s AI Safety Bill: The RAISE Act

On June 17, 2025, New York state lawmakers passed the RAISE Act, a significant piece of legislation aimed at preventing disasters caused by frontier AI models developed by companies such as OpenAI, Google, and Anthropic. The bill targets risks with catastrophic potential, defined as incidents causing the deaths of more than 100 people or more than $1 billion in damages.

Context and Motivations

The passage of the RAISE Act marks a notable victory for the AI safety movement, which has faced challenges in recent years as the tech industry has prioritized rapid innovation over regulatory frameworks. Advocates for AI safety, including notable figures like Geoffrey Hinton and Yoshua Bengio, have championed this legislative effort. If enacted, the RAISE Act would establish the first legally mandated transparency standards for frontier AI laboratories in the United States.

Key Provisions of the RAISE Act

While the RAISE Act shares some similarities with California’s controversial AI safety bill, SB 1047, it was drafted to address criticisms leveled at that predecessor. According to state Senator Andrew Gounardes, the bill’s co-sponsor, the RAISE Act is intended to avoid stifling innovation among startups and academic researchers, a concern frequently raised about SB 1047.

Key provisions of the RAISE Act include:

  • Requiring the world’s largest AI laboratories to publish detailed safety and security reports on their frontier AI models.
  • Mandating that AI labs report safety incidents, including concerning model behaviors and breaches involving theft of AI technology.
  • Empowering New York’s attorney general to impose civil penalties of up to $30 million on companies that fail to comply with these standards.

Impact on AI Companies

The RAISE Act targets major corporations regardless of whether they are based in the U.S. or abroad. The legislation applies to companies that have used more than $100 million in computing resources to train their AI models, a threshold that keeps the compliance burden focused on the most powerful players in the AI landscape.

The implications of this bill are profound. It is designed to create a framework where safety and accountability are prioritized, as the evolving nature of AI technology presents increasingly complex challenges. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” remarked Senator Gounardes, emphasizing the urgency of proactive measures.

Industry Reactions

Despite its intent to foster safety, the RAISE Act has faced pushback from the tech industry. Critics argue that such regulations could hinder innovation and competitiveness in an industry where rapid advancements are crucial. Anjney Midha, a general partner at Andreessen Horowitz, labeled the RAISE Act as detrimental, suggesting that it could impede the U.S. in the global AI race.

Interestingly, the legislation was designed to avoid imposing overly burdensome requirements on smaller companies. Nonetheless, concerns remain that the law may deter AI developers from releasing their most advanced models in New York, a scenario that has already played out in Europe under stringent regulations.

Future Prospects

The RAISE Act is currently awaiting the decision of New York Governor Kathy Hochul, who has the option to sign it into law, propose amendments, or veto it entirely. If enacted, it could set a precedent for AI regulation in the U.S., potentially influencing future legislation at both state and federal levels.

In conclusion, the RAISE Act represents a pivotal moment in the ongoing dialogue surrounding AI safety and regulation. As technology continues to advance, the balance between fostering innovation and ensuring public safety remains a critical challenge for lawmakers and industry leaders alike.