New York’s Bold Move to Regulate AI Safety

New York State’s AI Safety Bill: The RAISE Act

On June 17, 2025, New York state lawmakers passed the RAISE Act, a significant piece of legislation aimed at preventing potential disasters stemming from frontier AI models developed by companies such as OpenAI, Google, and Anthropic. The bill targets risks that could lead to catastrophic outcomes, defined as the death of more than 100 people or damages exceeding $1 billion.

Context and Motivations

The passage of the RAISE Act marks a notable victory for the AI safety movement, which has faced challenges in recent years as the tech industry has prioritized rapid innovation over regulatory frameworks. Advocates for AI safety, including notable figures like Geoffrey Hinton and Yoshua Bengio, have championed this legislative effort. If enacted, the RAISE Act would establish the first legally mandated transparency standards for frontier AI laboratories in the United States.

Key Provisions of the RAISE Act

While the RAISE Act shares some similarities with California’s controversial AI safety legislation, SB 1047, it has been designed to address criticisms leveled at its predecessor. According to state Senator Andrew Gounardes, the bill’s co-sponsor, the RAISE Act aims to avoid stifling innovation among startups and academic researchers, a concern often raised about SB 1047.

Key provisions of the RAISE Act include:

  • The requirement for the world’s largest AI laboratories to publish detailed safety and security reports on their frontier AI models.
  • A mandate for AI labs to report safety incidents, which may include concerning behaviors of AI models or any breaches involving theft of AI technology.
  • Empowering New York’s attorney general to impose civil penalties of up to $30 million for failures to comply with these standards.

Impact on AI Companies

The RAISE Act targets major corporations regardless of where they are based, whether in the U.S. or abroad. The legislation applies to companies that have used more than $100 million in computing resources to train their AI models, a threshold that focuses the compliance requirements on the most powerful entities in the AI landscape.

The implications of this bill are profound. It is designed to create a framework where safety and accountability are prioritized, as the evolving nature of AI technology presents increasingly complex challenges. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” remarked Senator Gounardes, emphasizing the urgency of proactive measures.

Industry Reactions

Despite its intent to foster safety, the RAISE Act has faced pushback from the tech industry. Critics argue that such regulations could hinder innovation and competitiveness in an industry where rapid advancements are crucial. Anjney Midha, a general partner at Andreessen Horowitz, labeled the RAISE Act as detrimental, suggesting that it could impede the U.S. in the global AI race.

Notably, the legislation has been designed to avoid imposing overly burdensome requirements on smaller companies. Nonetheless, concerns remain that the law may deter AI developers from releasing their most advanced models in New York, a scenario that has already played out in Europe due to stringent regulations.

Future Prospects

The RAISE Act is currently awaiting the decision of New York Governor Kathy Hochul, who has the option to sign it into law, propose amendments, or veto it entirely. If enacted, it could set a precedent for AI regulation in the U.S., potentially influencing future legislation at both state and federal levels.

In conclusion, the RAISE Act represents a pivotal moment in the ongoing dialogue surrounding AI safety and regulation. As technology continues to advance, the balance between fostering innovation and ensuring public safety remains a critical challenge for lawmakers and industry leaders alike.
