New York’s Bold Move to Regulate AI Giants’ Safety Protocols

New York is poised to enact one of the first comprehensive laws regulating advanced AI models in the United States: the Responsible AI Safety and Education (RAISE) Act. The legislation has already passed the state Senate and now awaits a final decision from Governor Kathy Hochul.

Key Provisions of the RAISE Act

If enacted, the RAISE Act will impose strict safety requirements on major AI developers, including OpenAI, Google, and Anthropic. The law would require these companies to publish detailed safety protocols and conduct risk assessments before releasing advanced AI models to the public. Developers would also have to report serious incidents, such as model theft or instances of dangerous AI behavior. Violations could carry civil penalties of up to $30 million.

Targeting Large Corporations

Senator Andrew Gounardes, one of the bill’s sponsors, emphasized that the law is designed specifically for large corporations that spend more than $100 million on model training; startups and academic institutions are exempt. “The window to put guardrails in place is closing fast,” Gounardes said, underscoring the urgency of the legislation.

Avoiding Previous Pitfalls

The RAISE Act was drafted to avoid the pitfalls of earlier legislative efforts, particularly California’s SB 1047, which drew criticism for being overly restrictive. Unlike that bill, the RAISE Act neither mandates kill switches nor holds companies liable for models that have undergone modification.

Industry Response and Concerns

The proposal has drawn criticism from figures in the tech industry, with Anjney Midha, a partner at Andreessen Horowitz, labeling the bill as “dumb.” He cautioned that it could place the U.S. at a competitive disadvantage in the global tech landscape. Jack Clark, co-founder of Anthropic, also expressed concerns about the potential impact on smaller companies, indicating that the legislation might unintentionally stifle innovation.

Despite the pushback, Gounardes defended the bill, asserting that it is intentionally focused on the largest players in the market. Major tech firms such as OpenAI, Google, and Meta have not publicly commented on the bill. Some critics speculate that companies may simply decline to offer their models in New York, which could significantly impact the state’s tech ecosystem. Gounardes considers this scenario unlikely: New York is the third-largest economy in the U.S., making withdrawal economically infeasible for these companies.

Potential Federal Legislation

In a related development, the U.S. House of Representatives recently passed a legislative package that could impose a ten-year ban on state-level AI regulations. If this “One Big Beautiful Bill” passes the Senate, it could effectively block laws like the RAISE Act in the future. Supporters of the moratorium, including major tech companies and free-market think tanks, argue that unified federal regulation is necessary to streamline the development of AI technologies. Critics counter that such centralization could undermine consumer protections and favor industry interests over public safety.

Conclusion

The RAISE Act represents a significant step toward establishing a regulatory framework for advanced AI technologies in New York. By mandating safety protocols and risk assessments, the legislation aims to balance innovation with public safety in an increasingly complex technological landscape. As the bill awaits the governor’s signature, its implications will likely resonate across the tech industry and set a precedent for AI regulation nationwide.
