New York’s Landmark AI Regulation: The RAISE Act

New York has taken a significant step toward becoming America’s first state to establish legally mandated transparency standards for frontier artificial intelligence systems. The Responsible AI Safety and Education Act (RAISE Act) aims to prevent AI-fueled disasters while balancing the innovation concerns that doomed similar efforts in other states. The bill passed both chambers of the New York State Legislature in June 2025 and is now headed to Governor Kathy Hochul’s desk, where she could sign it into law, send it back for amendments, or veto it altogether.

Background & Key Provisions

The RAISE Act draws on lessons from California’s failed SB 1047, which Governor Gavin Newsom vetoed in September 2024. It targets only the most powerful AI systems, applying specifically to companies whose models meet both of the following criteria:

  • Training Cost Threshold: The model was trained using more than $100 million in computing resources.
  • Geographic Reach: The model is made available to New York residents.

This narrow scope deliberately excludes smaller companies, startups, and academic researchers — addressing key criticisms of California’s SB 1047.
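As a minimal illustration, coverage under the Act reduces to a conjunction of the two criteria above. The sketch below is hypothetical: the $100 million threshold comes from the bill as summarized here, while the class, field, and function names are assumptions made for illustration only.

```python
from dataclasses import dataclass

# Hypothetical sketch of the RAISE Act's two-part applicability test.
# The $100M threshold reflects the bill as summarized above; the names
# (FrontierModel, is_covered) are illustrative, not statutory terms.

TRAINING_COST_THRESHOLD_USD = 100_000_000

@dataclass
class FrontierModel:
    training_compute_cost_usd: float  # compute spend used to train the model
    available_in_new_york: bool       # offered to New York residents

def is_covered(model: FrontierModel) -> bool:
    """A model falls under the Act only if BOTH criteria are met."""
    return (
        model.training_compute_cost_usd > TRAINING_COST_THRESHOLD_USD
        and model.available_in_new_york
    )

# A $50M research model stays out of scope even if offered in New York.
assert not is_covered(FrontierModel(50_000_000, True))
assert is_covered(FrontierModel(250_000_000, True))
```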

Core Requirements

The legislation establishes four primary obligations for covered companies:

  1. Safety and Security Protocols: Covered companies must publish safety and security protocols and risk evaluations. These protocols must address severe risks, such as assisting in the creation of biological weapons or carrying out automated criminal activity.
  2. Incident Reporting: The bill also requires AI labs to report safety incidents when they occur, including scenarios where a dangerous model is stolen or compromised by bad actors or exhibits concerning autonomous behavior.
  3. Risk Assessment and Mitigation: Companies must conduct thorough risk evaluations covering catastrophic scenarios (encoded in the sketch after this list), including:
    • Death or injury of more than 100 people
    • Economic damages exceeding $1 billion
    • Assistance in creating biological or chemical weapons
    • Facilitation of large-scale criminal activity
  4. Third-Party Auditing: Companies must undergo independent third-party audits to verify compliance with the Act.
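To make the catastrophic-harm thresholds in item 3 concrete, here is a minimal sketch encoding them as a simple data check. The numeric cutoffs mirror the summary above; the `RiskScenario` and `is_critical_harm` names are hypothetical, not terms from the bill.

```python
from dataclasses import dataclass

# Hypothetical encoding of the catastrophic scenarios listed above.
# Thresholds follow the summary in this article; names are illustrative.

CASUALTY_THRESHOLD = 100
ECONOMIC_DAMAGE_THRESHOLD_USD = 1_000_000_000

@dataclass
class RiskScenario:
    deaths_or_injuries: int
    economic_damages_usd: float
    enables_bio_chem_weapons: bool
    enables_large_scale_crime: bool

def is_critical_harm(s: RiskScenario) -> bool:
    """Any single criterion suffices to make a scenario catastrophic."""
    return (
        s.deaths_or_injuries > CASUALTY_THRESHOLD
        or s.economic_damages_usd > ECONOMIC_DAMAGE_THRESHOLD_USD
        or s.enables_bio_chem_weapons
        or s.enables_large_scale_crime
    )

# Example: $2 billion in projected damages alone crosses the bar.
assert is_critical_harm(RiskScenario(0, 2_000_000_000, False, False))
```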

Enforcement Mechanisms

If covered companies fail to meet these standards, the RAISE Act empowers New York’s attorney general to seek civil penalties of up to $30 million. This enforcement structure provides meaningful deterrence while stopping short of criminal liability.

Safe Harbor Provisions

The Act includes protections for legitimate business and safety interests, allowing companies to make “appropriate redactions” to their published safety protocols when necessary to:

  • Protect public safety
  • Safeguard trade secrets
  • Maintain confidential information as required by law
  • Protect employee or customer privacy

Distinguishing Features from California’s SB 1047

The RAISE Act appears deliberately crafted to address the specific criticisms that sank California’s SB 1047:

  • No “Kill Switch” Requirement: The RAISE Act does not require AI model developers to include a “kill switch” on their models.
  • No Post-Training Liability: It does not hold companies that post-train frontier AI models accountable for critical harms.
  • Academic Exemptions: Universities and research institutions are excluded from coverage.
  • Startup Protection: The high computational cost threshold ensures smaller companies remain unaffected.

Broader Regulatory Landscape

The RAISE Act reflects a broader debate over AI regulation in the United States. Key considerations include:

  • Regulatory Fragmentation: The potential for a patchwork of state regulations creating compliance challenges.
  • Federal Preemption: Ongoing Congressional efforts to establish uniform national standards.
  • International Competitiveness: Balancing safety concerns with maintaining U.S. leadership in AI development.

Legal Implications

Companies operating frontier AI models in New York should consider preparing for potential compliance requirements:

  1. Safety Protocol Documentation: Begin developing comprehensive safety and security protocols that can withstand public scrutiny while protecting proprietary information.
  2. Incident Response Systems: Establish robust systems for detecting, documenting, and reporting safety incidents (a minimal record format is sketched after this list).
  3. Third-Party Audit Preparation: Identify qualified auditors and establish audit-ready documentation systems.
  4. Legal Review: Conduct thorough legal analysis of current operations under the proposed regulatory framework.
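As a minimal sketch supporting items 2 and 3, an audit-ready incident record might look like the following. The incident categories mirror the bill’s examples (model theft, concerning model behavior); every name and field here is an assumption for illustration, not a requirement drawn from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Hypothetical audit-ready record for safety incidents. Categories echo
# the bill's examples as summarized above; field names are illustrative.

class IncidentType(Enum):
    MODEL_THEFT = "unauthorized access to or exfiltration of model weights"
    AUTONOMOUS_BEHAVIOR = "concerning or unintended autonomous model behavior"
    SAFEGUARD_FAILURE = "failure or bypass of a published safety protocol"

@dataclass
class SafetyIncident:
    incident_type: IncidentType
    model_name: str
    description: str
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    reported_to_regulator: bool = False  # tracks the disclosure obligation

# Example: logging a suspected weights exfiltration for later disclosure.
incident = SafetyIncident(
    incident_type=IncidentType.MODEL_THEFT,
    model_name="example-frontier-model",
    description="Anomalous bulk download of checkpoint artifacts detected.",
)
```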

This analysis is based on publicly available information as of June 2025. Legal practitioners should monitor ongoing developments and consult current legislation and regulations for the most up-to-date requirements.
