AI Safety Laws: A Comparative Look at California and New York

The landscape of artificial intelligence regulation is evolving rapidly, most notably with the recent enactment of AI safety laws in California and New York. Both states now impose obligations on developers of the largest AI models, aiming to make deployment safer by addressing the most serious potential risks.

Landmark Convergence

California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) was the first state law to address potentially catastrophic risks from AI. Notably, New York’s governor adopted many of California’s provisions in amendments to the state’s Responsible AI Safety and Education (RAISE) Act, with the stated goal of establishing a “unified benchmark” for AI regulation across states. This convergence signals a move toward a cohesive approach to AI safety.

Large Developers and Models

Currently, only a limited number of companies and models meet the revenue and compute thresholds established by these laws. These thresholds will undergo annual evaluations and may be revised based on technological advancements and evolving standards.
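
To illustrate how these gates compose, the sketch below encodes the figures widely reported for California’s TFAIA: a 10^26-operation training-compute floor for “frontier models” and a $500 million annual-revenue floor for “large frontier developers.” Treat the constants and all names as assumptions for illustration; New York’s thresholds differ in detail, and both laws allow the figures to be revised.

```python
# Illustrative only, not legal guidance. The constants reflect figures
# reported for California's TFAIA and are subject to annual revision.
FRONTIER_MODEL_TRAINING_OPS = 1e26         # training-compute floor (operations)
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue floor

def is_frontier_model(training_ops: float) -> bool:
    """A model is 'frontier' if its training compute exceeds the floor."""
    return training_ops > FRONTIER_MODEL_TRAINING_OPS

def is_large_frontier_developer(training_ops: float, revenue_usd: float) -> bool:
    """A developer is 'large' if it trains a frontier model and clears the revenue floor."""
    return is_frontier_model(training_ops) and revenue_usd > LARGE_DEVELOPER_REVENUE_USD

# A lab that trained a 3e26-operation model on $2B in revenue is covered;
# the same model from a $50M-revenue startup does not meet the revenue floor.
print(is_large_frontier_developer(3e26, 2_000_000_000))  # True
print(is_large_frontier_developer(3e26, 50_000_000))     # False
```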

Unclear Future

The future of state AI laws remains uncertain following the White House’s recent Executive Order 14365, which aims to formulate a national policy framework for AI while minimizing regulatory fragmentation. Whether California’s and New York’s AI safety laws will face challenges under the order is still unclear.

Key Provisions of the AI Safety Laws

Both California and New York have established frameworks of safeguards for the deployment of large “frontier AI models.” Key provisions of the two laws compare as follows:

  • Core Requirement: Both laws require a documented technical and organizational framework for managing risks; the requirements are substantially similar.
  • Covered Models: Both cover frontier models defined by specific operational thresholds, using similar definitions.
  • Covered Developers: Both define developers, including large frontier developers subject to a revenue threshold, in similar terms.
  • Risk Definition: Both define catastrophic risk in terms of significant harm or damage, using the same definition.
  • Incident Reporting: California requires critical incidents to be reported within 15 days; New York requires reporting within 72 hours (see the deadline sketch after this list).
  • Public Disclosure: Both states require developers to publish transparency reports.
  • Penalties: Penalties can reach up to $1 million per violation in California and up to $3 million for subsequent violations in New York.
  • Whistleblower Protections: California protects employees responsible for risk assessment; New York includes no comparable protections.
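
To make the reporting gap concrete, the minimal sketch below computes both deadlines from a single discovery time. The 15-day and 72-hour windows come from the comparison above; the assumption that both clocks start at discovery, and all function and variable names, are illustrative rather than drawn from the statutes.

```python
from datetime import datetime, timedelta

# Illustrative only, not legal guidance. Window lengths are taken from the
# comparison above; assuming both clocks run from the time of discovery.
CA_WINDOW = timedelta(days=15)   # California TFAIA: critical incident reports
NY_WINDOW = timedelta(hours=72)  # New York RAISE Act: incident reports

def reporting_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """Return the latest permissible report time under each state's window."""
    return {
        "California": discovered_at + CA_WINDOW,
        "New York": discovered_at + NY_WINDOW,
    }

# An incident discovered on 2026-03-01 09:00 must be reported in New York
# by 2026-03-04 09:00, while California allows until 2026-03-16 09:00.
for state, deadline in reporting_deadlines(datetime(2026, 3, 1, 9, 0)).items():
    print(f"{state}: report by {deadline:%Y-%m-%d %H:%M}")
```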

Conclusion

The enactment of the TFAIA in California and the RAISE Act in New York marks a pivotal moment in the regulation of artificial intelligence. These laws not only aim to mitigate catastrophic risks from frontier AI models but also set a precedent for other states to follow. As implementation unfolds, the laws’ impact on both innovation and safety in AI deployment will be closely watched.
