New York’s RAISE Act and the Misunderstanding of AI Safety

New York State’s proposed Responsible AI Safety and Education (RAISE) Act aims to protect individuals from the harms associated with artificial intelligence (AI). But the bill treats the AI models themselves as the primary point of leverage for ensuring safety, an approach that risks transforming a technical challenge into a bureaucratic burden.

Overview of the RAISE Act

The RAISE Act, authored by State Assemblymember Alex Bores, establishes a set of requirements intended to ensure that AI technologies are deployed responsibly; it is currently under debate in committee. Legislators, including Bores, fear that advanced AI could facilitate the creation of chemical, biological, and nuclear weapons. Yet the greater risk lies in the accessibility of dangerous precursor materials, not in the AI systems themselves.

Requirements and Compliance

Similar to California’s SB 1047, the RAISE Act targets advanced “frontier models”—AI systems that meet specific computational thresholds and cost over $100 million to train. Developers of these covered models must adhere to various stringent requirements, including:

  • Mandatory testing procedures and risk mitigation strategies
  • Regular third-party audits
  • Transparency mandates
  • Reporting of instances where a system has facilitated dangerous incidents
  • Retention of detailed testing records for five years
  • Annual protocol reviews and updates
  • Prohibition against deploying “unreasonably” risky models
  • Protection of employee whistleblower rights

Violations of these requirements carry significant penalties: 5% of compute costs for a first violation, rising to 15% for subsequent infractions. Measured against the $100 million training-cost threshold that defines a covered model, that works out to fines starting at roughly $5 million and $15 million, respectively.
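For a rough sense of scale, here is a minimal Python sketch that applies the bill’s stated penalty rates to the $100 million training-cost threshold for covered models. Treating that threshold as the compute-cost base is a simplifying assumption; actual fines would be calculated from a model’s real compute costs, which may well be higher.

```python
# Illustrative only: apply the RAISE Act's stated penalty rates (5% / 15% of
# compute costs) to the $100 million training-cost floor for covered models.
# Real fines would be based on a model's actual compute costs, not this floor.
TRAINING_COST_FLOOR = 100_000_000  # minimum spend for a covered "frontier model"

first_violation_fine = 0.05 * TRAINING_COST_FLOOR       # 5% rate
subsequent_violation_fine = 0.15 * TRAINING_COST_FLOOR  # 15% rate

print(f"First violation:      ${first_violation_fine:,.0f}")       # $5,000,000
print(f"Subsequent violation: ${subsequent_violation_fine:,.0f}")  # $15,000,000
```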

Challenges in Model Alignment

The RAISE Act’s premise is to align companies’ profit motives with public safety interests. However, aligning AI models to prevent misuse has proven to be a complex challenge. Model alignment tends to be more effective at mitigating accidental harms, such as misleading advice or incorrect information, than at curtailing deliberate misuse.

Princeton University computer scientists Arvind Narayanan and Sayash Kapoor have highlighted the brittleness of model-level safeguards: even a model engineered to be “safe” can be jailbroken or otherwise exploited for harmful purposes.

Emerging Approaches in AI Safety

Current strategies for keeping models safe increasingly rely on external systems layered on top of the models themselves. Leading companies are investing in the development of:

  • External content filters
  • Human oversight protocols
  • Real-time monitoring systems

These measures aim to detect and prevent harmful outputs, indicating that the market is advancing more rapidly than the existing regulatory frameworks.
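To make the architecture concrete, below is a minimal Python sketch of an external content filter wrapped around a model call. Everything in it (the generate_fn parameter, the BLOCKED_PATTERNS list, the withheld-response message) is a hypothetical stand-in rather than any vendor’s actual API; real deployments use trained classifiers and human review queues, not keyword lists.

```python
# Minimal sketch of an external content filter: the safeguard sits outside the
# model and screens its output before anything reaches the user.
# generate_fn and BLOCKED_PATTERNS are illustrative stand-ins, not a real API.
from typing import Callable

BLOCKED_PATTERNS = ["synthesis route", "enrichment cascade", "precursor sourcing"]

def filtered_generate(prompt: str, generate_fn: Callable[[str], str]) -> str:
    """Run the underlying model, then withhold output that trips the filter."""
    draft = generate_fn(prompt)
    hits = [p for p in BLOCKED_PATTERNS if p in draft.lower()]
    if hits:
        # A production system would log the event and escalate to human review.
        return "This response has been withheld for review."
    return draft

# Usage with a stand-in "model" so the example runs on its own:
if __name__ == "__main__":
    fake_model = lambda prompt: "General, publicly available safety guidance."
    print(filtered_generate("How are chemical precursors regulated?", fake_model))
```

The structural point is what matters: because the safeguard lives outside the model, it can be updated, audited, and monitored independently of the model weights, which is the layer the RAISE Act’s model-centric requirements largely overlook.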

Regulatory Burden vs. Actual Safety

The RAISE Act piles on requirements whose connection to its objectives is unclear. If robust safety protocols are functioning effectively, the need for five years of record-keeping is questionable. And if a model passes an independent audit, it is not obvious why developers must also satisfy a separate “reasonableness” standard before deployment.

These inconsistencies go beyond mere bureaucratic inefficiencies. The RAISE Act attempts to address various complex issues including corporate transparency, employee protections, technical safety, and liability within a single regulatory framework. This broad approach risks prioritizing compliance over genuine safety outcomes.

Cost of Compliance

Policymakers often underestimate the compliance costs associated with AI legislation, and previous analyses have shown that official projections can miss the true financial burden. When advanced large language models (LLMs) were used to evaluate the RAISE Act, the estimates suggested that initial compliance may require between 1,070 and 2,810 hours of labor (at the high end, roughly a full-time employee), with ongoing burdens of 280 to 1,600 hours in subsequent years.
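As a sanity check on those figures, the short calculation below converts the estimated hour ranges into full-time-equivalent (FTE) terms. The hour ranges are the LLM-derived estimates quoted above; the 2,080-hour work year (40 hours times 52 weeks) is an assumption introduced here for the conversion.

```python
# Convert the estimated compliance-hour ranges into full-time equivalents (FTE),
# assuming a standard 2,080-hour work year (40 hours x 52 weeks).
WORK_YEAR_HOURS = 2_080

estimates = {
    "initial compliance": (1_070, 2_810),  # hours, first year
    "ongoing compliance": (280, 1_600),    # hours, each subsequent year
}

for label, (low, high) in estimates.items():
    print(f"{label}: {low / WORK_YEAR_HOURS:.2f} to {high / WORK_YEAR_HOURS:.2f} FTE")

# initial compliance: 0.51 to 1.35 FTE  (roughly one full-time hire at the high end)
# ongoing compliance: 0.13 to 0.77 FTE
```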

This significant variance in compliance estimates underscores the inherent uncertainty surrounding the RAISE Act and similar legislation. The rapid evolution of sophisticated AI technologies indicates a pressing need for laws that prioritize effective risk mitigation rather than regulatory theater.
