California’s Groundbreaking AI Safety Law Sets New Standards

California Becomes First US State to Enact a Dedicated AI Safety Law

The global discussion about artificial intelligence governance has entered a new stage, with California becoming the first US state to enact a dedicated AI safety law. The new statute, the Transparency in Frontier Artificial Intelligence Act (TFAIA), aims to strengthen accountability among major AI companies.

Key Requirements of the TFAIA

The TFAIA requires companies such as Google, Meta, and OpenAI to:

  • Report high-risk incidents
  • Disclose safety measures
  • Safeguard whistle-blowers

Additionally, the law requires companies to publish frameworks demonstrating how they incorporate safety standards and to create mechanisms for reporting critical safety incidents to state authorities.
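For readers who want a concrete picture of what structured incident reporting could involve, here is a purely illustrative sketch in Python. The field names and severity taxonomy are assumptions made for illustration; the TFAIA and California's implementing authorities define the actual reporting format, not this sketch.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: field names and the severity taxonomy are assumptions,
# not the reporting format defined by the TFAIA or California authorities.
@dataclass
class CriticalSafetyIncidentReport:
    developer: str          # the frontier-model developer filing the report
    model_id: str           # internal identifier of the affected model
    occurred_at: datetime   # when the incident took place
    summary: str            # plain-language description of what happened
    severity: str           # e.g. "critical" or "major" (assumed taxonomy)
    mitigations: list[str]  # containment and remediation steps taken

report = CriticalSafetyIncidentReport(
    developer="ExampleAI",
    model_id="frontier-v1",
    occurred_at=datetime(2025, 10, 1, 14, 30),
    summary="Model produced unsafe outputs despite deployed safeguards.",
    severity="critical",
    mitigations=["rolled back to previous checkpoint", "tightened output filters"],
)
print(f"{report.developer}: {report.severity} incident at {report.occurred_at}")
```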

Potential Impact and Challenges

Columnist Stefanie Schappert noted that while California’s law could be a significant step toward accountability, the lack of federal coordination might lead to “more confusion than clarity.” This raises concerns about the effectiveness of state-level regulations without a unified federal framework.

International Context: India’s Approach

As major economies race to establish AI guardrails, India's approach stands in contrast to California's proactive measures. India has yet to adopt a framework specifically dedicated to AI safety: its existing initiatives, such as the IndiaAI Mission and the proposed Digital India Act, focus primarily on building infrastructure and facilitating innovation.

However, neither initiative addresses crucial elements such as model accountability, ethical use, or AI safety, all of which are now being defined in US and European law. Consequently, companies deploying AI models in vulnerable public systems such as health, finance, and education are not subject to clear auditing, testing, or disclosure requirements.

Current Gaps in India’s AI Governance

According to Arjun Goswami, director of public policy at the law firm Cyril Amarchand Mangaldas, “India’s current frameworks focus on enabling AI, not regulating it.” This lack of binding safety or accountability obligations means there is little clarity on how companies should manage risks in critical sectors.

The absence of clear regulations creates significant risks, especially concerning issues of bias, data quality, and model explainability. Goswami highlighted a crucial gap in liability: “There’s no clarity on who is responsible if an AI system causes harm, whether it’s the developer, deployer, or end-user.”

Global Perspectives on AI Regulation

Countries worldwide are approaching AI governance with varying strategies. The EU AI Act, whose obligations are being phased in over the coming years, adopts a risk-based approach that imposes the strictest requirements on systems posing significant risks. In China, by contrast, security assessments and algorithm filings are already mandatory.

Governments have also sought common ground internationally: at the UK's AI Safety Summit in 2023, they agreed to work toward uniform testing guidelines. India, meanwhile, is taking a sectoral and voluntary approach, allowing individual ministries to publish AI advisories within their purview. While voluntary ethical guidelines are a useful first step, Goswami stresses that without a legal mandate there is no guarantee that high-risk systems will be properly audited or monitored.

The Need for a National AI Governance Framework

Experts warn that fragmented oversight could leave India vulnerable. Varun Singh, founding partner at Foresight Law Offices, emphasizes the need for a national AI governance framework that aligns innovation with safety. He suggests starting with mandatory disclosure, incident reporting, and audits for high-risk sectors such as health and finance, then gradually extending these norms to balance innovation with accountability.

The critical trade-off for governments is how to demand transparency without limiting growth, a dilemma underscored by California's recent experiment. As AI becomes increasingly integrated into government programs that interact with citizens, the absence of clear safety and redressal mechanisms could amplify the risks.

“Waiting too long to define accountability means we’ll end up reacting to crises rather than preventing them,” warns Singh. “India needs a modular, risk-tiered framework that enforces transparency, safety, and explainability while leaving room for innovation and experimentation.”
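To make the idea of a "modular, risk-tiered framework" concrete, here is a minimal hypothetical sketch in Python. The tier names, sector mapping, and obligation labels are assumptions assembled from the measures the experts mention (disclosure, incident reporting, audits); this is not a description of any actual or proposed Indian regulation.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g. health and finance, per Singh's suggestion
    MEDIUM = "medium"
    LOW = "low"

# Obligations per tier; labels mirror the measures named in the article.
OBLIGATIONS = {
    RiskTier.HIGH:   {"mandatory_disclosure", "incident_reporting", "independent_audit"},
    RiskTier.MEDIUM: {"mandatory_disclosure", "incident_reporting"},
    RiskTier.LOW:    {"voluntary_guidelines"},
}

# Hypothetical sector-to-tier mapping for illustration only.
SECTOR_TIERS = {
    "health": RiskTier.HIGH,
    "finance": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "retail": RiskTier.LOW,
}

def obligations_for(sector: str) -> set[str]:
    """Return the compliance obligations an AI deployer in `sector` would face."""
    tier = SECTOR_TIERS.get(sector, RiskTier.MEDIUM)  # default tier is an assumption
    return OBLIGATIONS[tier]

for sector in ("health", "retail", "logistics"):
    print(f"{sector}: {sorted(obligations_for(sector))}")
```

The appeal of such a structure is that tiers and obligations can be extended module by module, which is roughly what a gradual extension of norms from high-risk sectors outward would look like in practice.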

As California sets a precedent and other jurisdictions follow suit, India faces a crucial window to establish its own balanced approach—one that protects citizens while preserving its competitive edge in the global AI race.
