California Becomes First US State to Enact a Dedicated AI Safety Law
The global discussion about artificial intelligence governance has entered a new stage, with California becoming the first US state to enact a dedicated AI safety law. The law, the Transparency in Frontier Artificial Intelligence Act (TFAIA), aims to strengthen accountability among major AI companies.
Key Requirements of the TFAIA
The TFAIA requires companies such as Google, Meta, and OpenAI to:
- Report high-risk incidents
- Disclose safety measures
- Safeguard whistle-blowers
Additionally, the law requires companies to publish frameworks demonstrating how they incorporate safety standards and to create mechanisms for reporting critical safety incidents to state authorities.
Potential Impact and Challenges
Columnist Stefanie Schappert noted that while California’s law could be a significant step toward accountability, the lack of federal coordination might lead to “more confusion than clarity.” This raises concerns about the effectiveness of state-level regulations without a unified federal framework.
International Context: India’s Approach
As major economies race to establish AI guardrails, India's approach stands in contrast to California's proactive measures. India has yet to implement a framework dedicated specifically to AI safety. Its existing initiatives, such as the IndiaAI Mission and the proposed Digital India Act, focus primarily on infrastructure development and facilitating innovation.
However, neither of these initiatives addresses crucial elements such as model accountability, ethical use, or AI safety, which are currently being defined in US and European law. Consequently, companies using AI models in vulnerable public systems—like health, finance, and education—are not subject to clear auditing, testing, or disclosure requirements.
Current Gaps in India’s AI Governance
According to Arjun Goswami, director of public policy at the law firm Cyril Amarchand Mangaldas, “India’s current frameworks focus on enabling AI, not regulating it.” This lack of binding safety or accountability obligations means there is little clarity on how companies should manage risks in critical sectors.
The absence of clear regulations creates significant risks, especially concerning issues of bias, data quality, and model explainability. Goswami highlighted a crucial gap in liability: “There’s no clarity on who is responsible if an AI system causes harm, whether it’s the developer, deployer, or end-user.”
Global Perspectives on AI Regulation
Countries worldwide are approaching AI governance with varying strategies. The EU AI Act, expected to be implemented next year, takes a risk-based approach that imposes the strictest requirements on systems posing the greatest risks. In China, security assessments and algorithm filings are already mandatory.
Governments also collaborated at the UK's AI Safety Summit in 2023 to formulate common testing guidelines. Meanwhile, India is taking a sectoral and voluntary approach, allowing individual ministries to publish AI advisories within their purview. While voluntary ethical guidelines are a useful first step, Goswami stresses that without a legal mandate, there is no guarantee that high-risk systems will be audited or monitored properly.
The Need for a National AI Governance Framework
Experts warn that fragmented oversight could leave India vulnerable. Varun Singh, founding partner at Foresight Law Offices, emphasizes the necessity for a national AI governance framework that aligns innovation with safety. He suggests starting with mandatory disclosure, incident reporting, and audits for high-risk sectors like health and finance, gradually extending these norms to balance innovation with accountability.
The critical trade-off for governments is how to demand transparency without limiting growth, a dilemma underscored by California’s recent experiment. As AI becomes increasingly integrated into government programs that interact with citizens, the lack of clear safety and redressal mechanisms could escalate risks.
“Waiting too long to define accountability means we’ll end up reacting to crises rather than preventing them,” warns Singh. “India needs a modular, risk-tiered framework that enforces transparency, safety, and explainability while leaving room for innovation and experimentation.”
As California sets a precedent and other jurisdictions follow suit, India faces a crucial window to establish its own balanced approach—one that protects citizens while preserving its competitive edge in the global AI race.