COMMENTARY: New California AI Laws That Matter to You in 2026
As we head into 2026, it is clear that 2025 will be remembered as a turning point: the year artificial intelligence (AI) became a regular part of daily life. Technologies such as ChatGPT have become as ubiquitous as Google, and innovations like Waymo cars are a familiar sight on big-city streets. The challenge of distinguishing real images and videos from AI-generated ones is more significant than ever.
Every sector, including entertainment, finance, and health care, is grappling with the implications of AI, and education is no exception. It can be tempting to ignore these changes and hope for the best, but whether welcomed or not, AI is here and accelerating. Its future trajectory depends on the engagement of educators, families, and policymakers alike.
People often categorize themselves as either “AI optimists” or “AI pessimists.” The reality is more complex: one can be both. There is incredible potential for:
- Personalized learning for educators
- Access to support for students
- Equitable and efficient operations for schools
However, there are also significant risks, especially when innovation outpaces the guardrails meant to protect the public. In their book “Governing the Machine,” leading AI policy experts argue that widespread adoption of AI relies on public trust, which is built through thoughtful regulation. After all, would we board a plane or drive a car without confidence in the safety standards behind them?
California’s Legislative Response
In this context, California lawmakers have made meaningful strides in AI policy. While the 2025 legislative session was not perfect, it produced a growing list of laws aimed at establishing public-interest safeguards as AI use expands. Here’s a snapshot of key AI bills from 2025:
- SB 53 (Wiener) — Transparency in Frontier AI Act: Requires developers of large-scale “frontier” AI models to publish safety protocols, report major incidents, and protect whistleblowers. Signed into law.
- SB 243 (Padilla) — AI Chatbot Safeguards for Minors: Mandates disclosures for AI “companion” tools used by minors and requires safeguards against harmful content. Signed into law.
- SB 11 (Ashby) — AI & Digital Replicas: Requires warnings when AI tools can generate realistic fake media and directs courts to examine standards for AI-generated evidence. Became law without the governor’s signature.
Additionally, two bills that passed the Legislature were vetoed by the governor. The first, the LEAD for Kids Act (AB 1064), aimed to restrict AI companion chatbots likely to promote self-harm, violence, or sexual content. The governor vetoed it over concerns about its broad language. This highlights the central complexity of legislating fast-moving technology: how to safeguard young people without creating sweeping rules that may lead to unintended consequences.
The second, the No Robo Bosses Act (SB 7), sought to require disclosure when AI is used in hiring or disciplinary decisions and to prohibit sole reliance on automated systems. The governor vetoed this bill, describing the proposed regulations as “unfocused” and lacking targeted solutions for the risks posed by AI in the workplace. Lawmakers are expected to revise it this year to address these concerns.
Looking Ahead
As lawmakers reconvene for the 2026 session, it is imperative to prioritize student-centered AI policies that emphasize safety, transparency, and educator support. The challenges facing schools extend beyond issues of plagiarism or cheating; they encompass digital safety, the erosion of critical thinking skills (cognitive offloading), and preparing young people for a future shaped by AI.
AI is not waiting for us to catch up, and neither should our policies. With limited federal action, the spotlight is on California. In the world’s fourth-largest economy, leadership is not optional; it is a responsibility. Our youth cannot afford for education leaders to remain on the sidelines.