AI Guardrails Will Stop Being Optional in 2026
The rise of artificial intelligence in 2025 was historic by any measure, and its widespread adoption has forced public debate and policy to converge quickly on questions of safe, fair use and governance.
As we enter a new year, AI regulation in the U.S. is no longer an abstract debate. It has become an operational reality.
Until now, discussions around AI governance have lived in white papers, statements of principle, and future-facing roadmaps. However, on Jan. 1, California moved that conversation into production.
California’s New Laws
Two new state laws are now in effect, focusing on a deceptively simple question: What happens when an AI system talks directly to a person? If an AI system answers questions, offers guidance, or sustains ongoing conversations with users in California, these laws apply — regardless of where the organization that built it is headquartered.
California lawmakers are not trying to regulate model architectures or training techniques. They are regulating something far harder to control: how AI behaves once it is deployed and interacting with real people, in real situations, under real pressure.
Regulating AI Behavior, Not AI Theory
The new laws, SB 243 and AB 489, share a common assumption: AI systems will encounter edge cases. Conversations will drift, and users will bring emotional, medical, or high-stakes questions into contexts the system was not designed to address.
Static policies written months earlier will not cover every scenario. Rather than ban conversational AI, California has taken a pragmatic approach. If an AI system influences decisions or builds emotional rapport with users, it must have safeguards that hold up in production, not just in documentation. That is precisely the area where many organizations are least prepared.
SB 243: When a Chatbot Becomes a Companion
SB 243, signed in October 2025, targets what lawmakers call “companion AI,” or systems designed to engage users over time rather than answer a single transactional question. These systems can feel persistent, responsive, and emotionally attuned. Over time, users may stop perceiving them as tools and start treating them as a presence. That is the risk SB 243 attempts to address.
The law establishes three core expectations:
- Continuous AI Disclosure: If a reasonable person could believe they are interacting with a human, the system must clearly disclose that it is AI, not just once, but repeatedly during longer conversations. For minors, the law requires frequent reminders and encouragement to take breaks.
- Serious Conversations: When users express suicidal thoughts or self-harm intent, systems are expected to recognize that shift and intervene. They must halt harmful conversational patterns, trigger predefined responses, and direct users to real-world crisis support (a minimal sketch of this logic, together with the disclosure cadence above, follows this list).
- Accountability Post-Launch: Beginning in 2027, operators must report how often these safeguards are triggered and how they perform in practice. SB 243 also introduces a private right of action, significantly raising the stakes for systems that fail under pressure.
The legislative message is clear: Good intentions are not enough if the AI says the wrong thing at the wrong moment.
AB 489: When AI Sounds Like a Doctor
AB 489 focuses on AI systems that imply medical expertise without actually having it. Many health and wellness chatbots rely on tone, terminology, or design cues that feel clinical and authoritative. For users, those distinctions are often invisible or indecipherable.
Starting Jan. 1, AB 489 prohibits AI systems from using titles, language, or other representations that suggest licensed medical expertise unless that expertise is genuinely involved. Describing outputs as “doctor-level” or “clinician-guided” without factual backing may constitute a violation.
Small cues that could mislead users may count as violations, with enforcement extending to professional licensing boards. This creates a familiar engineering challenge: building systems that are informative and helpful without sounding authoritative. Under AB 489, that line matters.
From Governance Frameworks to Runtime Control
SB 243 and AB 489 mark a shift in how AI governance will be enforced, for now only at the state level. Regulators are looking at live behavior, focusing on what the AI actually says in context during user interactions.
These new laws move AI governance out of compliance binders and into production systems. For most organizations, compliance does not require rebuilding models from scratch, but necessitates control at runtime — the ability to intercept unsafe, misleading, or noncompliant outputs before they reach users.
This is where AI security and AI governance converge. Runtime guardrails make regulatory requirements actionable. Instead of hoping a model behaves as intended, teams can define explicit boundaries for sensitive scenarios, monitor interactions as they happen, and intervene when conversations drift into risk.
A Special Note on the Federal Policy Picture
A December 2025 executive order directing federal agencies to review state-level AI regulations has raised questions about preemption. For now, the operational reality remains straightforward: executive orders do not override state law. There is no federal AI statute that preempts California’s rules.
Organizations that invest now in controlling AI behavior in production will be better positioned for the new policy realities of 2026 and the next wave of AI regulation. If your AI systems already talk to users, this is the moment to decide what they are allowed to say — and what should never leave the system at all.