2026 State Chatbot Laws: Key Provisions and Regulatory Trends
The landscape of state regulation for chatbots, especially companion chatbots that simulate intimate relationships, has expanded rapidly as of April 2026. Legislatures are addressing concerns about disclosure, mental‑health safety, discrimination, and professional impersonation, turning chatbot deployment from a purely product and UX decision into a significant source of compliance and litigation risk.
Recent State Enactments
California SB 243 (Companion Chatbot Law) – Effective 1 January 2026, operators must disclose the non‑human nature of the bot, implement mental‑health crisis protocols, and protect minors by blocking sexual content and enforcing interaction breaks.
Colorado AI Act (SB 24‑205) – Effective 30 June 2026, requires “reasonable care” to prevent algorithmic discrimination in high‑risk AI systems.
Idaho SB 1297 – Adopted April 2026, mirrors Nebraska’s safety model; effective 1 July 2027.
Nebraska LB 525 (Conversational AI Safety Act) – Enacted 14 April 2026; comprehensive safety and transparency duties, effective 1 July 2027.
Oregon SB 1546 – Signed March 2026; mandates AI disclosure, suicide‑ideation detection, crisis referrals, annual filings, and a private right of action with $1,000 statutory damages per violation. Effective 1 January 2027.
Tennessee SB 1580 – Effective 1 July 2026, prohibits AI from presenting itself as a licensed mental‑health professional.
Washington HB 2225 / SB 1546 (Chatbot Disclosure Act) – Effective 1 January 2027, requires mandatory non‑human disclosures and minor safety protocols.
New York AI Companion Models Law (General Business Law Article 47) – Effective 5 November 2025, obliges operators to detect suicidal ideation, provide crisis referrals, and maintain clear AI disclosures.
Key Regulatory Trends
Private Rights of Action – States such as Oregon and Washington now allow individuals to sue chatbot providers directly for statutory damages, increasing litigation exposure.
Transparency and Non‑Human Disclosure – Nearly all recent statutes require upfront notification that users are interacting with AI, with heightened requirements for minors and contexts prone to confusion.
Minor Safety Protocols – Laws demand technical safeguards (e.g., content filtering, mandatory breaks, suicide‑ideation detection) and crisis‑intervention pathways for younger users.
Professional Licensure Restrictions – Several statutes prohibit bots from impersonating licensed doctors, lawyers, or mental‑health professionals, curbing unlicensed practice.
Data Source Disclosure – California legislation now requires labeling of training data provenance for AI‑generated outputs, affecting model developers and retrieval‑augmented systems.
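In practice, the disclosure and minor‑safety duties above often reduce to gating every outbound chatbot message through a small set of checks: disclose the non‑human nature up front, and enforce interaction breaks for minors. A minimal sketch of such a gate follows; the constants, thresholds, and message text are illustrative assumptions, not language drawn from any statute:

```python
import time
from dataclasses import dataclass, field

# Illustrative disclosure text; actual wording should track each statute.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class Session:
    """Tracks per-user state needed for the safety gate."""
    is_minor: bool
    started_at: float = field(default_factory=time.time)
    disclosed: bool = False

def apply_safeguards(session: Session, reply: str, break_after_s: int = 3600) -> str:
    """Wrap an outbound chatbot reply with disclosure and minor-safety gates."""
    parts = []
    if not session.disclosed:
        # Upfront non-human disclosure, shown once per session.
        parts.append(AI_DISCLOSURE)
        session.disclosed = True
    parts.append(reply)
    if session.is_minor and time.time() - session.started_at > break_after_s:
        # Mandatory break reminder for minors after prolonged interaction.
        parts.append("You've been chatting for a while; consider taking a break.")
        session.started_at = time.time()  # reset the break timer
    return "\n\n".join(parts)
```

A real deployment would also log these events for the annual filings some statutes require, and would surface the disclosure in the UI rather than inline in message text.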
Practical Guidance for Companies
To mitigate risk, organizations should:
- Periodically audit AI tools against evolving state definitions.
- Update user interfaces to ensure clear, prominent non‑human disclosures.
- Test and validate minor safety protocols, including real‑time intent classification for crisis detection.
- Audit chatbot scripts for inadvertent professional impersonation.
- Review data practices and labeling to meet new transparency obligations.
- Amend vendor agreements to allocate compliance responsibilities, liability, and indemnification.
- Assess insurance coverage for chatbot‑related claims.
Bottom Line
State chatbot regulation is diverging rapidly, with a focus on transparency, minor protection, professional impersonation safeguards, and private enforcement rights. Proactive compliance—through continuous monitoring, UI updates, safety testing, and contractual diligence—is essential to avoid enforcement actions and costly litigation in this dynamic legal environment.