Not Human, Not Optional: The New Wave of State AI Chatbot Laws
State lawmakers are moving quickly from general discussions of AI transparency to chatbot-specific rules that may require real product changes: recurring “not human” notices, self-harm escalation protocols, safeguards for minors, audits, and even public reporting.
For companies providing customer-facing chat services, or AI that affects consequential decisions in employment, credit, housing, healthcare, or education, the risks are becoming tangible. Requirements are emerging state by state, with potential repercussions ranging from investigations and reputational damage to private lawsuits and injunctions.
The Urgency of Compliance
The key takeaway for businesses is to treat chatbot compliance as part of product safety and governance, not a reactive measure to address later. The shift matters because states are drafting laws that reach into product functionality: how AI identity is disclosed, how sensitive conversations are handled, what interactions are logged, and what evidence must exist after an incident.
Categories of Regulation
Most proposed regulations fall into three buckets, each touching a different aspect of chatbot design and operation:
1) Companion/Emotionally Responsive Chatbots: Safety Plus Recurring Disclosures
This category focuses on chatbots designed to feel personal and engaging. Legislatures are particularly concerned with emotionally responsive features that can influence user behavior and engage minors. Recent proposals from the Pacific Northwest, such as Oregon SB 1546 and Washington HB 2225, emphasize the need for clear identification that the user is interacting with AI, repeated reminders during long sessions, and effective self-harm escalation pathways.
The business risks in this bucket are direct and substantial: noncompliance can trigger enforcement and private claims, and mishandled sensitive conversations can cost customer trust and cause lasting reputational harm.
2) High-Impact Decision Chatbots: Governance When AI Influences Outcomes
This category covers chatbots that materially affect user outcomes. When a chatbot helps determine eligibility, ranking, access, or pricing in a critical area, regulators will scrutinize how it works, how it was tested, how it is monitored for drift, and how errors are corrected. Colorado’s SB 24-205 exemplifies this trend, emphasizing comprehensive governance practices for high-impact systems.
3) Employment Use Cases: Bias Audits Plus Notices
Employment applications of AI have already prompted operational regulations, spearheaded by initiatives like NYC Local Law 144, which mandates bias audits and disclosure notices for automated hiring tools. Companies using chatbots for screening, interviewing, or scoring must be prepared to provide clear explanations of how their systems work, how they were assessed, and how users can challenge outcomes.
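Bias audits of this kind typically report selection rates and impact ratios by demographic category. A simplified sketch of the impact-ratio calculation, assuming selection rates have already been computed per category (the grouping categories and any small-sample exclusions are matters for the audit methodology, not this code):

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio per category: that category's selection rate divided
    by the selection rate of the most-selected category.

    A ratio well below 1.0 for a category is a signal for further review;
    thresholds and remediation are policy questions, not code.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}
```

For example, `impact_ratios({"group_a": 0.50, "group_b": 0.25})` yields 1.0 for `group_a` and 0.5 for `group_b`, flagging the disparity an auditor would examine.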
Implications for Product Teams
A recurring theme in these regulations is the need for proof. Companies will have to substantiate their policies with tangible evidence: documentation of what shipped, testing results, monitoring records, and responses to incidents. That means meticulously logging decisions, clearly assigning responsibilities, and keeping documentation that satisfies both customers and regulators.
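Producing that evidence is easier if compliance-relevant events are written as structured, timestamped records from day one. A minimal sketch of one such audit line follows; the field names and event types are illustrative, not drawn from any statute, and a real deployment would add tamper evidence (hash chaining or signing) and retention controls.

```python
import datetime
import hashlib
import json

def audit_record(event_type: str, session_id: str, detail: dict) -> str:
    """Build one structured audit line: timestamped, typed, and hashed.

    Field names here are illustrative placeholders, not statutory terms.
    """
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event_type,   # e.g. "ai_disclosure_shown", "self_harm_escalation"
        "session": session_id,
        "detail": detail,
    }
    body = json.dumps(entry, sort_keys=True)
    # Per-record digest; chaining digests across records would add
    # tamper evidence for an append-only log.
    entry["sha256"] = hashlib.sha256(body.encode()).hexdigest()
    return json.dumps(entry, sort_keys=True)
```

Writing such a line whenever a disclosure is shown or an escalation fires gives the post-incident paper trail these laws increasingly expect.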
Actionable Takeaways
- Conduct a chatbot inventory that is recognizable to regulators. Document what chatbots are in use, their purposes, and whether any features resemble “companion-like” attributes or influence high-impact decisions.
- Establish a disclosure standard now. Determine the content of the AI identity notice, its placement, and how repeat reminders will be managed during extended interactions.
- Design safety protocols for responding to self-harm cues and interactions with minors, clarifying ownership of escalation procedures and logging practices.
- Identify areas where chatbots influence outcomes. If chatbot interactions affect hiring, credit, housing, healthcare, or educational decisions, integrate governance into product design, emphasizing testing, monitoring, and review processes.
- Prepare for ongoing regulatory changes. Multistate regulations will evolve continuously; plan for feature flags, flexible disclosures, and a release process that accommodates new requirements without causing last-minute disruptions.
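The feature-flag point above can be sketched as a jurisdiction-keyed requirements table merged over a baseline. The entries below are hypothetical placeholders (loosely echoing the Colorado and NYC regimes mentioned earlier); the real obligations must come from counsel's reading of each law, and the table would be updated as statutes change.

```python
# Hypothetical per-jurisdiction requirements; real entries come from
# legal review of each statute, not from this sketch.
REQUIREMENTS: dict[str, dict[str, bool]] = {
    "default": {"ai_identity_notice": True},
    "CO": {"ai_identity_notice": True, "impact_assessment": True},
    "NYC": {"ai_identity_notice": True, "bias_audit_notice": True},
}

def required_features(jurisdiction: str) -> dict[str, bool]:
    """Merge the baseline with any jurisdiction-specific flags."""
    merged = dict(REQUIREMENTS["default"])
    merged.update(REQUIREMENTS.get(jurisdiction, {}))
    return merged
```

Keeping the table in config rather than scattered `if` statements lets a new state law ship as a data change behind a flag, without a last-minute release scramble.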
In conclusion, the landscape of state AI chatbot laws is rapidly evolving, and organizations must proactively adapt to these changes to mitigate risks and ensure compliance.