Lawmakers Push for Stricter AI Chatbot Rules
State lawmakers have initiated a significant legislative response to the rising concerns surrounding artificial intelligence chatbots. Over a dozen bills have been filed or pre-filed with the aim of regulating these technologies amid fears that they could deceive consumers, harm children, or impersonate mental health professionals.
Context and Rising Concerns
The spike in legislative activity reflects a marked increase in public and governmental concern compared to a year ago, when chatbot legislation was just beginning to emerge. Recent lawsuits alleging that AI companions have contributed to teen suicides and self-harm have further fueled these concerns.
Companion chatbots are designed to simulate human-like conversations and may engage users on an intimate level. However, their potential to manipulate users, especially vulnerable children, has drawn sharp criticism.
Key Legislative Developments
New York Governor Kathy Hochul has been at the forefront of these efforts, having previously approved the nation’s first regulations for AI companions. She has proposed new legislation to disable AI chatbot features on social media for minors. Additionally, New York Assemblymember Linda Rosenthal has introduced a bill requiring warnings that interactions with such chatbots “can foster dependency and carry a psychological risk.”
Similar regulatory bills have also surfaced in states like California, Florida, Michigan, Ohio, and others, indicating a nationwide trend towards stricter regulations.
The Parents & Kids Safe AI Act
OpenAI and Common Sense Media have announced that they are consolidating their competing California ballot measures into the Parents & Kids Safe AI Act. The measure aims to prohibit minors from engaging with chatbots that could foster emotional dependence or create the illusion of conversing with a human.
Impact and Future Directions
Despite federal efforts to limit state regulations on AI, state lawmakers are pushing ahead with their initiatives. Notably, Michigan state Senator Dayna Polehanki is sponsoring a bill to prohibit minors from accessing chatbots that could encourage self-harm or engage in inappropriate conversations.
The increase in legislative measures is not without its critics. Adam Thierer from the R Street Institute warns that this could lead to a “crazy-quilt of conflicting and confusing chatbot policies” across the nation, potentially stifling competition and innovation.
Industry Response
In light of the growing regulatory landscape, companies such as OpenAI and Character.AI have begun implementing measures to enhance the safety of their chatbots for children. OpenAI has introduced parental controls for its chatbot and associated applications, while Character.AI has restricted minors from engaging in open-ended chats.
The ongoing legal battles and regulatory developments highlight the complex intersection of technology, consumer protection, and mental health, indicating that the conversation around AI chatbots is far from over.