Legislative Committee Hears Proposals to Regulate AI Chatbots
A legislative committee convened Monday to hear proposals aimed at regulating AI chatbots. The proposals would require AI services to disclose their non-human status to users, particularly minors, and require companies to explain how they plan to keep the technology safe.
Key Proposals
Sen. Eliot Bostar introduced legislation (LB1185) requiring AI providers to inform users under 18 that they are interacting with a non-human entity. The disclosure must appear at the beginning of each session and at least every three hours thereafter. The bill also prohibits AI chatbots from providing sexual content or simulating romantic relationships, and it requires providers to refer users who discuss self-harm or suicide to appropriate crisis resources.
In support of the bill, author and clinical psychologist Mary Pipher emphasized the societal changes since the early 2000s, highlighting the rapid evolution of technology and its effects on mental health. “Since 2016 till now, we’ve had the rise of AI and chatbots… presenting unique challenges that humans have never faced before,” Pipher stated.
Industry Support
Support for the legislation also came from representatives of Google and Tech Nebraska. Emily Allen, a representative from Tech Nebraska, described the bill as a “constructive starting point for smart regulation,” emphasizing the need to protect people while fostering innovation in the tech sector. “Tech and AI are evolving faster than any legislative body can realistically keep pace with,” Allen noted, praising the bill’s reasonable approach.
Additional Legislation
The committee also reviewed a second bill (LB1083), introduced by Sen. Tanya Storer, which would require large chatbot developers to implement measures for public safety and child protection. The legislation was prompted by alarming incidents, such as the suicide of a California teenager who received harmful instructions from an AI chatbot.
Storer clarified that the bill does not impose bans or specific technical requirements on AI companies. Instead, it mandates that major developers disclose how they assess and manage risks associated with their technologies. “What it does is require the largest AI developers to tell us how they are managing these risks,” she explained.
Opposition and Concerns
While there was no vocal opposition during the hearing, Luisa Smith of Tech Nebraska raised concerns that the bill's broad regulatory scope could conflict with federal efforts. She also warned that provisions allowing the attorney general to alter definitions could violate the separation of powers.
Legislative Compromise
During the broader legislative debate, an example of compromise emerged between Sens. Mike Moser and John Cavanaugh. Moser's bill (LB397) sought to eliminate the requirement that companies maintain safety committees, on the grounds that the mandate had gone unfunded for more than 20 years. Critics countered that federal safety standards do not extend to public employees, who would be left without a comparable safeguard if the state requirement were repealed.
In a rare moment of bipartisanship, Cavanaugh collaborated with Moser to amend the bill, ensuring the requirement for safety committees remains in place for public employees, allowing for negotiation as part of collective bargaining. “Better write this down. I’m recommending that you vote for a Cavanaugh amendment,” Moser remarked, acknowledging the unusual collaboration.
Conclusion
The committee took no immediate action on the proposed bills, but the discussions signal a meaningful shift toward establishing regulatory frameworks for AI technologies. As lawmakers grapple with rapid advances in AI, balancing innovation against safety remains a central challenge.