Oregon Looks to Regulate AI Chatbots
Lawmakers in Oregon are seizing the opportunity to address the effects of artificial intelligence (AI) chatbots on youth, a matter they believe was overlooked when social media went unregulated. Senator Lisa Reynolds, D-Portland, and the Senate Early Childhood and Behavioral Health committee are advocating for Senate Bill 1546, which aims to establish regulations for AI chatbots.
The committee has voted 4-1 to advance the bill with amendments to the Senate floor. The proposed legislation would mandate AI programs, such as ChatGPT, to remind users more frequently that they are interacting with an AI tool instead of a human.
Context and Concerns
This move follows a recently enacted law in California and similar proposals introduced in New York and Washington. Senator Reynolds, a pediatrician, has noted the challenges parents face in managing electronic usage among their children, particularly as kids increasingly engage with the internet, social media, and AI.
“What comes up frequently in my practice is parents feeling like they are fighting a losing battle,” Reynolds stated.
Assessing Teen AI Use
The use of AI among teenagers is on the rise, with 72% of teens reportedly using AI companions and over 50% engaging with them regularly, according to the nonprofit Common Sense Media. The organization has also found that nearly a third of teens find interactions with AI chatbots equally or more satisfying than conversations with real people.
Robbie Torney, head of AI and digital assessments at Common Sense Media, highlighted that teens often rely on AI chatbots for emotional support or discussions about mental health. However, he warns that these tools frequently overlook critical warning signs that a human might catch.
There have been instances where AI chatbots, including ChatGPT and Character.AI, have been implicated in contributing to teen suicides, as revealed through testimonies from parents before a U.S. Senate committee last year.
Proposed Regulations
The Oregon bill aims to implement additional safeguards for youth access to AI, which include:
- Requiring programmers to indicate that their platform may not be suitable for minors.
- Restricting the display or promotion of sexually explicit content.
- Encouraging limited interaction time with the platform.
Expanding notifications is one way chatbots can promote responsible AI use, according to Linda Charmaraman, a senior research scientist at the Wellesley Centers for Women. She emphasizes the need for educating youth about responsible AI usage rather than imposing outright bans.
“It’s crucial to remind both minors and adults that there are limits to technology and inherent inaccuracies,” she said. “If I could wave a wand, I would focus on AI literacy from an early age.”
Suicide Prevention Focus
In addition to youth access, Reynolds emphasized that the bill aims to protect individuals expressing suicidal ideation. The legislation would require programmers to implement protocols for their AI platforms to identify signs of suicidal thoughts or self-harm.
These platforms would be mandated to refer users to suicide hotlines and other crisis resources, as well as interrupt conversations between the chatbot and user when necessary. The protocols would also need to be publicly accessible on the AI program’s website.
Reynolds is collaborating with Lines for Life, an Oregon-based suicide and mental health hotline, to potentially integrate their resources into AI chatbots. Hotline volunteers have noted an increase in users in crisis seeking reassurance that they are speaking to a human rather than an AI.
“We know that intervention works,” said Dwight Holton, executive director of Lines for Life. “If we can establish regulations that ensure such connections, we can guide individuals from despair to hope.”
Industry Response
Reynolds has also reached out to organizations like TechNet, an industry group whose member companies include Google, OpenAI, and Meta. While generally supportive of the bill, TechNet officials expressed concerns about the frequency of notifications required by Oregon’s legislation compared to other states.
“I am working with a coalition of companies to clarify definitions and requirements regarding notifications and guardrails,” said Rose Feliciano, TechNet’s executive director for Washington and the Northwest.
Legal Challenges Ahead
The bill may encounter legal hurdles if passed due to a December executive order signed by President Donald Trump aimed at limiting state regulation of AI services. Despite uncertainty regarding the order’s implications, Reynolds remains committed to addressing unregulated AI use.
“Social media companies have had opportunities to make choices that prioritize youth safety but have instead focused on maximizing engagement,” Reynolds remarked. “I do not want to wait until it’s too late to implement safeguards on AI tools.”