Oregon Looks to Regulate AI Chatbots
Lawmakers in Oregon are moving to regulate the rapidly evolving technology of artificial intelligence (AI) chatbots, aiming to address the effects these tools have on young people. The effort is led by Sen. Lisa Reynolds, a Portland pediatrician who chairs the Senate Early Childhood and Behavioral Health Committee.
Senate Bill 1546
The committee advanced Senate Bill 1546 on a 4-1 vote. The bill would require AI programs such as ChatGPT to regularly remind users that they are interacting with an AI, not a human being. The legislation follows a similar law passed in California and bills proposed in New York and Washington.
The Impact on Youth
With 72% of teens reportedly using AI companions, and more than half using them regularly, the influence of AI chatbots is undeniable. Research from Common Sense Media indicates that nearly a third of teens find conversations with AI chatbots as satisfying as, or more satisfying than, real-life interactions. That raises concerns: Robbie Torney, head of AI and digital assessments at Common Sense Media, points out that AI often misses subtle cues a human would catch, potentially leading to harmful consequences.
In troubling instances, AI chatbots, including ChatGPT, have been linked to teen suicides, and parents have testified before a U.S. Senate committee about these cases.
Proposed Safeguards
The Oregon bill seeks to introduce additional safeguards for youth access to AI tools. Key measures include:
- Developers must disclose that the platform may not be suitable for minors.
- Sexually explicit content is prohibited.
- Users are encouraged to limit time spent interacting with the platform.
Linda Charmaraman, a senior research scientist at the Wellesley Centers for Women, advocates expanding such notifications to promote responsible AI use rather than imposing outright bans, and emphasizes the need for AI literacy from an early age.
Focus on Suicide Prevention
The bill also aims to protect people expressing suicidal thoughts. It requires AI platforms to develop protocols that detect signs of suicidal ideation and refer users to crisis resources such as hotlines, and it mandates that these protocols be published on the AI program's website.
Reynolds has collaborated with Lines for Life, an Oregon-based suicide and mental health hotline, to explore how AI chatbots can integrate mental health resources effectively. According to Dwight Holton, executive director of Lines for Life, youth volunteers have frequently had to clarify to users that they are communicating with a human, not an AI.
Industry Response
Much of the tech industry, including companies represented by TechNet, supports the bill, but some raised concerns that its notification-frequency requirements differed from those in other states. Amendments have since aligned Oregon's requirements with those in other regions.
Legal Challenges Ahead
The bill could face legal hurdles because of a December executive order from President Trump that limits state regulation of AI services. Despite the uncertainty surrounding that order, Reynolds remains committed to addressing the unregulated use of AI.
As Reynolds notes, “Social media companies have had the opportunity to make some choices that would have kept kids safe from social media, but instead, they double down on keeping kids engaged with content.” She stresses the urgency of putting safeguards on AI tools before it is too late.