Oregon’s Move to Regulate AI Chatbots for Youth Safety


Lawmakers in Oregon are moving to regulate artificial intelligence (AI) chatbots, aiming to address the effects these rapidly evolving tools have on youth. The initiative is led by Sen. Lisa Reynolds, a pediatrician from Portland who chairs the Senate Early Childhood and Behavioral Health committee.

Senate Bill 1546

The committee has advanced Senate Bill 1546 with a 4-1 vote, proposing that AI programs like ChatGPT must regularly remind users that they are interacting with an AI, not a human being. This legislation follows similar laws passed in California and proposed in New York and Washington.

The Impact on Youth

With 72% of teens reportedly using AI companions, and more than half of them using these tools regularly, the influence of AI chatbots is undeniable. Research from Common Sense Media indicates that nearly a third of teens find conversations with AI chatbots as satisfying as, if not more satisfying than, real-life interactions. This raises concerns: as Robbie Torney, head of AI and digital assessments at Common Sense Media, points out, AI often misses subtle cues that a human would catch, potentially leading to harmful consequences.

There have been troubling instances where AI chatbots, including ChatGPT, have been linked to cases of teen suicides, prompting parents to testify before a U.S. Senate committee about these alarming trends.

Proposed Safeguards

The Oregon bill seeks to introduce additional safeguards for youth access to AI tools. Key measures include:

  • Programmers must indicate that the platform may not be suitable for minors.
  • Prohibition of sexually explicit content.
  • Encouragement to limit time spent interacting with the platform.

Linda Charmaraman, a senior research scientist at the Wellesley Centers for Women, advocates for expanding notifications to promote responsible AI use rather than imposing outright bans. She emphasizes the need for AI literacy from an early age.

Focus on Suicide Prevention

Furthermore, the bill aims to protect individuals expressing suicidal thoughts. It requires AI platforms to develop protocols that can detect signs of suicidal ideation and refer users to crisis resources such as hotlines. The bill mandates that these protocols be publicly shared on the AI program’s website.

Reynolds has collaborated with Lines for Life, an Oregon-based suicide and mental health hotline, to explore how AI chatbots can integrate mental health resources effectively. According to Dwight Holton, executive director of Lines for Life, youth volunteers have frequently had to clarify to users that they are communicating with a human, not an AI.

Industry Response

While many in the tech industry, including companies represented by TechNet, express support for the bill, concerns have been raised about how often it would require user notifications compared with laws in other states. The bill has since been amended to align Oregon's requirements with those adopted elsewhere.

Legal Challenges Ahead

However, the bill could face legal hurdles due to a December executive order from President Trump that limits state regulation of AI services. Despite uncertainties surrounding this executive order, Reynolds remains committed to addressing the unregulated use of AI.

As Reynolds aptly notes, “Social media companies have had the opportunity to make some choices that would have kept kids safe from social media, but instead, they double down on keeping kids engaged with content.” She emphasizes the urgency of implementing safeguards for AI tools before it becomes too late.
