Oregon’s Landmark Chatbot Law: Understanding Your Compliance Risks

Oregon SB 1546: The First Chatbot Law With Real Teeth

If your company uses an AI-powered tool that remembers customer interactions, asks follow-up questions, and maintains personal conversations, Oregon’s new chatbot safety law may apply to you. SB 1546, which passed both legislative chambers with near-unanimous support on March 5, creates a private right of action with statutory damages of $1,000 per violation. If signed by Governor Kotek, the law takes effect on January 1, 2027.

Scope of SB 1546

The bill defines an “operator” broadly, covering anyone who “controls or makes available” a covered AI system in Oregon. This definition encompasses various sectors, including:

  • Healthcare systems using patient engagement chatbots
  • Universities deploying AI tutoring companions
  • Retailers with support portals following up on recent purchases

Oregon’s law stands out within a wave of state chatbot legislation due to its private right of action, making it the first with genuine enforcement capability.

Requirements Under SB 1546

The bill specifically targets “AI companions”, defined as:

[A system] that uses artificial intelligence, generative AI, or algorithms recognizing emotion to simulate a sustained, human-like relationship with a user.

Operators of these systems must adhere to several critical requirements:

  • Disclose AI involvement to users
  • Detect expressions of suicidal ideation and interrupt conversations for crisis referrals
  • File annual reports with the Oregon Health Authority

For minors, additional requirements include:

  • Hourly reminders that the user is interacting with an AI
  • Prohibition of sexually explicit content
  • Avoiding techniques that create emotional dependency

The unique requirement for mandatory conversation interruption differentiates Oregon’s law from California’s SB 243, which does not mandate active interruption.

Which Systems Qualify

An AI system qualifies as an “AI companion” if it meets a three-prong test:

  1. Retains information from prior interactions to personalize engagement
  2. Asks unsolicited questions regarding emotional topics
  3. Sustains ongoing dialogue about personal matters
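The three-prong test lends itself to a simple triage checklist when auditing a vendor stack. The sketch below is a self-audit heuristic, not a legal determination; the `ChatbotProfile` structure and its field names are hypothetical, invented here to mirror the three prongs.

```python
from dataclasses import dataclass


@dataclass
class ChatbotProfile:
    """Hypothetical self-audit profile for one deployed chatbot."""
    retains_prior_interactions: bool            # prong 1: memory-based personalization
    asks_unsolicited_emotional_questions: bool  # prong 2: probes emotional topics
    sustains_personal_dialogue: bool            # prong 3: ongoing personal conversation


def may_be_ai_companion(profile: ChatbotProfile) -> bool:
    """Flag a tool for legal review if it meets all three prongs.

    A triage heuristic only; counsel makes the actual call.
    """
    return (
        profile.retains_prior_interactions
        and profile.asks_unsolicited_emotional_questions
        and profile.sustains_personal_dialogue
    )
```

Under this checklist, a scheduling bot that a vendor update later taught to remember patients and ask wellness questions would flip from out-of-scope to flagged, which is exactly the drift scenario described below.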

The companies most likely to be caught off guard are not the AI vendors themselves but enterprises that integrated AI tools into customer-facing workflows without monitoring how those tools have evolved. For instance:

  • A patient portal chatbot initially designed for appointment scheduling may now ask wellness check-in questions.
  • A financial planning chatbot could shift from FAQ responses to discussing life goals.
  • An HR onboarding assistant may personalize interactions based on new hire experiences.

Basic customer service bots and general-purpose assistants do not fall under this law, but the boundary becomes blurred for tools that have evolved through vendor updates.

Litigation Exposure

The private right of action sets a low standard for standing and a high ceiling for damages. A person who suffers an ascertainable loss as a result of a violation can pursue damages of $1,000 per violation. The statute does not define “violation”, which creates substantial exposure: if each conversation session counts as a separate violation, liability multiplies quickly, and if each message within a session counts, a single user’s claim could reach tens of thousands of dollars.
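The arithmetic behind that range can be made concrete. The scenarios below are hypothetical (numbers of users, sessions, and messages are assumptions chosen only to illustrate how the undefined term “violation” drives the total):

```python
def estimated_exposure(claimants: int, violations_per_claimant: int,
                       statutory_damages: int = 1_000) -> int:
    """Statutory-damages exposure under one reading of 'violation'."""
    return claimants * violations_per_claimant * statutory_damages


# Single claimant, "violation" = one session: 12 sessions over a year
print(estimated_exposure(1, 12))    # $12,000
# Same claimant, "violation" = one message: ~25 messages per session
print(estimated_exposure(1, 300))   # $300,000
```

The same undefined term thus moves a single user’s claim across an order of magnitude, which is why the session-versus-message question will likely be the first issue litigated.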

Oregon’s law follows the structural template of Illinois’ BIPA, which has led to thousands of lawsuits and substantial settlements.

Recommended Actions for Companies

Companies should undertake the following actions to prepare for SB 1546:

  • Audit your vendor stack: Map every third-party AI tool against the three-prong definition, especially those that have recently added personalization features.
  • Review vendor contracts: Existing agreements may not address the new obligations; companies should confirm liability allocation for detection failures.
  • Check your insurance: Review tech E&O and cyber policies to ensure coverage for chatbot-related statutory damages.

Oregon’s legislation signals a broader trend, with 78 chatbot bills across 27 states in 2026. The introduction of a private right of action represents a critical shift, emphasizing the need for proactive compliance rather than reactive measures.
