Key Compliance Changes for AI Companion Regulations in Washington and Oregon

New laws in Washington and Oregon regulating consumer-facing interactive AI companions will introduce expansive requirements for businesses operating in either state. Set to take effect on January 1, 2027, these statutes require operators to adopt heightened transparency measures, implement crisis detection protocols, and deploy enhanced safeguards for minors. Businesses should assess their AI chatbots and platforms for compliance readiness before the laws enter into force.

Key Takeaways

  • Washington and Oregon have enacted comprehensive laws regulating AI companions and imposing new compliance requirements on operators.
  • Both statutes require clear disclosures, crisis intervention protocols, and special safeguards for minors.
  • The laws establish private rights of action, significantly increasing litigation risk for businesses deploying interactive AI, with Oregon providing statutory damages of $1,000 per violation.
  • Companies must review and update their AI systems to ensure compliance before the January 1, 2027 effective date.

Background

Recent advances in generative and conversational AI technology have enabled the development of “AI companions,” systems capable of sustaining emotionally adaptive, human-like interactions with users. Legislatures in Oregon and Washington view these systems as posing serious risks, particularly for minors, including emotional dependency, manipulation, and exposure to inappropriate or harmful content.

The statutes target these risks with new disclosure requirements aimed at promoting transparency, user safety, and responsible innovation. These laws are part of a steady increase in AI regulation over the last few years. For instance, a similar law in California went into effect on January 1, 2026, creating a private right of action and regulating many of the same areas as Oregon and Washington.

Scope and Applicability

The new laws in Oregon and Washington apply broadly to operators, defined in both statutes as any person or entity that makes available or controls access to an AI companion or companion platform for users in the respective state. “AI companion” encompasses systems that use artificial intelligence or algorithms to simulate sustained human-like platonic, intimate, or romantic relationships, including through personalized dialogue and retention of user preferences across sessions.

Exemptions

Both laws contain specific exclusions. For example, software used solely for customer service, technical support, business operations, or productivity falls outside of both statutes. Narrowly tailored video game features are also generally beyond the statutes’ reach, provided they do not simulate ongoing personal relationships or generate responses on topics unrelated to their core functions, such as mental health. Both laws explicitly exclude a “stand-alone consumer electronic device that functions as a speaker and voice command interface,” but Washington limits that exclusion to devices that do not sustain a relationship across multiple interactions or generate outputs likely to elicit emotional responses from the user.

Disclosure Requirements

The Oregon and Washington statutes create different safeguards depending on the age of the user. For all users, both statutes require operators to provide “clear and conspicuous” disclosures that users are interacting with artificially generated output and not a human being. In Washington, operators must issue this notification at the start of every interaction and at least every three hours during ongoing use (every hour for minors or platforms directed to minors). Oregon applies a “reasonable person” standard, mandating disclosure if a user would believe they are interacting with a natural person.

For minors using AI companions, operators must implement measures to prevent the companions from making false claims of being human or sentient, simulating emotional dependence, or engaging in romantic or sexual innuendo with minors. While Washington’s law regarding minors is triggered only if an operator “knows” the user is a minor, Oregon’s law is broader, covering operators who know or have “reason to believe” a user is a minor.

In Oregon, additional requirements for minors include periodic reminders to take breaks and prohibitions on generating certain types of statements or visual content. For example, if a minor in Oregon indicates a desire to end the conversation, an AI chatbot cannot generate a message that “simulates emotional distress.” Similarly, Washington’s statute prohibits manipulative engagement techniques, including encouraging minors to withhold information from trusted adults.

Mental Health Detection and Crisis Response Protocols

Operators in both states must establish, implement, and publicly disclose protocols for detecting and responding to user expressions of suicidal ideation, suicidal intent, or self-harm before making AI companions available. These protocols must use evidence-based or reasonable methods to identify relevant inputs and provide referrals to crisis resources, including the national 9-8-8 suicide and crisis lifeline or, for minors, youth peer support lines.

Operators are required to prevent the generation of content that encourages or describes self-harm and publish annual reports detailing their crisis intervention protocol and the number of referrals made, excluding any personal user information. Oregon further mandates that operators employ clinical best practices for additional interventions if users continue to express suicidal ideation after receiving a referral.

Enforcement and Private Right of Action

Both statutes are enforceable through private rights of action, which increases litigation risk for operators. Oregon’s law allows individuals who suffer ascertainable loss or “other injury in fact” due to a violation to recover the greater of actual damages or $1,000 per violation, in addition to injunctive relief and attorney fees. The law does not define what constitutes a “violation,” creating potential for cumulative claims arising from each instance of noncompliance.

Washington’s law does not provide for statutory damages. Instead, it treats violations as unfair or deceptive acts under the Washington Consumer Protection Act, with the potential for actual damages, trebling of damages, injunctive relief, and fee-shifting.

As Compared to California’s Law

Businesses already complying with California’s AI companion law should not assume they will be in compliance with the Washington and Oregon laws. Some differences include:

  • Disclosure Requirements for Minors: California’s law requires AI operators to disclose that “companion chatbots may not be suitable for some minors” and, if an operator knows a user is a minor, disclose that the minor is interacting with a chatbot and recommend taking breaks every three hours. Unlike Washington and Oregon’s laws, California’s law does not require more robust prohibitions on manipulative engagement techniques employed by some chatbots.
  • Disclosure Requirements for All Users: California’s law uses a “reasonable person” standard to govern whether an AI operator must disclose that the chatbot is artificially generated and not human. Washington’s law goes further, mandating disclosure in all contexts.
  • Published Protocols: While all three laws require AI operators to annually report their protocols for addressing suicidal ideation by users, Washington and Oregon require those reports to be made accessible, at a minimum, on the AI operator’s website. California requires operators to send the report to the Office of Suicide Prevention.

Next Steps

The Washington and Oregon AI companion statutes mark a significant new chapter in the regulation of emotionally adaptive AI technologies offered to consumers. With expansive requirements, private enforcement mechanisms, and the potential for substantial statutory damages, these laws will have far-reaching implications for businesses deploying interactive AI.

Businesses should consider taking the following steps to prepare in advance of the January 1, 2027 effective date:

  • Review all consumer-facing AI systems to determine whether they may be classified as AI companions.
  • Evaluate existing disclosures and consider implementing new user disclosures to ensure compliance with notification requirements, especially for users known or likely to be minors.
  • Adopt public evidence-based protocols for detecting and responding to suicidal ideation or self-harm and prepare for annual reporting obligations.
  • Implement robust safeguards against the generation of inappropriate content, manipulative engagement tactics, or misleading representations of the AI’s identity or nature for any platforms that have users who are minors.
  • Monitor for further regulatory guidance or enforcement actions as interpretive clarification is likely to materialize over time.
  • Consult with counsel regarding compliance strategies and exposure.
