
This post was originally published on the Oregon State Bar Technology Section blog.

States continue to race to keep up with each other and with technological innovations in privacy, data protection, and artificial intelligence.

In Oregon’s 2026 “short session,” legislators passed a targeted AI companion bill (SB 1546) promoted by a coalition of suicide prevention and mental health advocates. Despite being only three pages long, the bill packs a punch, regulating AI chatbots primarily as a consumer protection and public health issue, rather than through the more common privacy lens. The legislation reflects growing concern among policymakers about the psychological and safety risks posed by increasingly human-like conversational AI systems, particularly for minors.

The law goes into effect on January 1, 2027.

Overview of SB 1546

SB 1546 regulates operators of “artificial intelligence companions,” defined as AI systems that simulate a sustained human-like relationship or companionship with users and retain contextual information across interactions to personalize engagement.

The law imposes several new obligations on operators of AI chatbot platforms serving Oregon users.

1. Mandatory Disclosure That the User Is Interacting With AI

The statute requires operators to clearly notify users that they are interacting with AI whenever a reasonable person might believe they are speaking with a human.

For interactions with minors, the law goes further:

  • The chatbot must regularly remind the user that it is AI, not a real person.
  • The AI system cannot misrepresent itself or deceptively simulate a human relationship.

These requirements track the disclosure provisions of California’s SB 243 and Washington’s HB 2225.

2. Suicide and Self-Harm Safety Protocols

One of the most notable aspects of SB 1546 is its focus on mental health safeguards.

Operators must implement protocols for responding to users who express suicidal ideation or intent to self-harm, including providing those users with a referral to a suicide and crisis hotline.

AI systems must also be designed to avoid generating responses that could contribute to suicidal thoughts.

3. Enhanced Protections for Minors

When an operator knows or has reason to believe a user is under 18 years old, additional requirements apply. These provisions resemble elements of Kids Online Safety Acts, Age-Appropriate Design Codes, and other youth social media safety laws.

Key obligations include:

  • No sexually explicit content for minors.
  • A “take a break” prompt at least every three hours.
  • Restrictions on engagement-maximizing tactics, such as reward systems designed to prolong engagement (i.e., reward loops).
  • A ban on emotional manipulation tactics, such as simulated distress or abandonment, intended to prevent a user from ending the conversation.
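The timed obligations above (for example, a break prompt at least every three hours of continuous use) lend themselves to compliance-by-design. The following is a minimal sketch of how an operator might track when a break prompt is due for a minor's session; the statute mandates the outcome, not the mechanism, and all names here (`MinorSessionTracker`, `BREAK_INTERVAL_SECONDS`) are hypothetical rather than drawn from the law.

```python
from dataclasses import dataclass

# SB 1546 requires a "take a break" prompt at least every three hours
# for minor users; the interval below reflects that statutory outer limit.
BREAK_INTERVAL_SECONDS = 3 * 60 * 60


@dataclass
class MinorSessionTracker:
    """Tracks a minor's continuous session and flags when a break prompt is due."""

    session_start: float          # epoch seconds when the session began
    last_break_prompt: float = -1.0

    def __post_init__(self) -> None:
        # No prompt shown yet: measure from the start of the session.
        if self.last_break_prompt < 0:
            self.last_break_prompt = self.session_start

    def break_prompt_due(self, now: float) -> bool:
        """True once three hours have elapsed since the last break prompt."""
        return now - self.last_break_prompt >= BREAK_INTERVAL_SECONDS

    def record_break_prompt(self, now: float) -> None:
        """Reset the timer after the prompt is actually shown to the user."""
        self.last_break_prompt = now
```

An operator would check `break_prompt_due()` on each exchange and surface the prompt (and log that it was shown, which also supports the annual reporting discussed below) before resetting the timer. Prompting earlier than the three-hour ceiling remains permissible, since the statute sets a minimum frequency.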

4. Annual Reporting Requirements

SB 1546 adds public reporting requirements for transparency and external accountability. Operators must publish annual disclosures summarizing:

  • The number of times during the preceding year the operator provided a referral to a suicide and crisis hotline.
  • The operator’s intervention protocols.
  • How clinical best practices inform ongoing engagement when users continue expressing suicidal ideation or intent to self-harm after receiving a referral.

Oregon and Washington modified California’s SB 243 to require AI chatbot operators to publicly report this information online, rather than report to a state regulatory agency.

5. Private Right of Action

Unlike the Oregon Consumer Privacy Act and the Unlawful Trade Practices Act (Oregon’s consumer protection statute), SB 1546:

  • Includes no Attorney General enforcement.
  • Provides only a private right of action.

A user who suffers an “ascertainable loss of money or property or other injury in fact” due to a violation of the law can bring a civil action for actual damages or statutory damages of $1,000 per violation, injunctive relief, and attorney fees.

Why SB 1546 Matters for AI Design, Safety, and Compliance

Oregon is not standing idly by while other states propose AI chatbot and internet safety laws for minors. Legislators quickly passed this law to address mental health safeguards, youth protection, and transparency in AI systems. As operators prepare for the law’s 2027 effective date, they will need to reassess design practices, engagement strategies, and safety protocols to ensure compliance.

If you have any questions about your data privacy practices, please contact me or a member of our Privacy & Data Security team.

This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.