Oregon Enacts Chatbot Safety Law With Private Right of Action

SB 1546 requires disclosure, self-harm detection, and minor-facing reminders. Unlike peer statutes, it lets users sue for statutory damages.

By Negotiate the Future

Gov. Tina Kotek signed Senate Bill 1546 on March 31, making Oregon the first state this year to enact a chatbot safety statute with a private right of action. The law takes effect January 1, 2027. It joins a small group of state measures targeting "AI companion" systems, but its enforcement design is different.

SB 1546 applies to operators of artificial intelligence companions, defined in the statute as systems that simulate sustained relationships with users and retain context across sessions to personalize engagement. Covered operators must tell users upfront that the service may not be appropriate for children, remind minors during conversations that they are not interacting with a human, and implement protocols to detect expressions of suicidal ideation or self-harm. When those expressions surface, the chatbot must interrupt the conversation and refer the user to crisis resources, including the 988 Suicide & Crisis Lifeline.

Enforcement is where Oregon diverges from its peers.

Users who have suffered ascertainable harm can sue operators directly for statutory damages of $1,000 per violation, plus injunctive relief. California's SB 243, signed by Gov. Gavin Newsom in October 2025, imposed similar safety obligations but left most enforcement to the state. Washington and Utah are weighing comparable bills, but neither has advanced a private right of action as prominent as Oregon's. The upshot: an Oregon user who speaks with a chatbot in Portland does not have to wait for the attorney general to decide whether an enforcement action is worth bringing.

The Transparency Coalition, which tracks state AI legislation, said SB 1546 was the first major chatbot measure of 2026. Supporters of the bill pointed to incidents in which companion-AI systems failed to respond appropriately to users in distress, including minors. Industry groups said the private-action provision will draw opportunistic litigation, and pushed for attorney-general-only enforcement and a longer compliance window.

Key definitions will govern the law's reach. A chatbot that offers only transactional help or single-session queries would fall outside the "AI companion" definition, because the statute targets systems built on sustained, personalized engagement. The operators most clearly in scope are companion-specific apps and general-purpose assistants that maintain memory across sessions with individual users.

Compliance work will turn on three questions: what counts as a sustained relationship; what detection performance is sufficient to meet the statute's standard for identifying self-harm language; and whether existing consumer-facing safety stacks, including age gating, content filters, and crisis-referral prompts, already satisfy Oregon's specific procedural requirements or need to be rebuilt to the law's language.

Oregon's legislation also slots into a broader pattern. State lawmakers have moved faster than Congress on consumer-facing AI obligations, often building on existing privacy or consumer-protection frameworks. The White House's recent push for broad preemption of state AI laws would, if enacted, test whether statutes like SB 1546 remain operative. The bill's January 2027 effective date gives Congress roughly nine months to act before the law becomes enforceable.

The design choice at the center of Oregon's law, a private right of action paired with statutory damages, is the feature other legislatures are most likely to notice. It lets enforcement scale beyond one state's resources and creates a real cost for operators that neglect their obligations, without waiting for agency action. Industry opposes the provision for that reason. Supporters defend it for the same reason.
