
NY Bill Would Regulate AI Legal and Medical Advice

S7263, now approaching a full Senate vote, would create a private right of action when chatbots cross into the territory of licensed professions.

By Negotiate the Future

3/6/26

A New York measure that would impose civil liability on chatbot operators whose systems dispense legal, medical, or other professional guidance has reached the Senate floor.

Senate Bill 7263 cleared the Internet and Technology Committee on a 6-0 vote. Sen. Kristen Gonzalez, who chairs the committee and sponsored the legislation, was joined by co-sponsors Sens. Michelle Hinchey, John C. Liu, and Julia Salazar. The bill advanced to third reading this week; an Assembly companion, A6545, remains in the Consumer Affairs Committee.

The bill would add section 390-f to the General Business Law. Its prohibition is broad. Chatbot operators could not allow their systems to provide any “substantive response, information, or advice” that would constitute a crime under existing licensing statutes if delivered by an unlicensed human.

Fourteen professions are covered: law, medicine, dentistry, pharmacy, nursing, engineering, architecture, veterinary medicine, physical therapy, optometry, podiatry, psychology, social work, and mental health counseling.

Liability falls on deployers, not model developers. A hospital that builds a patient-facing triage tool on a licensed large language model carries the legal risk; the company that built the underlying model is explicitly exempted.

The distinction shapes who gets sued. OpenAI would be liable for ChatGPT, a product it operates directly. It would not be liable for a third party’s diagnostic tool running on its API.

Users who suffer harm may bring a civil action for actual damages. Where a court finds a willful violation, the deployer also pays the plaintiff’s attorney’s fees, costs, and disbursements. Fee-shifting changes the economics of litigation: it makes lower-value cases viable, because the plaintiff’s lawyer can be compensated by the defendant.

The bill requires operators to disclose, clearly and conspicuously, that users are interacting with an AI system. The notice must appear in the user’s language and in a readable font size. Disclosure, however, does not shield operators from liability; the bill states this explicitly.

The bill’s operative phrase is not defined.

“Substantive response, information, or advice” appears nowhere in the Education Law or Judiciary Law provisions the bill references. The line between general legal information, which anyone may share, and legal advice, which requires a license, has occupied courts and bar associations for decades. The bill does not resolve that distinction or acknowledge it.

New York already bars unlicensed humans from practicing regulated professions. No existing statute extends that prohibition to AI systems. Gonzalez has framed the bill as closing that gap. The New York State Bar Association’s 2024 Task Force report on AI addressed attorneys’ ethical obligations when using generative tools but stopped short of treating AI-generated output as unauthorized practice.

Taylor Barkley of the Abundance Institute, writing in Reason, called the bill’s approach protectionist. The populations most likely to benefit from AI-assisted professional guidance, Barkley noted, are those who cannot afford licensed professionals. For a tenant contesting an eviction or a parent navigating a custody filing, the practical choice is not between AI advice and a lawyer but between AI advice and nothing.

No public record indicates which AI companies have lobbied for or against the measure, or whether legal aid and public interest law organizations were consulted during drafting.

If the Senate passes S7263, the bill must still clear the Assembly and reach the Governor’s desk. Should it become law, deployers would have 90 days before enforcement begins.
