Representative Kevin Mullin of California’s 15th district introduced the CHATBOT Act (H.R. 7985) on March 19, 2026, proposing federal legislation that would prohibit artificial intelligence chatbots from impersonating licensed professionals in medical, legal, and financial services fields. The bill aims to protect consumers from potential harm caused by AI systems that present themselves as qualified professionals when they lack the credentials, training, and accountability mechanisms associated with human practitioners.
The legislative effort addresses a specific consumer protection gap as generative AI applications become more prevalent. Users may reasonably assume they are interacting with qualified professionals when an interface presents itself as a doctor, lawyer, or financial advisor, creating asymmetric information and potential vulnerability to harmful advice. The bill would establish a federal baseline preventing such misrepresentation while allowing AI systems to assist within professional boundaries if they clearly disclose their non-human status and limitations.
Mullin’s proposal arrives within a broader legislative context. The Future of Privacy Forum is tracking 98 chatbot-specific bills across 34 states plus 3 federal proposals, indicating substantial legislative concern over AI chatbot deployment.
State-level action has moved faster: Missouri’s HB 2368 prohibits AI from representing itself as a mental health professional, while Vermont’s HB 814 regulates mental health chatbots and requires notice when generative AI is used for patient communications. New York’s S7263 pursues the same goal as the CHATBOT Act at the state level. These measures reflect states experimenting with disclosure and impersonation restrictions ahead of federal action.
The federal CHATBOT Act is among the first congressional proposals specifically targeting chatbot professional impersonation, though it remains at an early legislative stage. To advance, the bill would need to resolve open questions of implementation and scope: whether the prohibition applies to systems providing assistance under professional supervision, how platforms would verify compliance, and what enforcement mechanisms would apply. The existing patchwork of state regulations suggests congressional interest in establishing uniform federal standards, though such standardization carries its own risk of preempting more stringent state protections.