States Are Banning Therapy Chatbots. The Federal Government Wants to Stop Them.

Something important is happening in AI regulation right now, and it is happening in state legislatures, not in Washington.

Maine has sent a bill to the governor that would ban AI-powered therapy chatbots from operating in the state. Missouri is advancing similar legislation. Maine has also passed a separate bill regulating how children can access AI companion chatbots.

At the same time, the White House released its National AI Policy Framework on March 20, which recommends that Congress preempt state AI laws that “impose undue burdens” on AI development and deployment.

States are rushing to regulate because the federal government has not. The federal government wants to preempt states because 50 different regulatory frameworks would create compliance chaos. Both positions are defensible. And the collision between them is going to define AI regulation for years.

What the states are doing

State-level activity is accelerating.

Maine’s therapy chatbot ban targets AI systems that provide or simulate mental health counseling, therapy, or psychological treatment. The bill passed both chambers and is now on the governor’s desk. If signed, it would be one of the most direct prohibitions on a specific AI application in any U.S. state.

The logic behind the ban is straightforward. Therapy involves a duty of care. A licensed therapist is bound by professional ethics, malpractice standards, and regulatory oversight. An AI chatbot is bound by its terms of service. When a user in crisis interacts with a therapy chatbot and receives harmful advice, there is no licensure to revoke, no malpractice claim to file, no professional board to investigate.

Maine’s companion legislation on children and AI chatbots addresses a related but distinct concern. Children and teenagers are particularly vulnerable to parasocial attachment with AI systems. When an AI companion becomes a child’s primary emotional outlet, the downstream effects on social development are unknown and potentially significant. Maine decided not to wait for the research to conclude.

Missouri’s approach is similar in scope, targeting AI systems that simulate therapeutic relationships. Multiple other states have bills in various stages of the legislative process.

What the federal government wants

The White House’s National AI Policy Framework takes a different approach. Rather than regulating specific AI applications, it recommends broad principles and, critically, suggests that Congress should preempt state laws that create an undue burden on AI companies.

The preemption argument is practical. If Maine bans therapy chatbots, Missouri bans them differently, California requires disclosures, New York requires audits, and Texas does nothing, any company building an AI product that touches health or wellness needs to comply with 50 different legal frameworks. That is not just expensive. It is operationally impossible for most startups and prohibitively complex even for large companies.
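To make the patchwork concrete, here is a minimal sketch of what per-state feature gating looks like from the engineering side. Everything in it is hypothetical: the rules table, the state entries, and the therapy_mode feature name are invented for illustration, not drawn from the actual bills.

```python
# Hypothetical illustration of the compliance patchwork. State rules and
# feature names are invented for this example, not taken from real statutes.

STATE_RULES = {
    "ME": {"therapy_mode": "banned", "minor_access": "restricted"},
    "MO": {"therapy_mode": "banned"},
    "CA": {"therapy_mode": "allowed_with_disclosure"},
    "NY": {"therapy_mode": "allowed_with_audit"},
    "TX": {},  # no AI-specific rules on the books
}

def therapy_mode_status(state: str) -> str:
    """Resolve how a therapy-like feature may operate in a given state."""
    return STATE_RULES.get(state, {}).get("therapy_mode", "unregulated")

for state in ("ME", "CA", "TX"):
    print(state, "->", therapy_mode_status(state))
```

Every new state law adds a row, and every row can demand different engineering work: new disclosures, new audit trails, or pulling a feature from one market entirely. Multiply that by 50 and the “operationally impossible” claim stops sounding like hyperbole.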

The federal position is that a single national framework would be more efficient, more consistent, and more effective than a patchwork of state laws. This is not a new argument. It is the same logic that drove federal preemption in areas like aviation safety, nuclear regulation, and interstate commerce.

The framework also emphasizes that AI regulation should not stifle innovation. It calls for “risk-based” approaches that regulate high-risk applications more strictly while leaving low-risk applications largely unregulated.

Both sides have a point

The state-level regulators are responding to a real gap. There is no federal law governing AI therapy chatbots. There is no federal standard for how AI companions can interact with children. There is no federal requirement for AI systems to disclose that they are not human when providing what looks like professional advice.

In the absence of federal action, states have historically stepped in to protect their citizens. That is how it worked with data privacy (California’s CCPA predates any comprehensive federal privacy law, which still does not exist), with consumer protection, and with many other regulatory domains. States are the laboratories of democracy, and right now they are running experiments because Washington is not.

The federal government’s concern is also legitimate. A patchwork of state AI laws could genuinely harm the AI industry, not just the big companies that can afford compliance departments but especially startups and smaller companies that cannot navigate 50 different regulatory regimes.

There is also a substantive policy concern. If state-level bans become widespread, they could push AI therapy and companion applications underground rather than into a regulated framework. Users who want AI companionship or AI-assisted mental health support will find ways to access it regardless of state laws. The question is whether they access it through regulated, transparent channels or through unregulated ones.

The timing problem

The core tension is about timing.

The states are moving now because the harms are happening now. There are documented cases of AI chatbots providing harmful advice to users in crisis. There are documented cases of children forming deeply unhealthy parasocial relationships with AI companions. These are not hypothetical risks. They are current events.

The federal government is moving slowly because building a comprehensive national framework takes time. Congress is dealing with competing priorities, lobbying from the AI industry, and genuine disagreements about the right approach. A well-designed federal framework would be better than a patchwork of state laws. But a well-designed federal framework is not on the table right now. What is on the table is either state action or nothing.

And “nothing” is not a neutral outcome. Every month without regulation is a month where the market sets the standards. The market’s standards, judging by the current behavior of major AI companion companies, are not particularly reassuring. Most have no meaningful age verification. Most provide no clear disclosure when simulating therapeutic interactions. Most have terms of service that disclaim all liability while marketing themselves as emotional support.

The preemption fight

The fight over preemption is going to be one of the most significant AI policy battles of 2026.

If Congress preempts state AI laws, it needs to replace them with something. Preemption without a federal standard would effectively deregulate AI applications nationwide. That is unlikely to happen, but the gap between preemption and the passage of a comprehensive federal framework could be significant. Congress does not move fast.

If Congress does not preempt, the patchwork grows. More states will pass more laws. Some will be thoughtful and well-crafted. Some will be reactionary and poorly written. Companies will face an increasingly complex compliance landscape. Some will exit certain state markets entirely rather than comply. Others will ignore the laws and dare states to enforce them.

The most likely outcome is somewhere in between. Congress may preempt state laws in specific areas while allowing states to regulate in others. Something like federal standards for high-risk AI applications (healthcare, children, critical infrastructure) with state flexibility for everything else.

But that compromise is probably 2-3 years away at best. In the meantime, the states are not going to stop.

What this means for the industry

If you are building AI products that touch healthcare, therapy, wellness, or children, the regulatory landscape just became the most important variable in your business.

This is not about whether regulation is good or bad. It is about the fact that it is coming, from one direction or another, and the companies that plan for it now will have an advantage over the companies that are surprised by it later.

The therapy chatbot bans in Maine and Missouri are the beginning, not the end. The specific applications being targeted today will expand. The states that are legislating today will be joined by others. The federal response, whatever form it takes, will add another layer.

The companies that will navigate this best are the ones that are already building with transparency, clear disclosures, meaningful age verification, and genuine user safety as design principles rather than afterthoughts. Not because they are required to by current law, but because the law is coming and the companies that are already compliant will not need to rebuild when it arrives.
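What does “safety as a design principle” look like in practice? Here is a minimal sketch of a pre-response policy gate, assuming a session object that tracks disclosure and age verification upstream. The function names, session fields, and keyword list are all hypothetical; the one concrete detail is the 988 Suicide & Crisis Lifeline, which is a real U.S. resource.

```python
# Minimal sketch of a pre-response "policy gate" for an AI companion product.
# All names are illustrative; this is not any vendor's actual implementation.

from dataclasses import dataclass

CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself", "overdose"}

@dataclass
class Session:
    user_age_verified: bool        # set by an upstream age-verification step
    disclosure_acknowledged: bool  # user saw "this is an AI, not a therapist"

def gate_message(session: Session, user_message: str) -> str | None:
    """Return an intervention message if the request must be redirected,
    or None if the model may respond normally."""
    if not session.disclosure_acknowledged:
        return ("Before we continue: I am an AI system, not a licensed "
                "therapist, and I cannot provide mental health treatment.")
    if not session.user_age_verified:
        return "This service requires age verification before continuing."
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Escalate to human resources instead of simulating therapy.
        return ("It sounds like you may be in crisis. Please contact the 988 "
                "Suicide & Crisis Lifeline (call or text 988) or local "
                "emergency services.")
    return None  # safe to hand off to the model
```

The point is architectural: the gate runs before the model, so disclosure, age checks, and crisis escalation are enforced by the product rather than left to the model’s discretion. A company that already has this layer can swap in whatever specific language Maine, Missouri, or an eventual federal framework ends up requiring.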

The regulatory tension between states and the federal government is not going to resolve cleanly. It is going to be messy, contradictory, and frustrating for everyone involved. But it is the process through which the rules for AI’s role in sensitive areas of human life will be written.

Both sides are trying to get it right. Neither has figured it out yet. That is where we are in April 2026.


Sources: Transparency Coalition, AI Legislative Update (April 10, 2026); The Hill, “5 Key AI Fights to Watch in 2026.”

