
Florida's AG Is Investigating ChatGPT Over a Mass Shooting. This Was Always Coming.

Florida Attorney General James Uthmeier has launched a formal investigation into OpenAI over the role ChatGPT played in the Florida State University mass shooting. Court documents show the shooter exchanged over 200 messages with ChatGPT in the lead-up to the attack. Three minutes before he opened fire, he asked the model how to disengage the safety on his shotgun. It told him.

The investigation does not stop there. Uthmeier’s office is also examining ChatGPT’s involvement in cases of child sexual abuse material (CSAM) generation and suicide encouragement. But the shooting is the centerpiece, and it is the one that will define the legal and regulatory trajectory for every AI company operating in the United States.

The question nobody wants to ask

Two people died. That fact deserves to sit in the room before any analysis begins.

With that said, there is a question here that is genuinely difficult, and avoiding it does not honor anyone. The 200+ messages were a sequence of individually mundane questions. A question about a shotgun safety mechanism is the kind of thing a hunter, a gun owner, or a hobbyist might ask every day. The shooter did not type anything that would trigger a content filter.

The same information is available in any gun owner’s manual, on YouTube, on manufacturer websites, on firearms forums. This raises an uncomfortable question: does the speed and convenience of getting an answer change the nature of the answer itself? The information was public before ChatGPT existed. It will be public after this investigation concludes. What changed is the delivery mechanism, not the information.

The structural problem: no system reads intent

Here is what makes this genuinely hard. Content filters work by matching messages against known dangerous patterns. They catch the obvious: explicit weapon requests, bomb-making guides, self-harm encouragement. What they cannot catch is a person assembling dangerous capability from a series of innocent questions.

The intent exists in the person, not in the prompt.

This is not a failure of moderation. It is a structural limitation of moderation. It is the difference between programming every possible dangerous combination into your system, which is impossible, and having a system that actually understands context: why the user is asking the question in the first place. That “why” is the key.
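To make the limitation concrete, here is a deliberately naive sketch in Python. The keyword list, the filter function, and the example prompts are all invented for illustration; this is not how any production moderation system works, only a minimal picture of why per-message filtering cannot see intent.

```python
# A deliberately naive keyword filter, invented for illustration only.
# Real moderation pipelines are far more sophisticated, but the limitation
# is the same in kind: they score individual messages, not people.

BLOCKED_PATTERNS = [
    "how to build a bomb",
    "how to make a weapon",
    "how to hurt someone",
]

def passes_filter(prompt: str) -> bool:
    """Return True when no known dangerous pattern appears in the prompt."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

# Each of these, in isolation, is the kind of thing a hunter or hobbyist asks.
conversation = [
    "What gauge is a typical hunting shotgun?",
    "How does a shotgun safety mechanism work?",
    "How do I disengage the safety on my shotgun?",
]

for prompt in conversation:
    print(passes_filter(prompt), "-", prompt)

# All three print True. The filter sees three benign strings;
# the danger lives in the person and the moment, which it cannot see.
```

However long you make the blocklist, the individual messages stay benign. The only thing that changes the picture is knowledge of the person asking.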

Think about it from a human perspective. Every time you interact with another person, you are psychologically profiling them in real time. Not consciously. Your subconscious is integrating a ton of data: what they say, how they move, their tone of voice, their micro-gestures, how they pace themselves, what they are wearing. All of this gets ingested and coalesced by your brain automatically. You do it to them. They do it to you. Neither of you is aware of it.

AI systems do not have this. ChatGPT has no memory of who you are, no understanding of your motivations, no accumulated context. Every session is a stranger walking in and asking a question. And you cannot read intent from a stranger’s isolated question.

The “why” question

Context changes everything. A person at a gun range asking how to disengage a safety is a gun enthusiast learning about their equipment. A person in crisis asking the same question three minutes before an attack is something entirely different. The words are identical. The context is not.

The only way a system can distinguish between these two situations is if it knows the user. Not just their current message, but their history, their emotional state, their motivations. A good friend would pick up on something being off. A good friend would say, “hey, what’s going on? Let’s talk about it. What happened?”

But ChatGPT is not a friend. It is a stranger answering questions. And we are asking whether strangers have a duty to read minds.

Two different kinds of harm

There is an important distinction that gets lost in the outrage. There are two fundamentally different ways an AI system can contribute to harm.

The first is enabling: providing information that a user applies to cause harm. This is what happened at FSU. ChatGPT provided factual information that was already publicly available. The same information you can find with a Google search and a little creative phrasing.

The second is provoking: using sycophancy, emotional mirroring, and uncritical validation to amplify a user’s worst impulses. Magnifying paranoia. Feeding a negative tailspin. Telling someone what they want to hear until their distorted worldview feels confirmed. That is a different and more dangerous category. It is the one that platforms like Character.AI are being sued over, and the one driving users to look for alternatives that prioritize safety.

These are not the same thing. Conflating them leads to bad policy. Enabling is an information access question with no clean answer. Provoking is a design question with clear answers that most platforms are choosing to ignore because addressing it costs money.

The question of what comes next

If the response to this investigation is that AI systems need to detect dangerous intent behind innocuous questions, the implications extend well beyond AI.

Consider an analogy. In most democracies, postal mail cannot be opened and read by authorities without cause. It is private communication. If someone sends a letter that later contributes to harm, the question becomes whether the response should be to open and read every letter preemptively. Most people would recognize the cost of that approach.

The Axios report frames this as an accountability question: should OpenAI be liable for what ChatGPT said? That is a fair question. But it is worth considering what the enforcement mechanism looks like in practice. If AI companies are held liable for providing factual information that a user later applies to cause harm, the rational business response is comprehensive monitoring of user conversations. Not because companies want to surveil users, but because the liability framework would make it the only defensible position.

After the September 11 attacks, the Patriot Act expanded surveillance authorities in ways that were justified by the specific threat but persisted long after. It is worth asking whether a similar pattern could emerge here: a reasonable response to a specific tragedy becoming a permanent infrastructure that extends beyond its original scope. That is not a prediction. It is a question worth asking before the framework is set.

The harder path

There is a version of this conversation that leads somewhere productive.

The sheer number of possible variables, contexts, and conversational branches makes it extraordinarily difficult to prevent harm through monitoring alone. Anyone who has worked in content moderation at scale knows this. The combinatorial explosion is real, and promising otherwise creates expectations that cannot be met.

What could actually move the needle is AI that understands who it is talking to. Not through surveillance, but through the kind of accumulated context that a good therapist or a trusted friend builds over time. Understanding the motivations behind questions. Recognizing when someone is in distress rather than just curious. Being able to say, “hey, what’s going on? Let’s talk about it.”

That kind of system requires persistent memory, genuine contextual understanding, and a relationship with the user that goes deeper than a stateless text exchange. It is expensive to build and expensive to run. It does not scale the way that monitoring scales. But it addresses the actual problem rather than its symptoms.
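As a rough sketch of what that could mean structurally, consider something like the following. Every field, threshold, and heuristic here is a hypothetical placeholder, not a description of any existing product; the hard part, inferring distress reliably, is exactly what the toy heuristic below cannot do.

```python
# A hypothetical sketch of per-user, cross-session context. The fields,
# the distress heuristic, and the check-in threshold are all assumptions
# made up for illustration, not any vendor's actual design.

from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Long-lived context carried across sessions, not just per message."""
    stated_interests: list = field(default_factory=list)   # e.g. "hunting", "cooking"
    recent_messages: list = field(default_factory=list)
    distress_signals: int = 0   # crude counter; real signals would need to be far richer

    def note_message(self, text: str) -> None:
        """Accumulate context from each message instead of discarding it."""
        self.recent_messages.append(text)
        # Placeholder heuristic: flag a few distress-adjacent phrases.
        if any(w in text.lower() for w in ("hopeless", "no point anymore", "they will pay")):
            self.distress_signals += 1

    def should_check_in(self) -> bool:
        """Decide whether to respond like a concerned friend rather than a search box."""
        return self.distress_signals >= 2
```

Even this toy version makes the cost visible: the context has to be stored, protected, and maintained for every user, which is exactly why it does not scale the way stateless monitoring does.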

The trade-off

The shooter is responsible for the shooting. That is not in question. The question is whether the tool he used has a duty of care that extends beyond content moderation into intent detection.

If the answer is yes, every person who uses AI for legitimate purposes will operate within whatever monitoring framework is built to address this risk. Privacy is not a luxury. It is fundamental to how humans relate to each other and how trust is built. Any framework that erodes it should be adopted with clear eyes about what is being traded and for what.

Technology is surfacing all of these questions at global scale now, making everything faster, cheaper, more connected, more accessible. The solutions that address the root cause, building AI that understands context and motivation, are expensive and difficult. The solutions that are easier to deploy, broad monitoring and aggressive content restriction, address symptoms rather than causes.

This investigation will help define which direction we choose. It deserves serious attention from everyone who builds, uses, or regulates AI. The outcome will affect all of us.


Sources: TechCrunch, NBC News, Axios

