Carlos KiK

ChatGPT Health: The Tool You Want, the Trade You Don't See

Let me start with what they got right.

If you have ever sat in a doctor’s office trying to understand what “LDL 145 mg/dL” means for your actual life, while the doctor is already halfway out the door to the next patient, you know the problem ChatGPT Health is solving.

Healthcare systems are not designed for comprehension. They are designed for throughput. Efficiency in saving money, not efficiency in treating people like people. Because treating people like people requires time. And time is money. And the system has decided your understanding is not worth the cost.

So OpenAI built a tool that translates. Medical jargon into human language. Lab values into context. Appointment records into summaries you can actually use.

That part? Genuinely fantastic. Everyone should have access to that.

Now let me tell you what they are not telling you.

The exchange

Before ChatGPT Health, you could already copy-paste your lab results into ChatGPT and get an explanation. Screenshot your bloodwork. Upload a PDF. It was messy, unstructured, and it worked well enough.

So why build a dedicated product? Why connect directly to hospital systems through APIs? Why build the infrastructure to ingest structured, labeled, categorized medical records at scale?

Because messy screenshots are worthless for a data model. Structured API data is gold.

When your medical records flow through a dedicated pipeline, they arrive clean. Labeled. Categorized. Ready to be cross-referenced with everything else they already know about you.

The three-dimensional you

Think about what OpenAI already has on a regular ChatGPT user: their questions, their writing patterns, their emotional tone, their interests, their struggles, the things they are curious about at 2am.

Now add health data. Lab results, medications, diagnoses: the physical state of your body.

Body and mind are connected. Cause and effect. Stress causes inflammation. Depression affects immune response. Chronic pain changes cognition. When you combine the psychological profile they already have with the physical profile you are now handing them, you do not get a flat data point. You get a three-dimensional model of a human being.

That is not science fiction. That is the logical endpoint of combining two data streams that have always been separated.

The question I keep asking

What do you do with a three-dimensional model of a person at scale? Across millions of users?

You influence them.

The same playbook as ad-tech, but with orders of magnitude more intimate data. You know their fears (health anxiety), their vulnerabilities (medication side effects they are worried about), their decision-making patterns under stress.

I want to be wrong about this. I really do. But the incentive structure points in one direction. OpenAI burns more money than it makes. They need revenue. They need data that is more valuable than conversation logs. Medical data, structured and labeled, connected to behavioral profiles, is the most valuable data a tech company can possess.

The moment of maximum vulnerability

Here is what makes this different from every other data collection play: medical data is captured at the exact moment when rational decision-making is hardest.

When something is happening to your body and you do not understand it, the stress is overwhelming. You cannot process. You cannot analyze the trade-offs. Your prefrontal cortex is running at capacity just trying to manage the fear.

In that moment, you will grab onto anything, even if it burns. A tool that explains your lab results in plain language? You will not read the privacy policy. You will not think about data implications. You will click “Connect My Records” because you need to understand what is happening to you RIGHT NOW.

The system captures data at the exact moment when informed consent is most compromised. Not by design, necessarily. But the effect is the same whether it is intentional or not.

The knife

A knife is a tool. You can use it to cut bread. You can use it to hurt someone. The knife does not decide.

ChatGPT Health is a knife. The tool itself is genuinely useful. The question is not about the tool. It is about the profit pressure on the hand that holds it.

Who oversees this? Who ensures the data is used for translation and not for profiling? Who checks that the three-dimensional model of you is never sold, never cross-referenced, never used to influence your decisions?

OpenAI’s terms of service, that is who. The same terms of service that change every quarter.

What I actually think

If doctors had time to explain things like humans, OpenAI would not have a market here. This tool fills a gap that should not exist. The gap between what healthcare promises and what healthcare delivers.

That makes it valuable. And that makes it dangerous. Because the most powerful tools always emerge in the space between what people need and what systems fail to provide.

Use it. Understand your health data. Make informed decisions. But know the trade you are making. It is not free. It never is.

It is not what they are telling you. It is what they are not telling you. And what you are not seeing. That is the part worth thinking about.


Source: Fortune

