Carlos KiK

Sam Altman Stopped Overseeing AI Safety. He Is Building Datacenters Instead.

Sam Altman told OpenAI staff he is handing safety oversight to CRO Mark Chen and security to Greg Brockman. His new focus: fundraising, supply chains, and building out $1.4 trillion in datacenter commitments over the next eight years.

And here is the thing that nobody wants to admit: he is kinda right.

Why he is right (and why that is scary)

Right now, we are still using brute force. If you want AI to be ubiquitous, you need an obscene amount of hardware. If you want to improve it, you need more hardware. There is no shortcut. Until someone discovers something that is an order of magnitude better than what we have, the only option is raw compute.

The LLM architecture is, at its core, a prediction engine for the next word, conditioned on everything it has been trained on. The reality is more complex than that, but that is the basic principle. And to make that basic principle work at the level people now expect, you need factories full of GPUs running around the clock.
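The next-word-prediction principle can be sketched in a few lines. This is a toy illustration only: the vocabulary and the scores are made up, whereas a real LLM derives them from billions of learned parameters.

```python
import math

# Toy sketch of next-token prediction (illustrative, not a real model).
# Hypothetical raw scores ("logits") for candidate continuations of
# the phrase "the cat sat on the".
vocab = ["mat", "moon", "table", "dog"]
logits = [4.1, 0.3, 2.2, 1.0]

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The model "predicts" by taking the most probable word
# (or sampling from the distribution).
best = vocab[probs.index(max(probs))]
print(best)  # "mat" has the highest score, so it is the predicted next word
```

Everything the industry is scaling up is, at bottom, this loop run over an enormous vocabulary, an enormous context, and an enormous parameter count.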

Datacenters are the AI factory. They are where the intelligence is being built, or at least where the approximation of intelligence is being assembled through brute force. Altman knows this. The infrastructure is the bottleneck, and if you do not solve the bottleneck, nothing else matters.

So he is focusing on the bottleneck. That is rational leadership.

The fake-it-till-you-make-it phase

Let me be honest about where we actually are: we are in the fake-it-till-you-make-it phase.

Current AI is not intelligent in any meaningful sense. It is useful, extraordinarily useful, but what looks like intelligence is a predictor engine running on enough data and compute to create a convincing illusion. The discoveries AI systems are making do not come from vision or understanding. They come from trying a billion combinations until something sticks, then following the thread.

That is not nothing. A system that can explore a billion combinations without getting tired, without having a bad day, without needing sleep or motivation produces real results. But it is not intelligence. It is industrialized trial and error.

The hope, and this is what the entire industry is betting on, is that you build the tools that build the tools that build the tools that eventually build real intelligence. A self-evolving cycle where each generation of AI helps create the next, slightly better generation. Each iteration finds optimizations that humans would take years to discover.

Just a few days ago, Google published TurboQuant, a software improvement that gives 6x memory compression with zero accuracy loss. That is an enormous gain from a purely algorithmic insight, no new hardware needed. There is still so much performance left on the table because we are humans and it takes us time to find novel approaches. But what if you could automate the search for those approaches? What if instead of walking a thousand roads sequentially, you could try all of them simultaneously and collapse the timeline?
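To make the quantization idea concrete: the sketch below is not TurboQuant (whose actual method is not described here), just a naive symmetric int8 scheme showing why quantization saves memory at all. Each float32 weight costs 4 bytes; the int8 code costs 1, and a single scale factor approximately recovers the original values.

```python
import numpy as np

# Naive symmetric int8 quantization -- a toy illustration, not TurboQuant.
# float32 weights (4 bytes each) are mapped to int8 codes (1 byte each).
rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)

scale = np.abs(weights).max() / 127.0           # map the value range onto int8
q = np.round(weights / scale).astype(np.int8)   # 1 byte per weight
deq = q.astype(np.float32) * scale              # approximate reconstruction

print(weights.nbytes // q.nbytes)               # 4x smaller in this naive scheme
print(float(np.abs(weights - deq).max()) <= scale)  # error bounded by the scale
```

Real schemes push further (4-bit codes, per-block scales, outlier handling), which is where headline ratios like 6x come from; the point is only that these are algorithmic wins layered on top of fixed hardware.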

That is the dream. And it requires compute. Mountains of compute.

The numbers that should make you nervous

$1.4 trillion in commitments. Those are words on paper until someone is knee-deep in concrete and power purchase agreements. Altman knows that once you start building, you cannot stop. You have to see it through. That is why he is personally driving the infrastructure, because you can say you will commit a trillion dollars, but words are cheap. Being physically present, negotiating deals, managing supply chains, that makes it real and irreversible.

The obscene part is the scale. If something goes wrong, if AI does not deliver what everyone has bet on, the avalanche effect on the global economy will be beyond scary. AI is the current push for the next phase of human evolution. Everybody has gone all in. Every major government, every major corporation, every major investor.

It had better deliver. Because if it does not, the few grifters at the top will make their money and disappear, and the rest of the world will be in collective pain. Nothing is free. Everything is interconnected. The bet has been placed and it cannot be taken back.

What Altman actually cares about

Let me state what I see without judgment: Altman cares about OpenAI winning the race. He cares about making it happen. Safety, in his calculation, is a constraint to manage, not a mission to pursue. He has delegated it because it is not where the existential risk to his company lies. The existential risk to OpenAI is not that their AI is unsafe. It is that someone else builds the infrastructure first.

I am not against this and I am not for it. I am stating what I observe.

The 20-watt problem

Here is what I keep circling back to. The human brain runs on roughly 20 watts. It produces genuine intelligence, creativity, intuition, and consciousness on the energy budget of a dim light bulb.

The AI industry is building gigawatt datacenters, consuming the electricity of small countries, to produce a convincing approximation of what 20 watts accomplishes naturally.

That gap, between 20 watts of real intelligence and gigawatts of simulated intelligence, is the measure of how far we still have to go. And until that gap closes, we need the datacenters. We need the brute force. We need someone like Altman obsessing over concrete and power lines.
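The size of that gap is worth doing the back-of-the-envelope arithmetic on, using the round numbers above: 20 watts for a brain against a one-gigawatt datacenter.

```python
# Back-of-the-envelope on the efficiency gap described above.
brain_watts = 20           # rough power budget of a human brain
datacenter_watts = 1e9     # a one-gigawatt AI datacenter

ratio = datacenter_watts / brain_watts
print(f"{ratio:.0e}")      # 5e+07: fifty million brains' worth of power
```

Fifty million to one is not a gap you close with incremental tuning, which is exactly why the brute-force buildout continues.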

The show must go on. It started on some questionable foundations, took off in a direction nobody predicted, and has now become something critical because it genuinely empowers people to do things that were impossible before. You cannot stop it. You can only hope that the people building the infrastructure are building it for the right reasons.

And hope, as an engineering strategy, has a poor track record.


Sources: Techmeme, Investing.com


