Carlos KiK

David Silver Raised $1.1 Billion to Teach AI Without Human Homework

There is a particular kind of AI news that makes the industry stop refreshing benchmark charts for five minutes and remember there are other ways to build intelligence.

David Silver just raised $1.1 billion for one of those ways.

Silver is not a random founder with a deck and a nice logo. He helped build the DeepMind systems that beat humans at Go and chess by learning from experience instead of memorizing human games. AlphaZero was not “autocomplete but for board games”. It was a different lesson entirely: give the system rules, self-play, feedback, and enough search, and it can discover strategies humans never wrote down.

Now Silver has left DeepMind and founded Ineffable Intelligence in London.

The pitch: build a “superlearner” that discovers knowledge and skills without relying on human-generated data.

The seed round: $1.1 billion.

Seed. As in, the seed has apparently been watered with sovereign wealth and rocket fuel.

The anti-scrape thesis

Most of modern AI was built by eating the internet.

Text. Code. Images. Videos. Books. Forums. Documentation. Scientific papers. All the weird little traces humanity left behind while arguing, building, explaining, and posting screenshots of errors at 3 a.m.

That worked. Obviously. Nobody serious can deny the results.

But the limitation is also obvious: if your system learns primarily from human output, it inherits the ceiling, noise, bias, and shape of that output. It can remix brilliantly, but the question remains: can it discover genuinely new strategies, concepts, and laws without waiting for humans to write examples first?

Reinforcement learning attacks that question from another angle.

The system acts. The world pushes back. It updates. Repeat until something interesting happens, or until your compute bill starts smoking.
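That act–feedback–update loop can be sketched in a few lines. Here is a minimal, hypothetical example in Python: an epsilon-greedy bandit, the simplest reinforcement-learning setting. The arm rewards and hyperparameters are illustrative only, not anything from Ineffable or DeepMind.

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """The basic RL loop: act, observe reward, update estimates."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms   # current value estimate per action
    counts = [0] * n_arms        # how often each action was tried

    for _ in range(steps):
        # Act: mostly exploit the best-known action, sometimes explore.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])

        # The world pushes back: a noisy reward around the arm's true mean.
        reward = rng.gauss(true_means[arm], 1.0)

        # Update: nudge the estimate toward what actually happened.
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts

# Three arms with hidden mean rewards; the learner never sees these directly.
estimates, counts = run_bandit([0.2, 0.5, 0.9])
best = max(range(len(estimates)), key=lambda a: estimates[a])
```

No human examples, no labeled data: the system discovers which action is best purely by trying things and being corrected by the feedback. Scaling that principle from three slot-machine arms to open-ended knowledge discovery is, roughly, the billion-dollar question.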

Why this is different from another LLM startup

Ineffable is not alone. Yann LeCun’s AMI Labs is betting on world models. Other DeepMind alumni are spinning out reinforcement-learning-heavy labs. The pattern is becoming too obvious to ignore.

The people who spent years closest to non-language forms of machine intelligence are not all rushing to build better chatbots.

That does not mean LLMs are doomed. Please, no religious wars. LLMs are incredibly useful and still improving. But the smartest money is starting to hedge against the idea that next-token prediction alone becomes everything.

That is the real signal in Silver’s round.

Not “LLMs are dead”.

“LLMs are not the whole map.”

The London angle matters too

TechCrunch notes that Ineffable joins a wave of U.K. AI labs founded by DeepMind alumni. That is not random. DeepMind gave London something Silicon Valley understands very well: alumni gravity.

One great lab produces people. Those people produce companies. Those companies attract capital. Capital attracts more people. Suddenly a city has a cluster.

This is how ecosystems compound.

My read

The odds of a $1.1 billion seed company producing a clean scientific revolution are impossible to handicap from the outside. Maybe Ineffable builds something historic. Maybe it burns heroic amounts of money discovering that reality is rude.

Both are possible.

But I like the bet because it is aimed at the right wall.

If AI is going to move past imitation and into discovery, it needs systems that can learn from interaction, experiment, failure, and feedback. It needs machines that do not just read what humans already wrote, but go poke the universe and come back changed.

That is not a chatbot feature.

That is a different category entirely.

Sources: TechCrunch, The Times

