Carlos KiK

Yann LeCun Raised $1 Billion to Prove Every AI Company Is Wrong

Yann LeCun left Meta, co-founded a company, and raised $1.03 billion in a single seed round at a $3.5 billion pre-money valuation. That is Europe’s largest seed round ever. And the thesis behind it is not “we will build a better LLM”. The thesis is that LLMs are fundamentally wrong.

Let that sink in. Everyone in AI is building on top of large language models. OpenAI, Anthropic, Google, Meta’s own Llama team. Trillions of dollars of infrastructure, research, and product development are stacked on the assumption that predicting the next token is the path to intelligence. LeCun looked at all of that and said: no. Wrong approach entirely.

That is either the most important insight in AI right now or the most expensive mistake anyone has ever made in public.

The argument against LLMs

LeCun has been making this argument for years, loudly and consistently. His position: LLMs operate purely in language space. They predict text, not reality. They have no internal model of how the world works. They cannot reason about physics, causality, or spatial relationships in any meaningful way. They approximate these things through statistical patterns in text, and that approximation hits a ceiling.

His alternative is JEPA, Joint Embedding Predictive Architecture. Instead of predicting the next word, JEPA predicts the next state of the world in an abstract representation space. Think of it as the difference between reading about how a ball bounces versus actually understanding the physics of bouncing. LLMs do the first thing. JEPA attempts the second.
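To make the distinction concrete, here is a toy numpy sketch of the JEPA-style objective. This is not AMI Labs' actual architecture; the encoder, predictor, and dynamics are illustrative stand-ins. The point is that both the prediction and the target live in an abstract representation space, not in a discrete token vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": observations are raw vectors (stand-ins for video frames).
obs_t = rng.normal(size=8)        # observation at time t
obs_next = obs_t * 0.9 + 0.1      # next observation under simple made-up dynamics

# Shared encoder maps raw observations into an abstract representation space.
W_enc = rng.normal(size=(4, 8))

def encode(x):
    return np.tanh(W_enc @ x)

# Predictor operates entirely in representation space: given the embedding
# of the current state, predict the embedding of the next state.
W_pred = rng.normal(size=(4, 4))

def predict(z):
    return np.tanh(W_pred @ z)

z_t = encode(obs_t)
z_next_target = encode(obs_next)  # target is an embedding, not raw pixels or tokens
z_next_pred = predict(z_t)

# JEPA-style loss: distance between predicted and actual next-state embeddings.
# An LLM-style objective would instead score a distribution over discrete next
# tokens; here there is no token vocabulary at all.
loss = float(np.mean((z_next_pred - z_next_target) ** 2))
print(loss)
```

The design choice this illustrates: by predicting in embedding space, the model is free to discard unpredictable surface detail and keep only the abstract state it needs, which is the core of LeCun's argument against next-token prediction.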

The targets are healthcare, robotics, and industrial applications. Domains where understanding the physical world is not optional. You cannot hallucinate your way through controlling a robotic arm or diagnosing a medical condition from imaging data. These are domains where the limitations of language models are most exposed.

The money and the people

The investor list reads like a who’s who of people who have been right before. Jeff Bezos. Eric Schmidt. Tim Berners-Lee. Samsung. Salesforce. These are not people who write billion-dollar checks on a whim.

The team is equally serious. The CEO is Alex LeBrun, who founded Nabla and previously built AI products at Meta. The COO is Laurent Solly, who spent years as Meta's VP for Southern Europe and then for all of Europe. The company is expanding across Paris, New York, Montreal, and Singapore. This is not a research lab with a blog. This is a company being built to compete at scale from day one.

And it is worth noting: LeCun did not just leave Meta. He left Meta to bet against the approach that Meta itself is pursuing with Llama. That takes either enormous conviction or enormous ego. Possibly both.

Why I think this matters regardless of outcome

Here is my honest take. I do not know if LeCun is right. Nobody does. The LLM paradigm has produced genuinely impressive results, and the scaling laws suggest there is still headroom. But the limitations are real too. Anyone who uses these tools daily, and I use them every single day, knows the failure modes. The confident hallucinations, the inability to truly reason through novel problems, the way they break down the moment you push past their training distribution.

The question is whether those are engineering problems that get solved with more scale and better training, or whether they are architectural limitations baked into the approach itself. LeCun is betting his reputation and a billion dollars on the latter.

What I find most interesting is the target domains. If AMI Labs were building another chatbot or coding assistant, I would be skeptical. Those are solved-enough problems where LLMs already work well. But healthcare imaging, robotics, industrial automation: these are domains where "works well enough" is not acceptable and where the physical-world understanding gap in LLMs is a genuine blocker.

The risk nobody wants to talk about

The elephant in the room is that JEPA is still largely theoretical at production scale. The academic results are promising but they are academic results. Bridging the gap between “interesting paper” and “product that works reliably in a hospital” is measured in years and billions, and plenty of promising approaches have died in that gap.

LeCun also has a history of being right about the direction but wrong about the timeline. He championed convolutional neural networks decades before they became dominant. He was right, eventually. But “eventually” in research can mean 10-20 years, and investors with a billion dollars deployed tend to want returns faster than that.

The other risk is talent. The entire AI talent pool has been trained on transformer architectures and language models. Recruiting engineers to work on a fundamentally different paradigm means either retraining people or competing for a very small pool of researchers who already think differently. That is hard even with unlimited money.

What this means for the industry

If AMI Labs produces results, if JEPA world models can do things that LLMs fundamentally cannot, the entire AI industry shifts. Not overnight, but the narrative changes. The investment thesis changes. The assumption that scaling language models is sufficient gets challenged with real evidence, not just theoretical arguments.

If AMI Labs fails, LeCun becomes the cautionary tale of the brilliant researcher who was too attached to his own theory. A billion dollars is a lot of money to spend proving a negative.

Either way, this is the most contrarian bet in AI right now. Everyone else is running in the same direction. LeCun is running the opposite way, with a billion dollars and some of the smartest backers in tech behind him. I respect the conviction even if I am not sure about the outcome.

The next two to three years will tell us whether LLMs are the foundation of AI or just the first chapter. LeCun is betting everything on the second option. That alone makes AMI Labs the most interesting company in AI to watch right now.


Sources: TechCrunch, MIT Technology Review

