OpenAI, Anthropic, and Google are in an all-out war. They are competing for talent, compute, customers, and the future of AI. They would happily watch each other fail.
And yet this week, all three agreed to cooperate on a single issue: Chinese firms are systematically stealing their models.
When companies that want each other dead decide to share intelligence, you know the threat is real.
The numbers are staggering
Anthropic published the most detailed account. They documented 16 million unauthorized exchanges from Chinese AI firms, routed through approximately 24,000 fraudulent accounts. The companies named: DeepSeek, Moonshot, and MiniMax.
Sixteen million exchanges. Not a few researchers poking around the API. Not a handful of curious developers testing the waters. This is industrial-scale extraction. Systematic, sustained, and deliberately designed to evade detection.
The method is model distillation. You query a frontier model millions of times with carefully crafted prompts, collect the outputs, and use those outputs to train a cheaper, smaller model that mimics the original. You are not stealing the weights. You are stealing the behavior. And from the outside, it looks like normal API usage until you zoom out and see the pattern.
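The extraction pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not any company's actual tooling: `teacher_model` is a hypothetical stand-in for a frontier-model API call, and the `prompt`/`completion` field names are a common supervised fine-tuning convention, assumed here for illustration.

```python
def teacher_model(prompt):
    # Hypothetical stand-in for an API call to a frontier model.
    # In a real extraction campaign this would be a paid API request.
    return f"Answer to: {prompt}"

def collect_distillation_data(prompts, teacher):
    # Query the teacher once per prompt and record (prompt, output) pairs.
    # The pairs become supervised training data for a smaller student model
    # that learns to imitate the teacher's behavior, not its weights.
    return [{"prompt": p, "completion": teacher(p)} for p in prompts]

# Carefully crafted prompts, scaled up to millions in a real campaign.
prompts = [f"Explain concept #{i}" for i in range(1000)]
dataset = collect_distillation_data(prompts, teacher_model)
```

Each individual request in this loop is indistinguishable from a legitimate customer query; the theft is only visible in the aggregate volume and structure of the prompts.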
Anthropic responded by blocking Chinese-controlled companies from accessing Claude entirely. OpenAI and Google are reportedly implementing similar measures. And all three are now sharing threat intelligence through the Frontier Model Forum, a consortium that until now existed mostly on paper.
Why this matters more than it looks
The surface story is intellectual property theft. Companies spent billions training models, and competitors are extracting that value for pennies on the dollar. That is a real business problem.
But the deeper story is strategic. The entire Western AI industry runs on the assumption that frontier models are worth paying for because they are hard to replicate. If Chinese firms can reliably distill those capabilities at a fraction of the cost, the economic moat around frontier AI disappears. Every billion dollars spent on training becomes a donation to anyone with enough API credits and patience.
This is not hypothetical. DeepSeek already demonstrated that you can build competitive models for a fraction of the cost. Their R1 model shocked the industry earlier this year. The question everyone was asking was how they did it so cheaply. Now we have a partial answer: they were not starting from scratch. They were starting from 16 million exchanges' worth of stolen behavior.
Can you actually stop model distillation?
This is the question nobody wants to answer honestly.
Model distillation through API access is fundamentally hard to prevent. The same API that serves legitimate customers serves bad actors. Rate limiting helps but does not solve the problem, because with 24,000 accounts you can distribute the load to look like normal usage from thousands of independent users.
Watermarking model outputs is one approach, but current techniques are fragile. They can be washed out during the distillation process. Behavioral fingerprinting is another, where you detect when query patterns match known distillation techniques. But the attackers adapt. It is an arms race, and the attackers have the advantage because they only need to succeed once while defenders need to block every attempt.
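One version of behavioral fingerprinting can be sketched in Python. The function names and the threshold are invented for illustration, and real detectors are far more sophisticated, but the core idea holds: normalize prompts into templates so that structurally identical queries collide, then flag templates shared by suspiciously many accounts. Each account in a distributed campaign looks unremarkable alone; the coordination is visible only across accounts.

```python
import re
from collections import defaultdict

def template_of(prompt):
    # Normalize the variable parts of a prompt (here, just digits) so
    # structurally identical queries map to the same template.
    return re.sub(r"\d+", "<N>", prompt.lower())

def flag_coordinated_templates(logs, min_accounts=50):
    # logs: iterable of (account_id, prompt) pairs from API traffic.
    # Flag any prompt template shared by at least min_accounts distinct
    # accounts -- the signature of load spread across a bot farm.
    accounts_by_template = defaultdict(set)
    for account, prompt in logs:
        accounts_by_template[template_of(prompt)].add(account)
    return {t for t, accts in accounts_by_template.items()
            if len(accts) >= min_accounts}
```

The weakness is exactly the arms-race dynamic described above: attackers can paraphrase prompts, randomize templates, and vary timing until the aggregate pattern dissolves back into the noise of legitimate traffic.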
The honest assessment: you cannot fully stop model distillation through technical means alone. You can make it harder, more expensive, and riskier. But as long as the models are accessible through an API, someone with sufficient motivation and resources will find a way to extract value from them.
Which means this fight will eventually move from the technical layer to the geopolitical layer. Export controls. Sanctions. Diplomatic pressure. The tools of statecraft, not software engineering.
The AI cold war is now a real thing
For the past two years, people have been using “AI cold war” as a metaphor. It is not a metaphor anymore.
You have the three most powerful AI companies in the West forming an intelligence-sharing alliance specifically to counter Chinese capabilities. You have documented evidence of state-adjacent Chinese firms conducting large-scale IP extraction. You have retaliatory access blocks. You have coordination through formal institutions.
This has all the hallmarks of a genuine technology conflict. The difference from previous tech rivalries is the speed. The semiconductor cold war played out over decades. The AI version is playing out in months.
The uncomfortable question
Here is what I keep thinking about. The West spent years building AI in the open. Open papers, open benchmarks, open APIs. The philosophy was that openness accelerates progress for everyone. And it did.
But “everyone” includes adversaries. The openness that made Western AI great also made it extractable. And now the response is to close ranks, restrict access, and share intelligence about threats.
The irony is thick. The AI industry built its identity on openness and is now discovering that openness has costs. The question going forward is whether you can maintain the benefits of open research while defending against systematic extraction. The answer is probably not, at least not fully. Something has to give.
For now, the Frontier Model Forum coordination is the right move. Sharing intelligence about attack patterns, coordinating on defensive techniques, presenting a united front. These are practical steps that make distillation harder and more detectable.
But let us be clear about what this is. It is not a solution. It is the beginning of a long, expensive, and probably permanent defensive posture. The AI cold war does not have an end date. It has an escalation curve.
And we are still in the early innings.
Sources: Bloomberg, Japan Times