The pattern
Three weeks ago, I wrote about Anthropic silently slashing Claude quotas. MAX subscribers paying $110/month reported burning through their entire allocation in under two hours on workloads that previously lasted all day. Anthropic called it an “intentional adjustment” affecting “approximately 7% of users”. The community called it something else.
Now Opus 4.7 has shipped, and the pattern is repeating with a twist.
The token tax
Anthropic disclosed in their release notes that Opus 4.7 includes a new tokenizer. Their own documentation states that the same input now requires “1.0 to 1.35x more tokens”. In plain language: the same prompt that cost you 1,000 tokens yesterday costs you up to 1,350 tokens today.
Boris Powerstein, Anthropic’s head of product, posted on X that they “increased rate limits for all subscribers to make up for it”. That sounds generous until you do the math. If your input costs 35% more tokens and your rate limit increased by less than 35%, you are paying more for the same work. Whether the increase actually compensates is a question Anthropic has not answered with numbers.
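To make that concrete, here is a back-of-the-envelope sketch in Python. Only the 1.35x multiplier comes from the release notes; the session allowance, prompt size, and the 20% limit increase are hypothetical figures for illustration, since Anthropic has published no actual numbers.

```python
# Break-even check: how much must a rate limit grow to offset
# a tokenizer that inflates every prompt by up to 35%?
# Only TOKEN_INFLATION comes from the release notes; the limit,
# prompt size, and 20% increase below are hypothetical.

TOKEN_INFLATION = 1.35  # same input, up to 35% more tokens

def requests_per_session(limit_tokens: int, tokens_per_request: int) -> int:
    """Number of requests that fit inside a token allowance."""
    return limit_tokens // tokens_per_request

limit = 1_000_000   # hypothetical session allowance, in tokens
prompt = 1_000      # hypothetical prompt size, in tokens

before = requests_per_session(limit, prompt)
# Suppose the promised "increased rate limits" means +20%:
after = requests_per_session(int(limit * 1.20),
                             int(prompt * TOKEN_INFLATION))

print(f"requests before: {before}")  # 1000
print(f"requests after:  {after}")   # 888 -- a real cut of ~11%
# Break-even requires the limit to grow by the full 35%;
# anything less is a quota reduction wearing a generosity costume.
```

The ratio is all that matters: 1.20 / 1.35 ≈ 0.89, so a 20% bump would still leave roughly 11% fewer requests per session. Until Anthropic publishes the actual increase, nobody can check which side of break-even they landed on.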
The model also introduces “adaptive thinking” and a new “xhigh” effort level, both of which by design consume more output tokens. Users are already reporting that Opus 4.7 thinks longer on tasks that Opus 4.6 handled quickly. More thinking tokens mean faster quota drain. Anthropic frames this as “better reasoning”. Users experience it as “my session ended sooner”.
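The output side compounds the input side. Here is the same sketch extended with thinking tokens; the 2x thinking multiplier is purely an assumption for illustration, since Anthropic has published no figures for how much longer adaptive thinking runs.

```python
# Combined drain: input inflation (from the release notes) plus
# longer "adaptive thinking" (THINKING_GROWTH is an assumption;
# no official figure exists).

INPUT_INFLATION = 1.35   # documented upper bound
THINKING_GROWTH = 2.0    # hypothetical: Opus 4.7 thinks twice as long

def session_tokens(requests: int, inp: int, out: int, thinking: int) -> int:
    """Total tokens a workload burns in one session."""
    return requests * (inp + out + thinking)

# Hypothetical workload: 200 requests, 1k in, 500 out, 2k thinking each
old = session_tokens(200, 1_000, 500, 2_000)
new = session_tokens(200, int(1_000 * INPUT_INFLATION), 500,
                     int(2_000 * THINKING_GROWTH))

print(f"Opus 4.6 session: {old:,} tokens")  # 700,000
print(f"Opus 4.7 session: {new:,} tokens")  # 1,170,000
print(f"drain rate: {new / old:.2f}x")      # 1.67x
# Under these assumptions, the same workload exhausts quota
# roughly 1.7x faster -- consistent with what users are reporting.
```

Plug in your own numbers; the point is that input inflation and longer thinking multiply rather than add, which is why sessions end so much sooner than a 35% figure alone would suggest.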
The credit wipe
Reports are circulating that users who accumulated bonus credits or “extra usage tokens” in their accounts saw those balances zeroed out the moment their subscription lapsed or changed. Not expired over time. Not prorated. Zeroed. Instantly.
If this is accurate, it means Anthropic is giving users credits as a retention tool and then confiscating them as a punishment for any subscription interruption. That is not a loyalty program. That is a hostage situation dressed up as a benefit.
Why this matters more than pricing
Every SaaS company adjusts pricing. That is normal. What is not normal is the pattern of doing it indirectly.
You do not raise the price. You change the tokenizer so the same work costs more tokens. You do not reduce the quota. You change the model so it consumes tokens faster through “adaptive thinking”. You do not announce that credits are being revoked. You let them accumulate and then wipe them on a technicality.
Each of these individually is defensible. “We improved the tokenizer.” “We added deeper reasoning.” “Credits are tied to active subscriptions.” In isolation, each is a reasonable business decision.
Together, they form a pattern that erodes trust. And trust is the only thing standing between a $110/month subscription and a user who decides to try the alternative.
The competitive context
This is happening while Anthropic is at $30B ARR and reportedly preparing for even larger fundraising. The company is not struggling financially. These are not desperate measures from a company trying to survive. These are optimization decisions from a company trying to maximize revenue per user while maintaining the appearance of generosity.
The irony is that Anthropic’s entire brand was built on being the responsible, trustworthy AI company. The one that cared about safety. The one that was transparent about capabilities and limitations. The one that treated users as partners rather than revenue units.
Every silent quota change, every indirect price increase, every credit wipe chips away at exactly the reputation that makes people choose Claude over ChatGPT.
What I am doing
I use Claude every day. It is central to how I build. That dependency gives Anthropic leverage, and they know it. The question is whether they will keep using that leverage to extract value from their most committed users, or whether they will recognize that the users paying $110/month are the ones who will either become their biggest evangelists or their most vocal critics.
Right now, I am becoming the latter.
Sources: Anthropic Opus 4.7 Release, Previous: Claude Quota Crisis