Carlos KiK

I Pay $110 a Month for Claude. I Now Get One-Fifth of What I Used to.

I use Claude every day. It is central to how I work. I pay $110 a month for the MAX subscription because the tool genuinely makes me more productive. Or it did.

Starting around March 23, sessions that used to last all day started draining in under two hours. The same workloads, the same prompts, the same complexity. But the quota evaporates five times faster than it did a month ago. I am paying the same price for roughly one-fifth of the capacity.

And I am not alone.

What happened

Reddit exploded. A thread titled “20x max usage gone in 19 minutes” gathered 330+ comments in 24 hours. Another, “Claude Code Limits Were Silently Reduced and It’s MUCH Worse”, hit 360+ comments in six days. MAX subscribers, people paying $100-200 a month, reported burning through their entire quota in a single hour on workloads that previously lasted all day.

Anthropic’s response on March 26: they were “intentionally adjusting 5-hour session limits to manage growing demand”, affecting “approximately 7% of users during weekday peak hours”.

The community’s response: that is not what we are experiencing. The drain affects far more than 7%, and it is happening at all hours, not just peak times. Multiple engineers and journalists noted the timing coincided with what looks like a broken prompt caching system. The “intentional” explanation appears to be cover for a technical regression.

By April 1, Anthropic finally admitted that “people are hitting usage limits in Claude Code way faster than expected” and called it “the top priority for the team”. That admission came a full week after users started flooding GitHub and Reddit.

The timeline is worse than you think

This did not start in March. It has been building since January.

January 8: users reported Opus limits silently reduced and model quality degradation. The model switched from “think then speak” to “speak without thinking”, producing verbose, less accurate output.

January 26-28: Anthropic confirmed a “massive quality regression” from a harness issue. During this window, Claude was marking edits as complete when they had not been applied.

January 27 to February 3: 19 official incidents in eight days. A memory leak shipped to production. 1,469 GitHub issues opened.

Late March: Anthropic silently capped rule files at 20 without any announcement, migration guide, or explanation. Developers discovered the limit by hitting errors mid-session. Worse, the cap means Claude silently bypasses deny rules when given too many commands because security checks consume too many tokens. A security regression, undisclosed.

Also late March: Opus was restricted from third-party tools like Cursor and Windsurf. Developers paying MAX specifically for Opus access found it blocked in the environments they actually use.

March 31: Anthropic accidentally shipped a 59.8MB debug file in an npm package containing nearly 2,000 TypeScript files and 512,000 lines of source code. When they tried to suppress it via takedown, they accidentally targeted 8,100 GitHub repositories.

What I think about this

I am in an unusual position here. I depend on this tool. It is woven into my daily workflow. I am not writing this from the sidelines. I am writing it as someone whose productivity just dropped by 80% for the same price.

The pattern is what bothers me most: silent changes, no announcements, retroactive explanations that do not match user experience, and a week-long gap between the problem starting and the company acknowledging it exists.

If you are going to reduce capacity, say so. If you broke the caching system, say so. If you need to manage demand, raise the price or be transparent about the limits. What you do not do is silently degrade the service, let your users discover it through frustration, and then frame it as “intentional management” when the evidence suggests a technical failure.

The broader issue

This is not just an Anthropic problem. Every flat-rate AI subscription faces the same tension: for heavy users, the compute cost exceeds the subscription price, so the service is effectively subsidized. And when demand grows faster than capacity, the company has three options: raise prices, reduce capacity, or find efficiencies.

Anthropic chose to reduce capacity without telling anyone. That is the worst of the three options, because it destroys the one thing a subscription product depends on: trust.

I will keep using Claude because there is no equivalent alternative for my workflow right now. But I am using it with the clear understanding that what I am paying for today may not be what I receive tomorrow. And that understanding changes how I build my systems and how much I depend on any single tool.

The lesson, and it applies to every AI tool, not just Claude: build your workflow so that no single provider’s silent change can take you down. Because they will change things. They will not tell you. And by the time they admit it, you will have already lost a week of productivity figuring out why everything broke.
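In practice, "no single provider can take you down" means putting a thin abstraction between your workflow and any one vendor, with an explicit fallback order. Here is a minimal sketch of that idea; the `Provider` wrapper, `QuotaExhausted` error, and provider names are all hypothetical illustrations, not any vendor's real API:

```python
from dataclasses import dataclass
from typing import Callable


class QuotaExhausted(Exception):
    """Raised when a provider refuses work (quota drained, rate limit, outage)."""


@dataclass
class Provider:
    # A provider is just a name plus a callable; real code would wrap an SDK.
    name: str
    complete: Callable[[str], str]


def complete_with_fallback(providers: list[Provider], prompt: str) -> tuple[str, str]:
    """Try providers in order; return (provider_name, result) from the first
    that succeeds, so one vendor's silent limit change degrades you to a
    backup instead of stopping you cold."""
    errors: list[str] = []
    for p in providers:
        try:
            return p.name, p.complete(prompt)
        except QuotaExhausted as exc:
            errors.append(f"{p.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))


# Demo: the primary is "out of quota", so the backup answers.
def primary(prompt: str) -> str:
    raise QuotaExhausted("session limit reached")


def backup(prompt: str) -> str:
    return f"echo: {prompt}"


name, result = complete_with_fallback(
    [Provider("primary", primary), Provider("backup", backup)],
    "hello",
)
print(name, result)
```

The point is not the ten lines of code; it is that your prompts, logs, and tooling talk to the abstraction, so swapping or demoting a provider is a one-line config change rather than a week of rework.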


Sources: The Register, DevClass, PiunikaWeb, WebProNews

