zlacker

[parent] [thread] 48 comments
1. aanet+(OP)[view] [source] 2026-01-23 00:29:38
^ THIS

I've run out of quota on my Pro plan so many times in the past 2-3 weeks. This seems to be a recent development. And I'm not even that active. Just one project, executed in Plan > Develop > Test mode, just one terminal. That's it. I keep hitting the limit and waiting out a quota reset every few hours.

What's happening @Anthropic ?? Anybody here who can answer??

replies(8): >>genewi+y >>fragme+J1 >>vbezhe+g8 >>bmurph+P9 >>heavys+ua >>Millio+uc >>alexk6+bf >>behnam+GB
2. genewi+y[view] [source] 2026-01-23 00:34:13
>>aanet+(OP)
sounds like the "thinking tokens" are a mechanism to extract more money from users?
replies(4): >>mystra+T2 >>vunder+g4 >>arthur+Bb >>behnam+RB
3. fragme+J1[view] [source] 2026-01-23 00:47:51
>>aanet+(OP)
How quickly do you also hit compaction when running? Also, if you open a new CC instance and run /context, what does it show as the percentage used by tools/memories/skills? And that's before we look at what you're actually doing. CC will add whatever context to each prompt it thinks is necessary. So if you've got a small number of large files (vs. a large number of smaller files), at some level that'll contribute to the problem as well.

Quota is basically a count of tokens, so if a new CC session starts with the context window already relatively full, that could explain what's going on. Also, what language is this project in? If it's something noisy that burns many tokens fast, then even if you're using agents to preserve the context window in the main CC session, those tokens still count against your quota, so you'd still be hitting it awkwardly fast.

◧◩
4. mystra+T2[view] [source] [discussion] 2026-01-23 01:00:51
>>genewi+y
It's the clanker version of the "Check Wallet Light" (check engine light).
◧◩
5. vunder+g4[view] [source] [discussion] 2026-01-23 01:13:19
>>genewi+y
Anecdotal, but it definitely feels like in the last couple of weeks CC has become more aggressive about pulling in significantly larger chunks of an existing code base - even for some simple queries I'll see it easily ramp up to 50-60k tokens of usage.
replies(4): >>genewi+26 >>troyvi+uu >>behnam+ZB >>vidarh+lQ
◧◩◪
6. genewi+26[view] [source] [discussion] 2026-01-23 01:28:21
>>vunder+g4
I'm curious if anyone has logged the number of thinking tokens over time. My implication was the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

They get to see (unless you've opted out) your context, ideas, source code, etc., and in return you give them $220 and they give you back "out of tokens".

replies(3): >>throwu+v8 >>jumplo+ko >>Nitpic+ay
7. vbezhe+g8[view] [source] 2026-01-23 01:48:28
>>aanet+(OP)
This whole API-vs-plan split looks weird to me. Why not force everyone to use the API? You pay for what you use; it's very simple. The API should be the most honest way to monetize, right?

This fixed subscription plan with its vaguely specified quotas looks like they want to extract extra money from the users who pay $200 and don't use that value, while at the same time preventing other users from going over $200. I understand that it might work at scale, but it just feels a bit unfair to everyone?

replies(5): >>rootus+wl >>throwu+Ir >>nobody+1D >>sailfa+0Q1 >>thadk+Uw7
◧◩◪◨
8. throwu+v8[view] [source] [discussion] 2026-01-23 01:50:41
>>genewi+26
> My implication was the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

It's also a way to improve performance on the things their customers care about. I'm not paying Anthropic more than I do for car insurance every month because I want to pinch ~~pennies~~ tokens, I do it because I can finally offload a ton of tedious work on Opus 4.5 without hand holding it and reviewing every line.

The subscription is already such a great value over paying by the token, they've got plenty of space to find the right balance.

9. bmurph+P9[view] [source] 2026-01-23 02:08:56
>>aanet+(OP)
I've been hitting the limit a lot lately as well. The worst part is that I try to compact things and check my limits using the / commands and can't make heads or tails of how much I actually have left. It's not clear at all.

I've been using CC until I run out of credits and then switch to Cursor (my employer pays for both). I prefer Claude but I never hit any limits in Cursor.

replies(1): >>rubenf+k02
10. heavys+ua[view] [source] 2026-01-23 02:17:33
>>aanet+(OP)
Like a good dealer, they gave you a cheap/free hit and now you want more. This time you're gonna have to pay.
replies(1): >>codera+Mhc
◧◩
11. arthur+Bb[view] [source] [discussion] 2026-01-23 02:31:04
>>genewi+y
Their system prompt + MCP are more of the culprit here. With 16 tools and sophisticated parameters, you're looking at 24K tokens minimum.
12. Millio+uc[view] [source] 2026-01-23 02:39:41
>>aanet+(OP)
I very recently (~1 week ago) subscribed to the Pro plan and was indeed surprised by how fast I reached my quota compared to, say, Codex at a similar subscription tier. The UX of Claude Code is generally really cool, which left me with a bittersweet feeling of not even being able to truly explore all the possibilities: after just making basic plans and code changes, I'm already out of quota for experimenting with various ways of using subagents, testing background stuff, etc.
replies(4): >>0x500x+4l >>behnam+NB >>jack_p+ae1 >>rubenf+KZ1
13. alexk6+bf[view] [source] 2026-01-23 03:03:53
>>aanet+(OP)
[BUG] Instantly hitting usage limits with Max subscription: https://github.com/anthropics/claude-code/issues/16157

It's the most commented issue on their GitHub and it's basically ignored by Anthropic. Title mentions Max, but commenters report it for other plans too.

replies(2): >>czk+3h >>quikoa+YJ
◧◩
14. czk+3h[view] [source] [discussion] 2026-01-23 03:19:40
>>alexk6+bf
“After creating a new account, I can confirm the quota drains 2.5x–3x slower. So basically Max (5x) on an older account is almost like Pro on a new one in terms of quota. Pretty blatant rug pull tbh.”

lol

replies(1): >>Aeolun+XX
◧◩
15. 0x500x+4l[view] [source] [discussion] 2026-01-23 03:54:49
>>Millio+uc
I use opencode with codex after all the shenanigans from anthropic recently. You might want to give that a shot!
◧◩
16. rootus+wl[view] [source] [discussion] 2026-01-23 03:58:58
>>vbezhe+g8
You're welcome to use the API, it asks you to do that when you run out of quota on your Pro plan. The next thing you find out is how expensive using the API is. More honest, perhaps, but you definitely will be paying for that.
replies(1): >>Jeff_B+gj1
◧◩◪◨
17. jumplo+ko[view] [source] [discussion] 2026-01-23 04:29:01
>>genewi+26
I believe Claude Code recently turned on max reasoning for all requests. Previously you’d have to set it manually or use the word “ultrathink”
◧◩
18. throwu+Ir[view] [source] [discussion] 2026-01-23 05:01:35
>>vbezhe+g8
Consumers like predictable billing more than they care about getting the most bang for their buck, and bean counters like sticky recurring revenue streams more than they care about maximizing the profit margin on every user.
replies(1): >>charci+xy
◧◩◪
19. troyvi+uu[view] [source] [discussion] 2026-01-23 05:33:33
>>vunder+g4
This really speaks to the need to separate the LLM you use from the coding tool that uses it. LLM makers using the SaaS model make money on the tokens you spend whether or not they were needed. Tools like aider and opencode (each in their own way) build a map of the codebase that lets them work with code using fewer tokens. When I see posts like this I start to understand why Anthropic now blocks opencode.

We're about to get Claude Code for work and I'm sad about it. There are more efficient ways to do the job.

replies(1): >>ayewo+6y
◧◩◪◨
20. ayewo+6y[view] [source] [discussion] 2026-01-23 06:13:07
>>troyvi+uu
When you state it like that, I now totally understand why Anthropic have a strong incentive to kick out OpenCode.

OpenCode is incentivized to make a good product that uses your token budget efficiently since it allows you to seamlessly switch between different models.

Anthropic as a model provider on the other hand, is incentivized to exhaust your token budget to keep you hooked. You'll be forced to wait when your usage limits are reached, or pay up for a higher plan if you can't wait to get your fix.

CC, specifically Opus 4.5, is an incredible tool, but Anthropic is handling its distribution the way a drug dealer would.

replies(3): >>vidarh+CQ >>jack_p+Le1 >>Brian_+DY1
◧◩◪◨
21. Nitpic+ay[view] [source] [discussion] 2026-01-23 06:13:37
>>genewi+26
> My implication was the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

I've done RL training on small local models, and there's a strong correlation between length of response and accuracy. The more they churn tokens, the better the end result gets.

I actually think that the hyper-scalers would prefer to serve shorter answers. A token generated at 1k ctx length is cheaper to serve than one at 10k context, and way way cheaper than one at 100k context.

replies(1): >>genewi+5G
◧◩◪
22. charci+xy[view] [source] [discussion] 2026-01-23 06:16:50
>>throwu+Ir
I just like being able to make like $250 of API calls for $20.
replies(1): >>Aeolun+Mb1
23. behnam+GB[view] [source] 2026-01-23 06:40:45
>>aanet+(OP)
> I've run out of quota on my Pro plan so many times in the past 2-3 weeks.

Waiting for Anthropic to somehow blame this on users again. "We investigated, turns out the reason was users used it too much".

◧◩
24. behnam+NB[view] [source] [discussion] 2026-01-23 06:41:46
>>Millio+uc
Use cliproxyapi and use any model in CC. I use Codex models in CC and it's the best of both worlds!
◧◩
25. behnam+RB[view] [source] [discussion] 2026-01-23 06:42:37
>>genewi+y
Probably; they recently said that ultrathink is enabled by default now.
replies(1): >>genewi+9G
◧◩◪
26. behnam+ZB[view] [source] [discussion] 2026-01-23 06:43:33
>>vunder+g4
> more aggressive at pulling in significantly larger chunks of an existing code base

They need more training data, and with people moving on to OpenCode/Codex, they wanna extract as much data from their current users as possible.

◧◩
27. nobody+1D[view] [source] [discussion] 2026-01-23 06:55:09
>>vbezhe+g8
The fixed-fee plan exists because the agent and the tools make internal choices/plans that affect cost. If you simply pay via the API, the only feedback to them that they're being too costly is you stopping.

If you look at tool calls like MCP and whatnot, you can see it gets ridiculous. Even when a call is small (calling the pal MCP from the prompt, for example), it's still burning tokens afaik. This is "nobody's" fault in this case really, but you can see what the incentives are, and we all need to think about how to make this entire space more usable.

◧◩◪◨⬒
28. genewi+5G[view] [source] [discussion] 2026-01-23 07:21:54
>>Nitpic+ay
> there's a strong correlation between length of response and accuracy

I'd need to see real numbers. I can trigger a thinking model to generate hundreds of tokens and return a 3-word response (however many tokens that is), or switch to a non-thinking model of the same family that just gives the same result. I don't necessarily doubt your experience; I just haven't had that experience tuning SD, for example, which is also transformer-based.

I'm sure there's some math reason why longer context = more accuracy, but is that intrinsic to transformer-based LLMs? That is, per your thought that the hyperscalers want shorter responses: do you think they are expending more effort to get shorter responses of equivalent accuracy, or are they trying to find some other architecture to overcome the "limitations" of the current one?

◧◩◪
29. genewi+9G[view] [source] [discussion] 2026-01-23 07:22:44
>>behnam+RB
Does this translate into "the end-user's cost goes up"

by default?

◧◩
30. quikoa+YJ[view] [source] [discussion] 2026-01-23 07:57:24
>>alexk6+bf
It's not a bug it's a feature (for Anthropic).
replies(1): >>cyanyd+Yc1
◧◩◪
31. vidarh+lQ[view] [source] [discussion] 2026-01-23 08:54:51
>>vunder+g4
It's absolutely a work-around in part, but use sub-agents, have the top level pass in the data, and limit the tool use for the sub-agent (the front matter can specify allowed tools) so it can't read more.

(And once you've done that, also consider whether a given task can be achieved with a dumber model - I've had good luck switching some of my sub-agents to Haiku).
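
For anyone wanting to try this, here's a minimal sketch of such a restricted sub-agent definition. (The file location and front-matter fields follow my recollection of Claude Code's sub-agent docs, so treat the details as assumptions; the tool names are illustrative.)

```markdown
---
name: code-searcher
description: Searches the codebase and reports matching lines only.
tools: Grep, Glob   # deliberately no Read/Bash, so it can't pull whole files into context
model: haiku        # cheaper model for a mechanical search task
---
You are a search assistant. Return only file paths and matching lines,
never full file contents.
```

The `tools` allow-list and the cheaper `model` field are exactly the two levers described above: the sub-agent physically can't read more, and what it does read is billed at a cheaper rate.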

◧◩◪◨⬒
32. vidarh+CQ[view] [source] [discussion] 2026-01-23 08:57:56
>>ayewo+6y
OpenCode also would be incentivized to do things like having you configure multiple providers and route requests to cheaper providers where possible.

Controlling the coding tool absolutely is a major asset, and it will become an even greater asset as the improvements in each model iteration make it matter less which specific model you're using.

◧◩◪
33. Aeolun+XX[view] [source] [discussion] 2026-01-23 10:07:09
>>czk+3h
Your quota also seems to be higher after unsubscribing and resubscribing?
replies(1): >>Curiou+3V1
◧◩◪◨
34. Aeolun+Mb1[view] [source] [discussion] 2026-01-23 12:01:34
>>charci+xy
If only it were API calls. I like using it through Claude Code, but it would be infinitely more flexible if my $200 subscription worked through the API.
replies(1): >>frotau+Fe1
◧◩◪
35. cyanyd+Yc1[view] [source] [discussion] 2026-01-23 12:10:47
>>quikoa+YJ
It's not a bug, it's a poorly defined business model!
◧◩
36. jack_p+ae1[view] [source] [discussion] 2026-01-23 12:20:30
>>Millio+uc
I remember a couple of weeks ago, when people raved about Claude Code, getting the feeling that there's no way this is sustainable; they must be burning tokens like crazy if it's used as described. Guess Anthropic did the math as well, and now we're here.
◧◩◪◨⬒
37. frotau+Fe1[view] [source] [discussion] 2026-01-23 12:25:13
>>Aeolun+Mb1
I don't understand, you CAN use claude code through the API.
replies(1): >>horsaw+nq1
◧◩◪◨⬒
38. jack_p+Le1[view] [source] [discussion] 2026-01-23 12:25:40
>>ayewo+6y
You think that after $27 billion invested they're gonna be ethical, rather than wanting to get their money back as fast as possible?
◧◩◪
39. Jeff_B+gj1[view] [source] [discussion] 2026-01-23 12:58:58
>>rootus+wl
I tried the API once. Burned 7 dollars in 15 minutes.
◧◩◪◨⬒⬓
40. horsaw+nq1[view] [source] [discussion] 2026-01-23 13:42:48
>>frotau+Fe1
Yeah, but he can't use his $200 subscription for the API.

That's limited to accessing the models through code/desktop/mobile.

And while I'm also using their subscriptions because of the cost savings vs direct access, having the subscription be considerably cheaper than the usage billing rings all sorts of alarm bells that it won't last.

◧◩
41. sailfa+0Q1[view] [source] [discussion] 2026-01-23 15:55:53
>>vbezhe+g8
Not a doctor or anything, but API usage seems to support more on-demand / spiky workflows at a much larger scale, whereas a single seat authenticated to Claude Code has a controlled, set capacity and is generally more predictable and, as a result, easier to price?

The API request method might have no cap, but they do cap Claude Code even on Max licenses, so it's easier to throttle as well if needed to control costs. Seems straightforward to me at any rate. Kinda like reserved-instance vs. spot pricing models?

◧◩◪◨
42. Curiou+3V1[view] [source] [discussion] 2026-01-23 16:18:39
>>Aeolun+XX
They'll also send you a free month of the 100 dollar plan if you unsubscribe to try and get you back.
replies(1): >>0x9e37+c72
◧◩◪◨⬒
43. Brian_+DY1[view] [source] [discussion] 2026-01-23 16:36:03
>>ayewo+6y
It's like the very first days of computers at all. IBM supplied both the hardware and the software, and the software did not make the most efficient use of the hardware.

Which was nothing new itself of course. Conflicts of interest didn't begin with computers, or probably even writing.

◧◩
44. rubenf+KZ1[view] [source] [discussion] 2026-01-23 16:41:38
>>Millio+uc
The best thing about the max plan has been that I don’t have “range anxiety” with my workflows. This opens me to trying random things on a whim and explore the outer limits of the LLM capabilities more.
◧◩
45. rubenf+k02[view] [source] [discussion] 2026-01-23 16:44:17
>>bmurph+P9
Hmm, are you using the /usage command? There’s also the ccusage package that I find useful.
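
Concretely, the two options look something like this. (The ccusage subcommands are from memory of a third-party package that reads Claude Code's local logs; treat the exact names as assumptions.)

```shell
# Inside a Claude Code session: show usage against your plan's limits
/usage

# From a normal terminal: summarize the local token-usage logs
npx ccusage@latest daily    # per-day token totals and estimated cost
npx ccusage@latest blocks   # usage grouped into 5-hour rate-limit windows
```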
replies(1): >>bmurph+wi2
◧◩◪◨⬒
46. 0x9e37+c72[view] [source] [discussion] 2026-01-23 17:18:45
>>Curiou+3V1
Or (tested on the Max 20x plan): when the subscription renewal fails for any reason (they try to charge your card multiple times), you're still in for 2+ weeks till it dies.
◧◩◪
47. bmurph+wi2[view] [source] [discussion] 2026-01-23 18:15:55
>>rubenf+k02
Thanks. I don't know why, but I just couldn't find that command. I spent so much time trying to understand what /context and the other commands were showing me that I got lost in the noise.
◧◩
48. thadk+Uw7[view] [source] [discussion] 2026-01-25 17:11:00
>>vbezhe+g8
This November video talks through, via the Cursor renegotiation, how in important ways a token != a token (not even within types like reasoning or output, or within one model), because attention can lead to expensive logic loops. https://www.oreilly.com/videos/live-with-tim/0642572259488/ https://www.youtube.com/watch?v=6AXMO2dW0LI&list=PL055Epbe6d...
◧◩
49. codera+Mhc[view] [source] [discussion] 2026-01-27 01:08:56
>>heavys+ua
at least with physical drugs you know how many grams you bought, with cc it’s all hidden behind the curtain.
[go to top]