Could have just turned a blind eye.
If a company is going to automate our jobs, we shouldn't be giving them money and data to do so. They're using us to put ourselves out of work, and they're not giving us the keys.
I'm fine with non-local, open weights models. Not everything has to run on a local GPU, but it has to be something we can own.
I'd like a large, non-local Qwen3-Coder that I can launch in a RunPod or similar instance. I think on-demand non-local cloud compute can serve as a middle ground.
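Client-side it wouldn't take much; a minimal sketch, assuming a vLLM-style OpenAI-compatible server running on the rented box (the URL and model id below are placeholders, not a real deployment):

    # Minimal sketch: point the standard OpenAI client at a self-hosted
    # OpenAI-compatible server (e.g. vLLM) on a rented GPU instance.
    # The base_url and model id are placeholders/assumptions.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://my-rented-gpu.example.com/v1",  # hypothetical endpoint
        api_key="unused",  # self-hosted servers typically ignore the key
    )

    resp = client.chat.completions.create(
        model="Qwen/Qwen3-Coder-480B-A35B-Instruct",  # assumed model id
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(resp.choices[0].message.content)

You'd pay for GPU-hours instead of a subscription, but the weights and the data path stay under your control.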
(Edit due to rate-limiting: I see, thanks -- I wasn't aware there was more than one token type.)
> hitting that limit is within the terms of the agreement with Anthropic
It's not, because the agreement says you can only use CC.
That's not the product you buy when you buy a Claude Code token, though.
Selling dollars for $0.50 does that. It sounds to me like they have a business model issue.
It's like Apple: you can use macOS only on their Macs, iOS only on their iPhones, etc. But at least in Apple's case, you pay (mostly) for the hardware, while the software it comes with is "free" (as in free beer).
It's within their capability to provision for higher usage by alternative clients. They just don't want to.
If people start using the Claude Max plans with other agent harnesses that don't use the same kinds of optimizations, the economics may no longer work out.
(But I also buy that they're going for horizontal control of the stack here and banning other agent harnesses was a competitive move to support that.)
Without knowing the numbers it's hard to tell whether the business model for these AI providers actually works, and I suspect it probably doesn't at the moment. But selling an oversubscribed product with baked-in usage assumptions is a functional business model in a lot of spaces (for varying definitions of functional, I suppose). I'm surprised this is so surprising to people.
I've had a similar experience with opencode, but I find that works better with my local models anyway.
There are already many serious concerns about sharing code and information with 3rd parties, and those Chinese open models are dangerously close to destroying their entire value proposition.
This confused me for a while: two separate "products" which are sold differently, but can be used by the same tool.
They seem to have started rejecting 3rd party usage of the sub a few weeks ago, before Claw blew up.
By the way, does anyone know about the Agents SDK? Apparently you can use it with an auth token, is anyone doing that? Or is it likely to get your account in trouble as well?
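For reference, a minimal sketch of what I mean, assuming the Python claude-agent-sdk package; as I understand it, it drives the Claude Code CLI under the hood and uses whatever credentials that CLI has, and whether subscription auth is permitted there is exactly the open question:

    # Minimal sketch, assuming the claude-agent-sdk Python package.
    # It shells out to the Claude Code CLI, so it picks up whatever auth
    # that CLI has (API key or logged-in subscription credentials);
    # whether the latter is allowed is the question above.
    import anyio
    from claude_agent_sdk import query

    async def main():
        async for message in query(prompt="Summarize this repo's README."):
            print(message)

    anyio.run(main)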
(There probably is, but I found it very hard to make sense of the UI and how everything works. Hard to change models, no chat history etc.?)
Being a common business model and being a functional one are two different things. I agree they're prevalent, but they're actively user-hostile in nature. You're essentially saying that if people use your product at the advertised limit, you'll punish them. I get why businesses do it, but it's an adversarial business model.
The problem is, there's no clear everyman value like Uber has. The stories I see of people finding value are sparse, and they tend to come from the POV of either technosexuals or already-strong developer whales leveraging the bootstrappy power.
If AI were seriously providing value, orgs like Microsoft wouldn't be pushing out versions of Windows that can't restart.
It clearly is a niche product, unlike Uber, but it's definitely being invested in as if it were a universal product.
It'll be interesting to see what OpenAI and Anthropic tell us about this when they go public (seems likely late this year, possibly along with SpaceX).
It's not. The idea is that the majority of subscribers don't hit the limit, so they're selling those users a dollar for two. But there's a minority who do hit the limit, and to them they're effectively selling a dollar for 50 cents; the aggregate numbers can still be positive.
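Toy numbers (entirely made up) just to illustrate the aggregate math:

    # Made-up numbers showing how an oversubscribed plan can be
    # profitable in aggregate while losing money on heavy users.
    price = 100        # monthly price per subscriber, $
    light_users = 90   # stay well under the limit
    heavy_users = 10   # hit the limit ("a dollar for 50c")
    light_cost = 30    # compute cost per light user, $
    heavy_cost = 200   # compute cost per heavy user, $

    revenue = (light_users + heavy_users) * price               # 10000
    cost = light_users * light_cost + heavy_users * heavy_cost  # 4700
    print(revenue - cost)  # 5300: positive overall despite the whales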