It completely blindsided me, and I suddenly felt so betrayed. I was paying $200/mo to fully use a service they offered, and then without warning I apparently did something wrong and had no recourse. No one to ask, no one to talk to.
My advice is to be extremely wary of Anthropic. They paint themselves as the underdog/good guys, but they are just as faceless as the rest of them.
Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.
I have pro subscriptions to all three major providers and have been planning to drop one eventually. It sounds like Anthropic may end up making that decision for me, even though (or perhaps because) I've been using the Claude CLI more than the others lately.
What I'd really like to do is put a machine in the back room that can do 100 tps or more with the latest, greatest DeepSeek or Kimi model at its native quantization. That's the only way to avoid being held hostage by the big 3 labs and their captive government, which I'm guessing will react to the next big Chinese model release by prohibiting its deployment by any US hosting providers.
Unfortunately it will cost about $200K to do this locally. The smart money says (but doesn't act like) the "AI bubble" will pop soon. If the bubble pops, that hardware will be worth 20 cents on the dollar if I'm lucky, making such an investment seem reckless. And if the bubble doesn't pop, then it will probably cost $400K next year.
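For what it's worth, the memory math alone explains that price tag. A rough sizing sketch in Python; the parameter counts come from the public model cards, the 141 GB per-GPU figure assumes an H200-class card, and the 25% overhead for KV cache and activations is my own guess:

    # Rough sketch: how many high-memory GPUs it takes just to hold a big MoE
    # at its native FP8 weights. Pure arithmetic, not a benchmark.
    def gpus_needed(total_params, bytes_per_param=1.0, overhead=1.25, gpu_mem_gb=141):
        weight_gb = total_params * bytes_per_param / 1e9   # FP8 ~ 1 byte/param
        return weight_gb * overhead / gpu_mem_gb

    print(f"Kimi-K2-class (~1T params):       ~{gpus_needed(1.0e12):.1f} GPUs")
    print(f"DeepSeek-V3-class (~671B params): ~{gpus_needed(671e9):.1f} GPUs")
    # => roughly 6-9 GPUs, i.e. a full 8-GPU server, before you even think
    #    about the throughput needed to actually hit 100 tps.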
First-world problems, I guess...
https://forums.developer.nvidia.com/t/6x-spark-setup/354399/...
Or a single user at about 10 tps.
This is probably around $30k if you go with the 1 TB models.
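That ~10 tps figure lines up with a bandwidth-bound back-of-envelope. A sketch, assuming a Kimi-K2-class MoE with ~32B active params per token at FP8 and the Spark's published ~273 GB/s memory bandwidth; the real number depends heavily on interconnect and KV-cache traffic:

    # Decode speed is mostly limited by how fast you can stream the active
    # expert weights for each token out of memory. Inputs are assumptions.
    active_bytes_per_token = 32e9   # ~32B active params x ~1 byte/param (FP8)
    node_bandwidth = 273e9          # DGX Spark LPDDR5x, ~273 GB/s
    nodes = 6

    single_node_bound = node_bandwidth / active_bytes_per_token
    aggregate_bound = (node_bandwidth * nodes) / active_bytes_per_token

    print(f"single-node bound:  ~{single_node_bound:.0f} tok/s")   # ~9
    print(f"ideal 6-node bound: ~{aggregate_bound:.0f} tok/s")     # ~51
    # Reality lands between the two once interconnect hops, attention/KV-cache
    # reads, and scheduling overhead are counted; ~10 tok/s is plausible.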
When people talk about the cost and hardware requirements of running AI at this level, most people can't grasp the scale of what they're describing.