It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g., they do not support banning state AI laws [2]).
- Committed to no ads.
- Willing to risk a Defense Department contract over objections to its use for lethal operations [1].
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It is inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
[2] https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...
[3] https://investors.palantir.com/news-details/2024/Anthropic-a...
They’re moving towards becoming load-bearing infrastructure, and at that point the answers to specific questions about what you should do about it become rather situational.
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
https://www.anthropic.com/news/anthropic-s-recommendations-o...
Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, even though it's 100% written by AI, and the creator says it will never be open sourced. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
- Blocking access to others (Cursor, OpenAI, Opencode)
- Asking for more regulation of hardware chips, so that they don't face good competition from Chinese labs
- Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.
At this scale, I don't think there are good companies. My hope is on open models, and the only labs doing good on that front are Chinese labs.
Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.
I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.
LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like Photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do, that would magnify the potential damage tenfold.
Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.
This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
> Asking to regulate hardware chips more
> partnerships with [the military-industrial complex]
> only labs doing good in that front are Chinese labs
That last one is a doozy.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
Similar to Oracle vs Postgres, or some obscure closed-source cache vs Redis. One day I hope we will have very good SOTA open models that closed models compete to catch up with (not saying Oracle is playing catch-up with Pg).
Anthropic being a PBC probably helps.
There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white-collar workers, and they have engineers playing the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.
When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.
Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on if they did it on purpose, or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?
To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.
[0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*
Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage pattern "abuses" that you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore restricting Opencode wouldn't really save Anthropic money, as it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to purely be one to restrict subscribers from using competing tools and enforce a vertically-integrated ecosystem.
In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.
In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in and then enshittify the product later, they are certainly acting like it.
The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.
I don't know whether Anthropic realizes how much these moves are pissing off their most loyal fanbase of conscientious consumers. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.
Isn't that a distinction without a difference? Every real world company has employees, and those people do have values (well, except the psychopaths).
The obvious assumed premise of this argument is that Anthropic are actually on the path toward creating super-intelligent AGI. Many people, including myself, are skeptical of this. (In fact I would go farther - in my opinion, cosplaying as though their AI is so intelligent that it's dangerous has become a marketing campaign for Anthropic, and their rhetoric around this topic should usually be taken with a grain of salt.)
You don't. Companies want people to think they have values. But companies are not people. Companies exist to earn money.
> That hasn't happen with Anthropic for me.
Yet.
As it’s often said: there is no such thing as a free product; you are the product. AI training is expensive, even for Chinese companies.
The first imperative is that a company must survive past its employees. A company is an explicit legal structure designed to survive past the initial people in the company. A company is _not_ the employees; it is what survives past the employees' employment.
The second imperative is the diffusion of responsibility. A company becomes the responsible party for actions taken, not individual employees. This is part of the reason we allow companies to survive past employees, because their obligations survive as well.
This leads to individual employees taking actions for the company against their own moral code for the good of the company.
See also The Corporation (2003 film) and Meditations On Moloch (2014)[0].
[0] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Google delivered on their promise, and for OpenAI, well, it's too soon, but it's looking good.
The name OpenAI and its structure is a relic from a world where the sentiment was to be heavily preoccupied and concerned by the potential accidental release of an AGI.
Now that it's time for products, the name and the structure are no longer serving the goal.
(Also, wealth maximization is a dumb goal and not how successful companies work. Cynicism is a bad strategy for being rich because it's too shortsighted.)
Claude is somewhat sycophantic but nowhere near 4o levels. (or even Gemini 3 levels)