Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their YouTube channel.
It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).
- Have committed to no ads.
- Willing to risk a Defense Department contract over objections to use for lethal operations [1].
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
[2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...
[3]https://investors.palantir.com/news-details/2024/Anthropic-a...
They’re moving towards becoming load-bearing infrastructure, and at that point answering specific questions about what you should do about it becomes rather situational.
Obviously it's a play, homing in on privacy/anti-ad concerns, like a Mozilla-type angle, but really it's a huge ad buy just to slag off the competitors. Worth the expense just to drive that narrative?
Ads playlist https://www.youtube.com/playlist?list=PLf2m23nhTg1OW258b3XBi...
I wonder how they can get away without showing ads when ChatGPT apparently has to. Will the enterprise business be so profitable that ads are not required?
Maybe OpenAI is going for something different - democratising access for the vast majority of people. Remember that ChatGPT is what people know about and use the free version of. Who's to say that running ads while also providing more access is the wrong choice?
Also, Claude doesn't hold a candle to ChatGPT in search. In my experience, ChatGPT is just way better at deep searches through the internet than Claude.
(Props to them for doing this; I don't know how it's sustainable long-term for them though ... especially given they want to IPO and there will be huge revenue/margin pressures.)
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
> AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce
This is exactly what GPT-5 was about. By tweaking both the model selector (thinking/non-thinking) and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skills, and all the things I valued in o3. This was the point where I dumped OpenAI and went with Claude.
This business model issue is a subtle one, but it's a key reason why an advertising revenue model is not compatible (or competitive!) with "getting the best mental tools": margin maximization selects against businesses optimizing for intelligence.
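To make the cost-capping mechanism concrete, here's a minimal sketch of a per-turn budget router. Everything in it - model names, prices, token caps - is hypothetical; it shows the general shape of such a router, not how OpenAI actually implements it.

    # Hypothetical per-turn cost capping, as described above.
    # All names, prices, and caps are made up for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Model:
        name: str
        usd_per_1k_tokens: float   # hypothetical price
        max_thinking_tokens: int   # reasoning-token cap per turn

    MODELS = [  # ordered from most to least capable
        Model("deep-thinking", 0.020, 32_000),
        Model("sparse-thinking", 0.002, 2_000),
        Model("non-thinking", 0.0005, 0),
    ]

    def worst_case_cost(m: Model, answer_tokens: int = 1_000) -> float:
        """Upper bound on what one turn can cost with this model."""
        return (m.max_thinking_tokens + answer_tokens) / 1_000 * m.usd_per_1k_tokens

    def route(turn_budget_usd: float) -> Model:
        """Pick the most capable model whose worst-case turn fits the budget."""
        for m in MODELS:
            if worst_case_cost(m) <= turn_budget_usd:
                return m
        return MODELS[-1]  # nothing fits: degrade to the cheapest model

    print(route(turn_budget_usd=0.01).name)  # -> "sparse-thinking"

A cap like this buys cost predictability at the price of answer quality, which is exactly the trade-off described above.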
It's great that Anthropic is targeting the businesses of the world. It's a little insincere to then declare "no ads", as if that decision would obviously be the same if the bulk of their users were non-paying.
There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don't know who they think they are helping by highlighting how to do it poorly.
https://www.anthropic.com/news/anthropic-s-recommendations-o...
Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it's 100% written by AI, the creator says it never will be open-sourced. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
A lot of people are OK with ad-supported free tiers.
(Also, is it possible to do ads in a privacy-respecting way, or do people just object to ads across the board?)
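On the first question: the usual privacy-respecting candidate is contextual targeting, where the ad is chosen from the current query alone, with no profile and no tracking ID. A toy sketch, with an invented inventory and scoring:

    # Contextual ad matching: the ad depends only on the query text.
    # Nothing about the user is stored or looked up. Inventory is invented.
    AD_INVENTORY = {
        "hiking boots": {"trail", "hike", "boots", "mountain"},
        "standing desk": {"desk", "office", "posture", "ergonomic"},
    }

    def pick_ad(query: str) -> str | None:
        words = set(query.lower().split())
        score, ad = max((len(words & kws), ad) for ad, kws in AD_INVENTORY.items())
        return ad if score > 0 else None

    print(pick_ad("best boots for a mountain trail"))  # -> "hiking boots"

The harder objections upthread (incentive distortion, ads being inseparable from answers) apply even to this.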
- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998
- Blocking access to others (Cursor, OpenAI, Opencode)
- Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs
- Partnerships with Palantir and the DoD, as if it wasn't obvious how these organizations use technology and for what purposes.
At this scale, I don't think there are good companies. My hope is in open models, and the only labs doing good on that front are the Chinese labs.
You can see the very different response by OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.
For Anthropic to be proactive in saying they will not pursue ad-based revenue is, I think, not just "one of the good guys" positioning; it suggests they may be stabilizing on a business model of both seat- and usage-based subscriptions.
Either way, both companies are hemorrhaging money.
Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.
I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.
LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do that would magnify the potential damage tenfold.
Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.
This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
> Asking to regulate hardware chips more
> partnerships with [the military-industrial complex]
> the only labs doing good on that front are the Chinese labs
That last one is a doozy.
But I’m happy with this position and will cancel my ChatGPT subscription and push my family towards Claude for most things. This taste effect is what I think pushes Apple devices into households: power users making endorsements.
And I think that excess margin is enough to make up for the forgone ad revenue opportunity.
I don't think they have an accurate model for what they're doing - they're treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They're not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.
The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.
https://x.com/ns123abc/status/2019074628191142065
In any case, they draw undue attention to OpenAI rather than themselves. Not good advertising.
Both OpenAI and Anthropic should start selling compute devices instead. There is nothing stopping open-source LLMs from eating their lunch mid-term.
Great by Anthropic, but I put basically no long term trust in statements like this.
> ...but including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.
Sadly, with my disillusionment with the tech industry, plus the trend of the past 20 years, this smacks of Larry Page's early statements about how bad advertising could distort search results and Google would never do that. Unsurprisingly, I am not able to find the exact quote with Google.
Sorry, but this is silly; nothing suggests this at all.
Similar to Oracle vs Postgres, or some obscure closed-source cache vs Redis. One day I hope we will have very good SOTA open models that closed models compete to catch up with (not saying Oracle is playing catch-up with Pg).
I wish the financial aspects were different, because Anthropic is absolutely correct about ads being antithetical to a good user experience.
In this Animal Farm-style Orwellian cycle we’ve been going through, at least they start here, unlike others.
I for one commend this, but stay vigilant.
However, I do think we need to take Anthropic's word with a grain of salt, too. That they're fully working in the user's interest has yet to be proven; that trust would take a lot of effort to earn. Once a company intends to go public or does, incentives change: investors expect money, and throwing your users under the bus is a tried and tested way of increasing shareholder value.
I use it for codegen too, but I easily have 20x more brainstorming conversations than code projects.
Most non-tech people I talk to are finding value with it for traditional things. The main one I've seen flourish is travel planning. Booking became super easy, but full itinerary planning for a trip (hotels, restaurants, day trips/activities, etc.) has been largely a manual thing that I see a lot of non-tech people using LLMs for. It's very good for open-ended plans too, which the travel sites have been horrible at. For instance, "I want to plan a trip to somewhere warm and beachy, I don't care about the dates or exactly where" - maybe I care about the budget up front, but most things I'm flexible on. Those kinds of things work well as a conversation.
Anthropic is mainly focusing on B2B/enterprise and tool-use cases. In terms of active users I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.
Littering a potentially quality product with ads which one cannot easily separate is what the evil is.
Anthropic being a PBC probably helps.
I agree with this - I'm not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I'm worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.
There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white-collar workers, and they have engineers playing the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.
(My gut feeling tells me Claude Code is currently underpriced with regards to inference costs. But that's just a gut feeling...)
Facts don't care about your feelings
When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.
Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on if they did it on purpose, or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?
https://www.wheresyoured.at/why-everybody-is-losing-money-on...
https://www.economist.com/business/2025/12/29/openai-faces-a...
https://finance.yahoo.com/news/openais-own-forecast-predicts...
Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.
https://epoch.ai/gradient-updates/can-ai-companies-become-pr...
Their AWS spend being higher than their revenue might hint at the same.
Nobody has reliable data, I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.
To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.
[0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*
Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it lacks some fancy Claude Code features like plan files and background agents). Any subscription usage-pattern "abuses" you can pull off with Opencode can also be done by running Claude Code automatically from the CLI, as sketched below. Therefore restricting Opencode wouldn't really save Anthropic money; it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to be purely about restricting subscribers from using competing tools and enforcing a vertically-integrated ecosystem.
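To illustrate, here's a rough sketch of driving Claude Code from a script. It assumes the claude CLI's headless -p/--print mode; exact flags may vary by version. Anything a third-party client can automate, a loop like this can too.

    # Sketch: scripted, non-interactive Claude Code. Assumes the `claude`
    # binary is on PATH and supports -p (print/headless mode).
    import subprocess

    def ask_claude_code(prompt: str) -> str:
        """Run one headless Claude Code turn and return its stdout."""
        result = subprocess.run(
            ["claude", "-p", prompt],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    for task in ["add type hints to utils.py", "write tests for utils.py"]:
        print(ask_claude_code(task))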
In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.
In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in and then enshittify the product later, they are certainly acting like it.
The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.
I don't know whether Anthropic knows that they are pissing off their most loyal fanbase of conscientious consumers a lot with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.
Plus, I’m not a huge fan of Sam Altman.
I end up using ChatGPT for general coding tasks because of the limited session/weekly limits Claude Pro offers, and it works surprisingly well.
The best is IMO to use them both. They complement each other.
Isn't that a distinction without a difference? Every real world company has employees, and those people do have values (well, except the psychopaths).
The point about filtering signal vs. noise in search engines can’t really be stated enough. At this point using a search engine and the conventional internet in general is an exercise in frustration. It’s simply a user hostile place – infinite cookie banners for sites that shouldn’t collect data at all, auto play advertisements, engagement farming, sites generated by AI to shill and produce a word count. You could argue that AI exacerbates this situation but you also have to agree that it is much more pleasant to ask perplexity, ChatGPT or Claude a question than to put yourself through the torture of conventional search. Introducing ads into this would completely deprive the user of a way of navigating the web in a way that actually respects their dignity.
I also agree in the sense that the current crop of AIs do feel like a space to think, as opposed to a place where I am being manipulated, controlled, or treated like some sheep in a flock to be sheared for cash.
But combined with the other projects Anthropic has pursued (e.g. around understanding bias and explaining "how the model is thinking as it is") and decisions it has made, I'm happy with the course they're plotting. They seem consistently upstanding, thoughtful, and respectful. I want to commend them and earnestly say: Keep up the good work!
Google changed all that, and put a clear wall between organic results and ads. They consciously structured the company like a newspaper, to prevent the information side from being polluted and distorted by the money-making side.
Here's a snip from their IPO letter [0]:
Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.
Anthropic's statement reads the same way, and it's refreshing to see them prioritize long-term values like trust over short-term monetization.
It's hard to put a dollar value on trust, but even when they fall short of their ideals, it's still a big differentiator from competitors like Microsoft, Meta and OpenAI.
I'd bet that a large portion of Google's enterprise value today can be traced to that trust differential with their competitors, and I wouldn't be surprised to see a similar outcome for Anthropic.
Don't be evil, but unironically.
[0] https://abc.xyz/investor/founders-letters/ipo-letter/default...
But if nothing else, I can appreciate Anthropic's current values, and hope they will last as long as possible...
Forgive me if I am not.
It would be ironic if they put ads in their models to say they won't run ads.
The obvious assumed premise of this argument is that Anthropic are actually on the path toward creating super-intelligent AGI. Many people, including myself, are skeptical of this. (In fact I would go farther - in my opinion, cosplaying as though their AI is so intelligent that it's dangerous has become a marketing campaign for Anthropic, and their rhetoric around this topic should usually be taken with a grain of salt.)
If you need to search the internet on a topic that is full of unknown unknowns for you, they're a pretty decent way to get a lay of the land, but beyond that, off to Kagi (or Google) you go.
Even worse is that the results are inconsistent. I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.
You cannot trust answers from an LLM.
Exactly this. Show me the incentive, and I'll show you the outcome, but at least I'm glad we're getting a bit more time ad-free.
It's not a perfect solution because you need the discipline/intuition to do that, and not blindly trust the summary.
If someone is trying to influence your results, running the inference on your own infrastructure prevents some attack vectors but not some of the more plausible and worrying ones.
Are you sure? Both Gemini and ChatGPT gave me consistent answers 3 times in a row, even if the two versions are slightly different.
Their answers are in line with this version:
You don't. Companies want people to think they have values. But companies are not people. Companies exist to earn money.
> That hasn't happen with Anthropic for me.
Yet.
As it’s often said: there is no such thing as a free product; you are the product. AI training is expensive even for Chinese companies.
Palantir, to me, is the weaponization of big data, where advanced analytics are used to target vulnerable populations. Not just abroad, but here against its own citizens. It is the dystopic enabler that we have been warned about.
Palantir and the words from its leadership seem to me to be in direct opposition to parts of the Constitution doc that Anthropic hold up to show their ethics and seriousness.
Ok, I know I'm describing the past with rosy glasses. After all, the Internet started as a DARPA project. But still, current reality is itself rather dystopic in many ways.
> What I think is clear is they have to build an advertising product, and the reason they have to build an advertising product is any consumer Internet product has to be advertising, because it’s such a beneficial model to everyone involved, and the reason it’s so beneficial is you get to indefinitely and infinitely increase average revenue per user without any worries about price elasticity, because the entire increase in average revenue per user is borne by the advertisers who are paying it willingly because they’re getting a positive return on their investment, and everyone’s using it for free so you can reach the whole world. Then what happens with that is once you get that model going, you have a massive R&D advantage, because you have so much more money coming in than anyone who doesn’t have that cycle or who has to charge users for it.
https://stratechery.com/2026/ads-in-chatgpt-why-openai-needs...
> This point, more than anything else, explains why the company so desperately needs an advertising model. Advertising is the only potential business model that can meaningfully bend the revenue curve such that the company can not just fund its compute but gain leverage on it, for all of the reasons I laid out before: first, advertising increases the breadth of the business, in that you can offer a better product to more people, increasing usage and expanding inventory. Second, advertising increases the depth of the business, in that there is infinite upside in terms of average revenue per user: more usage means more inventory on one hand, and building out the capability for effective targeting and high conversion rates increases the amount that advertisers are willing to pay — even as the cost to the user remains the same (ideally free).
It's valuable to remember that advertisers will pay more per user than users will, and that's hard to beat in a competitive market.
The first imperative is that a company must survive past its employees. A company is an explicit legal structure designed to survive past the initial people in the company. A company is _not_ its employees; it is what survives past the employees' employment.
The second imperative is the diffusion of responsibility. A company becomes the responsible party for actions taken, not individual employees. This is part of the reason we allow companies to survive past employees, because their obligations survive as well.
This leads to individual employees taking actions for the company against their own moral code for the good of the company.
See also The Corporation (2003 film) and Meditations On Moloch (2014)[0].
[0] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Ads are more than obsolete.
Using ads in AI today is like printing flyers when the Internet started, or sending email ads 30 years ago (of course Google could have promised: "we will not send spam to your Gmail mailbox..." :).
"Ads" aim to influence your behavior: AI is a doing much much more with no need for ads (Claude included)
Who do they think believes the whole "don't be evil" thing in 2026?
We know what's around the corner: enshittification, loss of trust, frog boiling, account restrictions and upsells, advertising, degradation of service, data sold for advertising, and worse.
Y'ain't kidding anyone with this stuff. You're only providing screenshots for future memes.
Google delivered on their promise, and as for OpenAI - well, it's too soon to say, but it's looking good.
The name OpenAI and its structure are relics from a world where the prevailing sentiment was heavy preoccupation with, and concern about, the potential accidental release of an AGI.
Now that it's time for products, the name and the structure no longer serve the goal.
1) Yes, they are absolutely useless in a consumer setting.
2) If you want to be a software developer, you absolutely need to know how to understand/interact with one, and you more than likely will need to understand things like https://continue.dev.
I am no longer in software development due to my body slowly (quickly) dying, however I see it all from the sidelines:
1) New tech was rushed to the front lines way too quickly by big tech.
2) Big (and small) tech rushed layoffs way too fast rather than let us devs explore the advantages vs. disadvantages.
3) Companies blame "AI" (LLMs) for layoffs.
4) Most senior devs (including myself) soundly reject AI due to the above.
5) A new generation of devs uses AI tools; some struggle occurs where morons don't bother reviewing code that was written by an auto-completion engine.
6) We nerds begin to understand the usefulness of LLMs for "the boring part".
Not a shareholder of any company. I'm permanently disabled. Just watching this stuff from the sidelines.
Of course they are losing money in total. They are not, however, losing money per marginal token.
It’s trivial to see this by looking at the market clearing price of advanced open source models and comparing to the inference prices charged by OpenAI.
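A back-of-the-envelope version of that comparison, with made-up round numbers (these are not anyone's real prices or costs):

    # Illustrative only: every number here is invented.
    open_weight_market_price = 0.50   # $/M tokens a competitive open-model host charges
    hosting_margin = 0.20             # thin margin assumed at a market-clearing price
    inference_cost_estimate = open_weight_market_price * (1 - hosting_margin)  # ~$0.40

    closed_model_price = 5.00         # $/M tokens a frontier lab charges

    # If serving costs are broadly comparable, the per-marginal-token
    # gross margin is large even before overhead:
    print(f"~${closed_model_price - inference_cost_estimate:.2f} gross per million tokens")

On this view, the losses come from training runs, research, and free users, not from serving paid tokens.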
(Also, wealth maximization is a dumb goal and not how successful companies work. Cynicism is a bad strategy for being rich because it's too shortsighted.)
Claude is somewhat sycophantic but nowhere near 4o levels. (or even Gemini 3 levels)
Of course I realise they would never do something like that. But why not? Well, because they might decide they want to run ads...
I then pointed out this same inconsistency to her, and that she shouldn't put stock in what Gemini says. Testing it myself, it would give results between 47°C and 57°C. And sometimes it would just trip out and give the health-approved temperature, which is 74°C (!).
Edit: just tested it again and it still happens. But inconsistency isn't a surprise for anyone who actually knows how LLMs work.
That's my entire point. Even adding an "is" or "the" can get you way different advice. No human would give you different info when you ask "what's the waterfowl's best cooking temperature" vs "what is waterfowl's best roasting temperature".
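Mechanically, that's just sampling: chat endpoints default to a nonzero temperature, so repeated runs of near-identical prompts legitimately diverge. A quick sketch, using the OpenAI Python SDK purely as an example (any chat API behaves similarly):

    # With temperature > 0 the model samples from its output distribution,
    # so five runs of the same prompt can yield five different answers.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    for prompt in ("what's the waterfowl's best cooking temperature",
                   "what is waterfowl's best roasting temperature"):
        for _ in range(5):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,  # the default; 0 makes runs more repeatable
            )
            print(reply.choices[0].message.content)

Even at temperature 0, tiny prompt changes shift the model's context and can still flip the answer.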