zlacker

[parent] [thread] 19 comments
1. waldop+(OP)[view] [source] 2026-02-04 19:01:29
I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. That's why folks are creating llms.txt files: to become more discoverable by ChatGPT specifically.

You can see OpenAI's very different response: https://openai.com/index/our-approach-to-advertising-and-exp.... OpenAI says it will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

For Anthropic to proactively say they will not pursue ad-based revenue is, I think, not just playing "one of the good guys"; it suggests they may be stabilizing on a business model of both seat- and usage-based subscriptions.

Either way, both companies are hemorrhaging money.

replies(3): >>johnsi+p9 >>guidoi+Lh >>Gud+Fr
2. johnsi+p9[view] [source] 2026-02-04 19:48:23
>>waldop+(OP)
Both companies are making bank on inference
replies(4): >>ehsanu+Ha >>lysace+fd >>waldop+Wf >>exitb+7g
3. ehsanu+Ha[view] [source] [discussion] 2026-02-04 19:56:38
>>johnsi+p9
Could you substantiate that? Does that take into account training and staffing costs?
replies(1): >>ihsw+9d
4. ihsw+9d[view] [source] [discussion] 2026-02-04 20:07:08
>>ehsanu+Ha
The parent specifically said inference, which does not include training and staffing costs.
replies(1): >>ehsanu+M81
5. lysace+fd[view] [source] [discussion] 2026-02-04 20:07:48
>>johnsi+p9
That is the big question. Got reliable data on that?

(My gut feeling tells me Claude Code is currently underpriced with regards to inference costs. But that's just a gut feeling...)

replies(2): >>simian+Vi >>tvink+el
6. waldop+Wf[view] [source] [discussion] 2026-02-04 20:21:13
>>johnsi+p9
You may not like these sources, but both the tomato-throwers and the green-visor crowds agree they are losing money. How and when they make up the difference is open to speculation.

https://www.wheresyoured.at/why-everybody-is-losing-money-on...
https://www.economist.com/business/2025/12/29/openai-faces-a...
https://finance.yahoo.com/news/openais-own-forecast-predicts...

replies(2): >>mh2266+M51 >>mediam+M91
7. exitb+7g[view] [source] [discussion] 2026-02-04 20:21:42
>>johnsi+p9
Maybe on the API, but I highly doubt that the coding agent subscription plans are profitable at the moment.
replies(2): >>tvink+ok >>lawren+vW
8. guidoi+Lh[view] [source] 2026-02-04 20:28:27
>>waldop+(OP)
> ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.

replies(1): >>panark+7w
9. simian+Vi[view] [source] [discussion] 2026-02-04 20:32:50
>>lysace+fd
> If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis), it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.

https://epoch.ai/gradient-updates/can-ai-companies-become-pr...
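The quoted margin calculation is easy to reproduce; here's a minimal sketch with illustrative numbers (not figures from the article):

```python
def gross_margin(revenue: float, compute_cost: float) -> float:
    """Gross margin as a fraction of revenue: (revenue - cost of compute) / revenue."""
    return (revenue - compute_cost) / revenue

# Illustrative only: $10B revenue against $5B compute spend gives the ~50%
# figure quoted above; a typical 70%-margin software company would spend $3B.
print(f"{gross_margin(10e9, 5e9):.0%}")  # → 50%
print(f"{gross_margin(10e9, 3e9):.0%}")  # → 70%
```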

replies(1): >>lysace+Ij
10. lysace+Ij[view] [source] [discussion] 2026-02-04 20:36:19
>>simian+Vi
The context of that quote is OpenAI as a whole.
11. tvink+ok[view] [source] [discussion] 2026-02-04 20:39:14
>>exitb+7g
For sure not
12. tvink+el[view] [source] [discussion] 2026-02-04 20:42:11
>>lysace+fd
https://www.wheresyoured.at/costs/

Their AWS spend being higher than their revenue might hint at the same.

Nobody has reliable data; I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.

13. Gud+Fr[view] [source] 2026-02-04 21:10:29
>>waldop+(OP)
Disagree.

I end up using ChatGPT for general coding tasks because of the session/weekly limits on Claude Pro, and it works surprisingly well.

IMO the best approach is to use both. They complement each other.

replies(1): >>stavro+ZI
14. panark+7w[view] [source] [discussion] 2026-02-04 21:32:01
>>guidoi+Lh
Before Google, web search was a toxic stew of conflicts of interest. It was impossible to tell if search results were paid ads or the best possible results for your query.

Google changed all that, and put a clear wall between organic results and ads. They consciously structured the company like a newspaper, to prevent the information side from being polluted and distorted by the money-making side.

Here's a snip from their IPO letter [0]:

> Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.

Anthropic's statement reads the same way, and it's refreshing to see them prioritize long-term values like trust over short-term monetization.

It's hard to put a dollar value on trust, but even when they fall short of their ideals, it's still a big differentiator from competitors like Microsoft, Meta and OpenAI.

I'd bet that a large portion of Google's enterprise value today can be traced to that trust differential with their competitors, and I wouldn't be surprised to see a similar outcome for Anthropic.

Don't be evil, but unironically.

[0] https://abc.xyz/investor/founders-letters/ipo-letter/default...

replies(1): >>AceJoh+vA
15. AceJoh+vA[view] [source] [discussion] 2026-02-04 21:55:21
>>panark+7w
I agree. Having watched Google shift from its younger idealistic values to its current corrupted state, I can't help but be cynical about Anthropic's long-term trajectory.

But if nothing else, I can appreciate Anthropic's current values, and hope they will last as long as possible...

16. stavro+ZI[view] [source] [discussion] 2026-02-04 22:38:07
>>Gud+Fr
I use OpenCode and I made an "architect" agent that uses Opus to make a plan, then gives that plan to a "developer" agent (with Sonnet) that implements it, and a "reviewer" agent (Codex) reviews it in the end. I've gotten much better results with this than with straight up Opus throughout, and obviously hit the limits much less often as well.
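That plan → implement → review pipeline can be sketched in a few lines. Everything below is hypothetical (the `call_model` helper and the model labels stand in for whatever client OpenCode or a raw API actually exposes):

```python
# Hypothetical sketch of an architect -> developer -> reviewer agent pipeline.
def call_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this would invoke the model via your agent runtime.
    return f"[{model} output for: {prompt[:40]}...]"

def run_pipeline(task: str) -> str:
    plan = call_model("opus", f"As an architect, write an implementation plan for: {task}")
    patch = call_model("sonnet", f"As a developer, implement this plan:\n{plan}")
    review = call_model("codex", f"As a reviewer, critique this change:\n{patch}")
    return review

print(run_pipeline("add rate limiting to the API gateway"))
```

The design point is that only the planning step burns the expensive model's quota; implementation and review run on cheaper models.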
17. lawren+vW[view] [source] [discussion] 2026-02-05 00:00:41
>>exitb+7g
Build out distribution first and generate network effects.
18. mh2266+M51[view] [source] [discussion] 2026-02-05 01:11:20
>>waldop+Wf
> green visor crowds

??

19. ehsanu+M81[view] [source] [discussion] 2026-02-05 01:36:31
>>ihsw+9d
But those aren't things you can really separate for proprietary models. Keeping inference running also requires staff, not just the R&D side.
20. mediam+M91[view] [source] [discussion] 2026-02-05 01:44:36
>>waldop+Wf
The comment was with reference to inference, not total P&L.

Of course they are losing money in total. They are not, however, losing money per marginal token.

It’s trivial to see this by looking at the market-clearing price of advanced open-source models and comparing it to the inference prices charged by OpenAI.
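A back-of-the-envelope version of that comparison, with purely illustrative prices (not real quotes from any provider):

```python
# Illustrative numbers only: if third-party hosts serve a comparable open model
# at $2 per million output tokens, and a lab charges $10 for its own model,
# the implied margin on the marginal token is large even allowing for overhead.
open_model_price = 2.00   # $/1M tokens, hypothetical market-clearing price
lab_api_price = 10.00     # $/1M tokens, hypothetical proprietary API price

implied_markup = lab_api_price / open_model_price
implied_margin = 1 - open_model_price / lab_api_price
print(f"markup: {implied_markup:.0f}x, implied marginal margin: {implied_margin:.0%}")
```

The logic: third-party hosts of open models compete their prices down toward raw inference cost, so their price is a rough ceiling on what serving a marginal token costs.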
