zlacker

[return to "Claude is a space to think"]
1. waldop+dm1[view] [source] 2026-02-04 19:01:29
>>meetpa+(OP)
I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search; that's why folks are creating llm.txt files to become more discoverable to ChatGPT specifically.

You can see OpenAI's very different response: https://openai.com/index/our-approach-to-advertising-and-exp.... ChatGPT says it will mark ads as ads and keep answers "independent," but that isn't measurable. So we'll see.

Anthropic proactively saying they will not pursue ad-based revenue isn't just them being "one of the good guys"; it suggests they may be stabilizing on a business model of both seat- and usage-based subscriptions.

Either way, both companies are hemorrhaging money.

2. johnsi+Cv1[view] [source] 2026-02-04 19:48:23
>>waldop+dm1
Both companies are making bank on inference
3. lysace+sz1[view] [source] 2026-02-04 20:07:48
>>johnsi+Cv1
That is the big question. Got reliable data on that?

(My gut feeling is that Claude Code is currently underpriced with regard to inference costs. But that's just a gut feeling...)

4. tvink+rH1[view] [source] 2026-02-04 20:42:11
>>lysace+sz1
https://www.wheresyoured.at/costs/

Their AWS spend being higher than their revenue might hint at the same.

Nobody has reliable data; I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.

5. rcxdud+on3[view] [source] 2026-02-05 09:55:59
>>tvink+rH1
The closed frontier models seem to sell at a substantial premium over inference on open-source models, which suggests there is a decent margin on inference. Training is where they're losing money. The bull case is that every model eventually pays for itself, but the models keep getting bigger, or at least more expensive to train, so they're borrowing money now to make even more money later. That has to converge somehow: they can't just keep training larger models until the market can no longer afford to pay for the training. The bear case is that this is basically a treadmill to stay on the frontier where they can charge that premium (if the big labs ever stop, they'll quickly be caught up by cheaper or even open-source models and lose their edge), in which case it's probably never going to become sustainable.