zlacker

[return to "AI agents are starting to eat SaaS"]
1. benzib+xN 2025-12-15 07:45:57
>>jnord+(OP)
I'm CTO at a vertical SaaS company, paired with a product-focused CEO with deep domain expertise. The thesis doesn't match my experience.

For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.

I believe agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users about what their internal tech teams are shipping, and based on that there's little evidence of any increase in velocity on their side.

The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.

2. Bombth+HU 2025-12-15 08:57:58
>>benzib+xN
Yep. AI and agents help to centralise, not decentralise.
3. ben_w+Rb1 2025-12-15 11:22:20
>>Bombth+HU
For now.

I'm expecting this to be a bubble, and that bubble to burst; when it does, whatever the top model is at that point can likely still be distilled relatively cheaply, as every other model has been.

That, combined with my expectation that consumer RAM prices will return to their long-term trend and come back down, means that if the bubble pops in the year 20XX, whatever performance was bleeding edge at the pop will run on a high-end smartphone in the year 20XX+5.

4. j45+hh1 2025-12-15 12:00:10
>>ben_w+Rb1
The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.

The world might be holding AI to a standard where it needs to be a world beater to succeed, but that's simply not the case. AI is software, and it can solve problems other software can't.

5. ben_w+Ll1 2025-12-15 12:33:06
>>j45+hh1
> The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.

Dot-com was a bubble despite the technology being applicable to valuable problems. So were railways when the US had its railway bubble.

Bubbles don't just mean tulips.

As for what we've got right now: I'm saying the money will run out, and not all of the current players will make back what they've spent. It's even possible that *none* of them do, even if everyone ends up using it all the time, precisely because of the scenario you replied to:

It runs on a local device, so there's no way to extract profit to repay the cost of training.

6. somewh+HN2 2025-12-15 19:28:49
>>ben_w+Ll1
> repay the cost of training

Key point. Once people realize that no money can be made from LLMs, they will stop training new ones. Eventually the old ones will become hopelessly out-of-date, and LLMs will fade into history.
