For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.
I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users about what the internal tech teams are shipping, and based on that there's little evidence of any increase in velocity from them.
The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
I'm expecting this to be a bubble, and that bubble to burst; when it does, whatever's the top model at that point can likely still be distilled relatively cheaply like all other models have been.
That, combined with my expectation that consumer RAM prices will return to trend and keep falling, means that if the bubble pops in the year 20XX, whatever performance was bleeding edge at the pop will run on a high-end smartphone in the year 20XX+5.
The world might be holding AI to a standard where it needs to be a world beater to succeed, but that's simply not the case. AI is software, and it can solve problems other software can't.
Dot-com was a bubble despite being applicable to valuable problems. So were railways when the US had a bubble on those.
Bubbles don't just mean tulips.
With what we've got right now, I'm saying the money will run out, and not all the current players will make back what they're spending. It's even possible that *none* of the current players win, even if everyone uses it all the time, precisely due to the scenario you replied to:
Runs on a local device, no way to extract profit to repay the cost of training.
Dot com is not super comparable to AI.
Dot com had very few users on the internet compared to today.
Dot com did not have ubiquitous e-commerce. The small group of users didn’t spend online.
Search engines didn’t have the amount of information online that there is today.
Dot com did not have usable high speed mobile data, or broadband available for the masses.
Dot com did not have social media to spread the word about how things work as quickly.
LLMs were already broadly applicable to industry when GPT-4 came out. We didn't yet have the terms of reference for non-deterministic software.
"Can they keep charging money for it?", that's the question that matters here.
There were not as many consumers buying online during the dot com boom.
In fact, more is currently being spent on AI than on anything during the dot com boom.
Nor did companies run their businesses in the cloud, because there was no real broadband.
There’s no doubt there’s a hype train; there is also an adoption and disruption train happening alongside it.
I could go on, but I’m comfortable with seeing how well this comment ages.
My computer doesn't have enough RAM to run the state of the art in free LLMs, but such computers can be bought, and they're affordable for any business and for a lot of hobbyists.
Given this, the only way for model providers to stay ahead is to spend a lot on training ever-better models to beat the free ones being given away. And by "spend a lot" I mean they are making a loss.
This means that the similarity with the dot com bubble can be expressed with the phrase "losing money on every sale and making up for it in volume".
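The RAM constraint mentioned above comes down to simple arithmetic: weight memory scales linearly with parameter count and quantization width. A rough sketch (parameter counts here are illustrative sizes, not tied to any specific released model, and this ignores KV cache and runtime overhead):

```python
# Back-of-envelope: approximate RAM needed just to hold an LLM's weights
# at various quantization levels. Illustrative sizes only.

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB (excludes KV cache and overhead)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params in (8, 70, 400):      # small, mid, large open-weight sizes
    for bits in (16, 8, 4):      # fp16, int8, 4-bit quantization
        print(f"{params}B @ {bits}-bit ~ {weights_gb(params, bits):.0f} GB")
```

So a 70B model quantized to 4 bits needs roughly 35 GB just for weights, which is why it falls outside a typical consumer machine but well within what a business, or a determined hobbyist, can buy.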
Hardware efficiency is also still improving; just as I can now run that image model locally on my phone, an LLM equivalent to today's SOTA should run on a high-end smartphone in 2030.
Not much room to charge people for what runs on-device.
So, they are in a Red Queen's race, running as hard as they can just to stay where they are. And where they are today, is losing money.
The best performance per dollar and per watt of electricity for running LLMs locally is currently Apple gear.
I thought the same as you but I'm still able to run better and better models on a 3-4 year old Mac.
At the rate it's improving, even with the big models, people optimize their prompts to use tokens efficiently, and when they do... guess what can run locally.
The dot com bubble didn't have comparable online sales. There were barely any users online lol. Very few ecommerce websites.
Let alone ones with credit card processing.
Internet users by year: https://www.visualcapitalist.com/visualized-the-growth-of-gl...
The ecommerce stats by year will interest you.