Here are several real stories I dug into:
"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed
"I'm now doing the work of 10 product managers" --> actually meant they create draft PRD's. Did not mention firing 10 PMs
"I launched an entire product line this weekend" --> meant they created a website with a sign up, and it shows them a single javascript page, no customers
"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF
Taking that 70% solution and closing the remaining gap is harder than if a human had gotten you 70% of the way there, because the mistakes LLMs make are designed to look right while being wrong in ways a sane human would never be. That makes them easy to overlook, which forces careful line-by-line review in any domain where people are paying you. They also duplicate code and are super verbose, so they produce a ton of tech debt --> more tokens for every future agent to clog its context with.
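To make "wrong in ways a sane human would never be" concrete, here's a toy Python sketch of my own (invented for illustration, not actual model output): code that reads correctly on a skim but fails silently.

```python
def most_recent_per_customer(orders):
    """Keep only the most recent order per customer (toy example)."""
    latest = {}
    for order in orders:
        prev = latest.get(order["customer"])
        # Reads fine at a glance, but this compares MM/DD/YYYY strings
        # lexicographically, so "12/31/2024" ranks above "04/01/2025".
        if prev is None or order["date"] > prev["date"]:
            latest[order["customer"]] = order
    return list(latest.values())

orders = [
    {"customer": "ada", "date": "12/31/2024", "total": 40},
    {"customer": "ada", "date": "04/01/2025", "total": 90},
]
# Silently returns the 2024 order as "most recent": no crash, no warning,
# and any test where the dates share a year will pass.
print(most_recent_per_customer(orders))
```

A confused human tends to fail loudly or flag their uncertainty; this fails quietly on real data, which is exactly what drives the review cost up.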
I like using them; they have real value when used correctly. But I'm skeptical that this value will translate into massive real business value in the next few years, especially once you weigh it against the risk and tech debt that come along with it.
Since I don't code for money any more, my main daily LLM use is web search, especially queries where multiple semantic meanings would be difficult to specify with a traditional search or even compound logical operators (the kind of query you can describe but can't reduce to keywords). It's good for this, but the answers tend to be verbose in ways no reasonably competent human would be. There's a weird mismatch between the raw capability and the need to explicitly prompt "in one sentence" when the right length would be contextually obvious to a human.