Videogame speedrunning has this problem solved. Livestream your 10x-engineer LLM usage, with each git commit annotated with the prompt that produced it. Then everyone will see the result.
This doesn't seem like an area of debate. No complicated diagrams required. Just run the experiment and show the result.
People always say "you just need to learn to prompt better" without providing any context as to what "better" looks like. (And it presumes that my prompt isn't good enough, which maybe it is and maybe it isn't.)
The easy way out of that is "well, every scenario is different" - great, then show me a bunch of scenarios in a speedrun video across many problems, so I can learn by watching.
This thread has hundreds of comments where people are screaming that everyone needs to learn AI coding.
If it were such an edge, wouldn't they keep quiet about it instead?
Imagine there were a serum that gives you superhuman strength, but only under specific conditions you're supposed to discover for yourself. Half the room screams that it should be banned, because it's cheating/fake/doesn't work. The other half of the room swears by it, because they know how to utilize it properly.
You know it works, and you don't want to give up your secret sauce or make the other half of the room stronger.
I’m not alone in this - there are tons of other examples of people showing how they use LLMs online; you just need to search for them.
Let's all just muse some and imagine what the next cycle of this wheel will look like.
If I use LLMs to code, say, a Telegram bot that summarises the family calendars and the current weather to a channel - someone will come in saying "but LLMs are shit because they can't handle this very esoteric hardware assembler I use EVERY DAY!!1"
You will be writing CRUD operations and slapping together web apps at every level of experience. Even in (mobile) gaming you're repeating the same structures as every game before.
Not 100% of the time, but way more than 50%.
However, real life does have illicit drugs that many people hype up and claim they need.
Real life also has performance-enhancing drugs that cause a host of medical issues.
Even drugs taken out of medical necessity come with a list of side effects.
The article provides zero measurements, zero examples, zero numbers.
It's pure conjecture with no data or experiments to back it up. Unfortunately, conjecture rises to the top on Hacker News; a well-built study on LLM effectiveness would fall off the front page quickly.
You could say it is a lack of imagination or not connecting the dots, but I think there is a more human reason. A lot of people don't want the disruption and are happy with the status quo. I'm a software engineer so I know how problematic AI may be for my job, but I think anyone who looks at our current state and the recent improvements should be able to see the writing on the wall here.
I for one am more curious than afraid of AI, because I have always felt that writing code was the worst part of being a programmer. I am much happier building product or solving interesting problems than tracking down elusive bugs or refactoring old codebases.
And it seems pretty obvious why. The benefits were clear and palpable. Communication was going to become a heck of a lot easier, faster, and cheaper, and barriers were being lowered.
There's no such qualitative advantage offered by GenAI, compared to the way we did things before. Web vs. pre-Web, the benefits were clear.
GenAI? Some execs claim it's making things cheaper, but that claim ignores quality and long-term effects, and it's mostly spouted by people with no technological knowledge who will long have cashed out and moved on by the time their actions crash a company. Plus, still nobody seems to have figured out how to make money (real money, not VC money) off of this. Faster -- again, at what price to quality?
Then there are the predictions. We've been told for about three years now about the explosive rise in quality we'll see from GenAI output. I'm still waiting. The predictions of wider spread, higher speed, and lower cost for the web sounded plausible, and they materialised. Comparatively, I see a lot of very well-reasoned arguments for the hypothesis that GenAI has peaked (for now) and that this is pretty much as good as it's going to get, with source data sets exhausted and increasingly polluted by poor GenAI slop. So far, the trajectory makes me believe this scenario is a lot more likely.
None of this seems remotely comparable to the Internet or web cases to me. The web certainly didn't feel like hype to me in the '90s, and I don't remember anyone having that view.