Videogame speedrunning has this problem solved. Livestream your 10x-engineer LLM usage, with each git commit annotated with the prompt that produced it. Then everyone will see the result.
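It doesn't even need special tooling. A rough sketch of what I mean, in Python around plain git (commit_with_prompt and the example messages are made up for illustration):

    import subprocess

    def commit_with_prompt(message: str, prompt: str) -> None:
        # Record the LLM prompt as a second paragraph of the commit message,
        # so the log shows exactly what was asked for each change.
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(
            ["git", "commit", "-m", message, "-m", f"Prompt: {prompt}"],
            check=True,
        )

    commit_with_prompt(
        "Post daily summary to the channel",
        "Write a function that posts today's calendar events and current weather to a Telegram channel",
    )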
This doesn't seem like an area of debate. No complicated diagrams required. Just run the experiment and show the result.
People always say "you just need to learn to prompt better" without providing any sense of what "better" actually looks like. (And it presumes that my prompt isn't good enough, which maybe it is, maybe it isn't.)
The easy way out of that is "well every scenario is different" - great, show me a bunch of scenarios on a speed run video across many problems, so I can learn by watching.
If I use LLMs to code, say, a Telegram bot that summarises the family calendars and current weather to a channel - someone will come in saying "but LLMs are shit because they can't handle this very esoteric hardware assembler I use EVERY DAY!!1"
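To be concrete about what kind of code that is, here's roughly the shape of it (the token, chat id, and the two fetch_* helpers are placeholders; only the sendMessage call is the actual Telegram Bot API):

    import datetime
    import urllib.parse
    import urllib.request

    BOT_TOKEN = "123456:ABC..."   # placeholder bot token
    CHAT_ID = "-1001234567890"    # placeholder channel id

    def fetch_calendar_summary() -> str:
        # Placeholder: pull today's events from the family calendars
        # (e.g. their iCal export URLs) and flatten them to one line.
        return "Today: dentist 14:00, football practice 17:30"

    def fetch_weather_summary() -> str:
        # Placeholder: ask a weather API for the current conditions.
        return "Mostly cloudy, 14°C"

    def send_to_channel(text: str) -> None:
        # Telegram Bot API: sendMessage posts plain text to a chat or channel.
        url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
        data = urllib.parse.urlencode({"chat_id": CHAT_ID, "text": text}).encode()
        urllib.request.urlopen(url, data=data)

    if __name__ == "__main__":
        today = datetime.date.today().isoformat()
        send_to_channel(f"{today}\n{fetch_calendar_summary()}\n{fetch_weather_summary()}")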
You will be writing CRUD operations and slapping together web apps at every level of experience. Even in (mobile) gaming you're repeating the same structures as every game before.
Not 100% of the time, but way more than 50%.