[return to "My AI skeptic friends are all nuts"]
1. Verdex+5x 2025-06-03 00:58:24
>>tablet+(OP)
Hundreds of comments. Some say LLMs are the future. Others say they don't work today and they won't work tomorrow.

Videogame speedrunning has this problem solved. Livestream your 10x-engineer LLM usage, with each git commit annotated with its prompt. Then everyone will see the result.
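
To make that concrete, the annotation could be as simple as a commit trailer carrying the verbatim prompt - a rough sketch (the file name and trailer key here are invented, and --trailer needs git 2.32 or newer):

    # one commit per accepted LLM change, with the exact prompt recorded as a trailer
    git add bot/weather.py
    git commit -m "Add daily weather summary" \
        --trailer "LLM-Prompt: <verbatim prompt that produced this change>"

    # anyone watching can replay the run commit by commit
    git log --format='%h %s%n%(trailers:key=LLM-Prompt)'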

This doesn't seem like an area of debate. No complicated diagrams required. Just run the experiment and show the result.

2. dbalat+BH 2025-06-03 02:42:57
>>Verdex+5x
I'd honestly love to see this.

People always say "you just need to learn to prompt better" without ever showing what "better" looks like. (And it presumes that my prompt isn't good enough, which maybe it is, maybe it isn't.)

The easy way out of that is "well, every scenario is different" - great, show me a bunch of scenarios in a speedrun video across many problems, so I can learn by watching.

3. theshr+Tq1 2025-06-03 10:56:25
>>dbalat+BH
It's because you get to the No True Scotsman thing pretty fast.

If I use LLMs to code, say, a Telegram bot that summarises the family calendars and current weather to a channel, someone will come in saying "but LLMs are shit because they can't handle this very esoteric hardware assembler I use EVERY DAY!!1"

4. lomase+xA1 2025-06-03 12:19:54
>>theshr+Tq1
But... do you know anybody who will give me 50k a year to write Telegram bots for them?