zlacker

[return to "Perverse incentives of vibe coding"]
1. vansch+T6[view] [source] 2025-05-14 20:18:12
>>laurex+(OP)
> Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”) triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.

Though I'm not a "vibe coder" myself, I very much recognize this as part of the "appeal" of GenAI tools more generally. Trying to get image generators to do what I want has a very "gambling-like" quality to it.

◧◩
2. dingnu+L7[view] [source] 2025-05-14 20:24:50
>>vansch+T6
it's not like gambling, it is gambling. you exchange dollars for chips (tokens -- some casinos even call the chips tokens) and insert them into the machine in exchange for the chance of a prize.

if it doesn't work the first time you pull the lever, it might the second time, and it might not. Either way, the house wins.

It should be regulated as gambling, because it is. There's no metaphor; the only difference from a slot machine is that AI will never output cash directly, only the possibility of an output that could make money. So if you're lucky with your first gamble, it'll give you a second one to try.

Gambling all the way down.

◧◩◪
3. Nathan+xc[view] [source] 2025-05-14 20:52:35
>>dingnu+L7
This only makes sense if you have an all-or-nothing concept of the value of output from AI.

Every prompt and answer contributes value toward the final solution, even if that value is just narrowing the latent space of potential outputs: keeping track of failed paths in the context window lets the model avoid them in a future answer after you provide follow-up feedback.

The vast majority of slot machine pulls produce no value to the player. Every single prompt into an LLM tool produces some form of value. I have never once had an entirely wasted prompt unless you count the AI service literally crashing and returning a "Service Unavailable" type error.

One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.

◧◩◪◨
4. PaulDa+Ji[view] [source] 2025-05-14 21:36:14
>>Nathan+xc
This assumes you can easily and reliably identify the 10% you need to fix.
◧◩◪◨⬒
5. Nathan+hv[view] [source] 2025-05-14 23:28:24
>>PaulDa+Ji
Why wouldn't you be able to identify the 10% that you need to fix?

AI is not an excuse to turn off your brain. I find it ironic that many people complain that they have a hard time identifying the hallucinations in LLM-generated content, and then also complain that LLMs are making LLM users dumber.

The problem here is also the solution. LLMs make smarter people even smarter, because they get even better at thinking about the hard parts while not wasting time thinking about the easy parts.

But people who don't want to think at all about what they are doing... well, they do get dumber.

◧◩◪◨⬒⬓
6. PaulDa+Hv[view] [source] 2025-05-14 23:32:10
>>Nathan+hv
It is extremely well known in the world of programming that reading code is substantially harder than writing it. Just because you have the code in front of you does not mean that determining whether it is correct is a trivial (or even moderately easy) task.
[go to top]