zlacker

[return to "Perverse incentives of vibe coding"]
1. tippyt+L5 2025-05-14 20:10:15
>>laurex+(OP)
This article captures a lot of the problem. It’s often frustrating to watch the LLM try to work around really simple issues with complex workarounds that don’t work at all. Then I tell it the secret simple thing it’s missing and it gets it. It always makes me think: god help the vibe coders who can’t read code. I actually feel bad for them.
2. martin+kb 2025-05-14 20:45:58
>>tippyt+L5
> I tell it the secret simple thing it’s missing and it gets it.

Anthropomorphizing LLMs is not helpful. It doesn't "get" anything; you just gave it new tokens, ones which are more closely correlated with the correct answer. It also generates responses similar to what a human would say in the same situation.

Note: I first wrote "it also mimics what a human would say", then realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is ironically why LLMs are a thing at all), and using terms that better describe how it really works is important.
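To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers library and GPT-2; the prompts and the next_token_prob helper are my own invention, not anyone's actual workflow). Adding hint tokens to the context shifts the probability the model assigns to the correct next token. Nothing "understood" anything; the conditional distribution just moved.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def next_token_prob(prompt: str, target: str) -> float:
        # Probability mass the model puts on `target` as the very next token.
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]   # scores for the next position only
        probs = torch.softmax(logits, dim=-1)
        target_id = tok.encode(target)[0]       # first sub-token of `target`
        return probs[target_id].item()

    # Same question, with and without the "secret simple thing" in context.
    bare   = "Q: What does HTTP status 404 mean? A: Not"
    hinted = ("The server looked for the resource and could not find it. "
              "Q: What does HTTP status 404 mean? A: Not")
    print(next_token_prob(bare,   " Found"))   # lower
    print(next_token_prob(hinted, " Found"))   # higher: the hint tokens did the work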

3. ben_w+kd 2025-05-14 20:58:16
>>martin+kb
Given that LLMs are trained on human writing, and humans don't respond well to being dehumanised, I expect anthropomorphising them to work better than the opposite.

https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...
