zlacker

[return to "Perverse incentives of vibe coding"]
1. tippyt+L5 2025-05-14 20:10:15
>>laurex+(OP)
This article captures a lot of the problem. It's often frustrating to watch it try to fix really simple issues with complex workarounds that don't work at all. I tell it the secret simple thing it's missing and it gets it. It always makes me think: god help the vibe coders who can't read code. I actually feel bad for them.
2. martin+kb 2025-05-14 20:45:58
>>tippyt+L5
> I tell it the secret simple thing it’s missing and it gets it.

Anthropomorphizing LLMs is not helpful. It doesn't "get" anything; you just gave it new tokens, ones which are more closely correlated with the correct answer. It also generates responses similar to what a human would say in the same situation.
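
Concretely, a minimal sketch of what "new tokens" means, assuming the Hugging Face transformers library and GPT-2 (the puzzle prompt and "banana" answer are made up for illustration). Prepending a hint doesn't make the model understand anything; it just shifts the next-token distribution:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def prob_of(prompt, word):
        # probability of `word` (its first BPE token) as the single next
        # token -- no "understanding", just a read off the distribution
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        return probs[tok.encode(" " + word)[0]].item()

    base = "The answer to the puzzle is"
    hinted = "Remember, the answer is banana. " + base
    print(prob_of(base, "banana"))    # small
    print(prob_of(hinted, "banana"))  # larger -- same weights, new tokens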

Note: I first wrote "it also mimics what a human would say", then I realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is ironically why LLMs are a thing at all), and using terms which better describe how it really works is important.

3. Suppaf+Jj 2025-05-14 21:45:54
>>martin+kb
>Anthropomorphizing LLMs is not helpful

It's a feature of language to describe things in those terms even if they aren't accurate.

>using terms which better describe how it really works is important

Sometimes, especially if you're doing something where that matters, but abstracting those details away is also useful when trying to communicate clearly in other contexts.
