zlacker

[parent] [thread] 14 comments
1. tippyt+(OP)[view] [source] 2025-05-14 20:10:15
This article captures a lot of the problem. It’s often frustrating how the model tries to work around really simple issues with complex workarounds that don’t work at all. I tell it the secret simple thing it’s missing and it gets it. It always makes me think, god help the vibe coders who can’t read code. I actually feel bad for them.
replies(4): >>r053bu+H >>iotku+61 >>grufko+35 >>martin+z5
2. r053bu+H[view] [source] 2025-05-14 20:16:13
>>tippyt+(OP)
I fear that’s going to end up being a significant portion of engineers in the future.
replies(1): >>babyen+U
3. babyen+U[view] [source] [discussion] 2025-05-14 20:17:15
>>r053bu+H
I think we are in the Flash era again lol.

You remember those days, right? All those Flash sites.

4. iotku+61[view] [source] 2025-05-14 20:18:07
>>tippyt+(OP)
There's a pretty big gap between "make it work" and "make it good".

I've found with LLMs I can usually convince them to get me at least something that mostly works, but each step compounds the bloat: excessive amounts of extra code, extraneous comments ("This loop goes through each..."), and redundant functions.

In the short term it feels good to achieve something 'quickly', but there's a lot of debt associated with running a random number generator on your codebase.
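To make that concrete, here's an invented miniature of the pattern (not from a real session, the names are mine, but it's representative):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Generated style: narrating comments plus a from-scratch helper.
    bool containsString(const std::vector<std::string>& items,
                        const std::string& target) {
        // Loop through each item in the vector.
        for (const auto& item : items) {
            // Check if the current item matches the target.
            if (item == target) {
                return true;  // Return true if found.
            }
        }
        return false;  // Return false if not found.
    }

    // The trimmed version: the standard library already does this.
    bool contains(const std::vector<std::string>& items,
                  const std::string& target) {
        return std::find(items.begin(), items.end(), target) != items.end();
    }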

replies(1): >>didget+b8
5. grufko+35[view] [source] 2025-05-14 20:43:18
>>tippyt+(OP)
Working as an instructor for a project course for first-year university students, I have run into this a couple of times. The code required for the project is pretty simple, but there are a couple of subtle details that can go wrong. Had one group today with bit shifts and other "advanced" operators everywhere, but the code was not working as expected. I asked them to just `Serial.println()` the intermediate values so they could check what was going on, and they were stumped. LLMs are already great tools, but if you don't know basic troubleshooting/debugging you're in for a bad time when you hit the brick wall.
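(If you haven't seen it: `Serial.println()` just writes values to the serial monitor. A made-up fragment of what I asked them to do, so each operation's effect becomes visible:)

    // Invented example: print intermediate values so you can see
    // what each "advanced" operation actually produces.
    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int raw = analogRead(A0);   // 0..1023
      int scaled = raw >> 2;      // intended scaling to 0..255
      Serial.print("raw=");
      Serial.println(raw);
      Serial.print("scaled=");
      Serial.println(scaled);
      delay(500);
    }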

On the other hand, it shows how much coding is just repetition. You don't need to be a good coder to perform serviceable work, but you won't create anything new and amazing either if you don't learn to think and reason - though for some purposes that might be fine. (Worrying for the general population's abilities, however.)

You could ask whether these students would have gotten anything done without generated code. Probably - it's just a momentarily easier alternative to actual understanding. They did, however, realise the problem and decided by themselves to write their own code in a simpler, more repetitive and "stupid" style, but one they could reason about. So hopefully a good lesson and all's well in the end!

replies(1): >>tippyt+tm
6. martin+z5[view] [source] 2025-05-14 20:45:58
>>tippyt+(OP)
> I tell it the secret simple thing it’s missing and it gets it.

Anthropomorphizing LLMs is not helpful. It doesn't get anything; you just gave it new tokens, ones that are more closely correlated with the correct answer, and it generates responses similar to what a human would say in the same situation.

Note I first wrote "it also mimics what a human would say", then I realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is, ironically, why LLMs are a thing at all), and using terms that better describe how it really works is important.

replies(3): >>ben_w+z7 >>Suppaf+Yd >>tippyt+zi
7. ben_w+z7[view] [source] [discussion] 2025-05-14 20:58:16
>>martin+z5
Given that LLMs are trained on humans, who don't respond well to being dehumanised, I expect anthropomorphising them to be better than the opposite of that.

https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...

replies(2): >>Schema+EI >>martin+A62
8. didget+b8[view] [source] [discussion] 2025-05-14 21:01:43
>>iotku+61
In my opinion, the difference between good code and code that simply works (sometimes barely) is that good code will still work (or error out gracefully) when the state and the inputs are not as expected.

Good programs are written by people who anticipate what might go wrong. If the documentation says 'don't do X', they know a tester is likely to try X, because a user eventually will.
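A toy sketch of the distinction (the names and the port-number scenario are invented): the naive version "works" on clean input, the good one anticipates the bad case.

    #include <cstdlib>
    #include <optional>
    #include <stdexcept>
    #include <string>

    // "Simply works": silently returns 0 for garbage input.
    int parsePortNaive(const std::string& s) {
        return std::atoi(s.c_str());
    }

    // "Good": rejects unexpected input instead of limping on.
    std::optional<int> parsePort(const std::string& s) {
        try {
            std::size_t pos = 0;
            int port = std::stoi(s, &pos);
            if (pos != s.size() || port < 1 || port > 65535)
                return std::nullopt;  // trailing junk or out of range
            return port;
        } catch (const std::exception&) {
            return std::nullopt;      // not a number at all
        }
    }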

replies(1): >>altern+tc2
9. Suppaf+Yd[view] [source] [discussion] 2025-05-14 21:45:54
>>martin+z5
>Anthropomorphizing LLMs is not helpful

It's a feature of language to describe things in those terms even if they aren't accurate.

>using terms which better describe how it really works is important

Sometimes, especially if you're doing something where that matters, but abstracting those details away is also useful when trying to communicate clearly in other contexts.

10. tippyt+zi[view] [source] [discussion] 2025-05-14 22:26:33
>>martin+z5
Patronizing much?
11. tippyt+tm[view] [source] [discussion] 2025-05-14 22:59:14
>>grufko+35
Sounds like you found a good problem for the students. Having the experience of failing to get the right answer out of the tool and then succeeding on your own wits creates an opportunity to learn that these tools reward disciplined usage.
12. Schema+EI[view] [source] [discussion] 2025-05-15 03:08:53
>>ben_w+z7
Aside from just getting more useful responses back, I think it's just bad for your brain to treat something that acts like a person with disrespect. It becomes "it's just a chatbot", then "it's just a dog", then "it's just a low-level customer support worker".
replies(1): >>ben_w+F21
13. ben_w+F21[view] [source] [discussion] 2025-05-15 07:30:10
>>Schema+EI
While I also agree with you on that, there are also prompts that make them not act like a person at all, and prompts can be write-once-use-many, which lessens the impact of that.

This is why I tend to lead with the "quality of response" argument rather than the "user's own mind" argument.

14. martin+A62[view] [source] [discussion] 2025-05-15 16:40:21
>>ben_w+z7
I am not talking about getting it to generate useful output - treating it extra politely or threatening it with fines sometimes seems to give better results, so why not. I am talking about the phrase "gets it". It does not get anything.
15. altern+tc2[view] [source] [discussion] 2025-05-15 17:14:18
>>didget+b8
I feel like you're talking about programs here rather than code. A program that behaves well is not necessarily built with good code.

I can see an LLM producing a good program with terrible code that's hard to grok and adjust.
