zlacker

[return to "My AI skeptic friends are all nuts"]
1. jszymb+JM[view] [source] 2025-06-03 03:48:33
>>tablet+(OP)
The argument that I've heard against LLMs for code is that they create bugs that, by design, are very difficult to spot.

The LLM has one job: to produce code that looks plausible. That's it. No logic has gone into writing that bit of code. So the bugs often won't be like those a programmer makes. Instead, they can introduce a whole new class of bug that's much harder to debug.
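As a made-up but concrete illustration (invented example, not real LLM output): an interval-overlap check that reads as obviously correct, yet misses a case a human who reasoned through the geometry would likely have caught.

```python
def intervals_overlap(a_start, a_end, b_start, b_end):
    # Reads plausibly: "they overlap if an endpoint of b falls inside a".
    # But this misses the case where b entirely contains a.
    return a_start <= b_start <= a_end or a_start <= b_end <= a_end

# Looks fine on the obvious cases:
print(intervals_overlap(1, 4, 3, 6))   # True
# But (3, 5) is entirely inside (0, 10), and this says they don't overlap:
print(intervals_overlap(3, 5, 0, 10))  # False
```

The correct condition is `a_start <= b_end and b_start <= a_end`; the buggy version passes casual review precisely because it looks like a complete case analysis.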

◧◩
2. mindwo+TN[view] [source] 2025-06-03 04:05:29
>>jszymb+JM
This is a misunderstanding. Modern LLMs are trained with RL to actually write good programs. They aren't just spewing tokens out.
◧◩◪
3. godels+0S[view] [source] 2025-06-03 04:50:30
>>mindwo+TN
No, YOU misunderstand. This isn't a thing RL can fix:

  https://news.ycombinator.com/item?id=44163194

  https://news.ycombinator.com/item?id=44068943
It doesn't optimize "good programs". It optimizes "humans' interpretation of good programs." More accurately, "it optimizes what low-paid, overworked humans believe are good programs." Are you hiring your best and brightest to code review the LLMs?

Even if you do, it still optimizes tricking them. It will also optimize writing good programs, but you act like that's a well-defined and measurable thing.

◧◩◪◨
4. mindwo+tT[view] [source] 2025-06-03 05:11:04
>>godels+0S
This is just semantics. What's the difference between a "human interpretation of a good program" and a "good program" when we (humans) are the ones using it? If the model can write code that passes tests and meets my requirements, then it's a good programmer. I would expect nothing more or less out of a human programmer.
◧◩◪◨⬒
5. godels+011[view] [source] 2025-06-03 06:25:20
>>mindwo+tT
Is your grandma qualified to determine what is good code?

  > If the model can write code that passes tests
You think tests make code good? Oh my sweet summer child. TDD has been tried many times and each time it failed worse than the last.
◧◩◪◨⬒⬓
6. pydry+Aa1[view] [source] 2025-06-03 08:00:51
>>godels+011
Good to know something I've been doing consistently for 10 years could never work.
◧◩◪◨⬒⬓⬔
7. godels+bl1[view] [source] 2025-06-03 09:58:49
>>pydry+Aa1
It's okay, lots of people's code is always buggy. I know people who suck at coding and have been doing it for 50 years. It's not uncommon.

I'm not saying don't make tests. But I am saying you're not omniscient. Until you are, your tests are going to be incomplete. They are helpful guides, but they should not drive development. If you really think you can test for every bug, then I suggest you apply to be Secretary of Health.
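A toy illustration of the point (function and test cases invented for the example): a test suite that passes completely while the code is still wrong, because nobody thought to test the one input class that matters.

```python
def days_in_month(month: int, year: int) -> int:
    if month == 2:
        return 28  # bug: ignores leap years entirely
    if month in (4, 6, 9, 11):
        return 30
    return 31

# The whole suite passes, so TDD says we're done:
assert days_in_month(1, 2023) == 31
assert days_in_month(2, 2023) == 28
assert days_in_month(4, 2023) == 30
assert days_in_month(12, 2023) == 31
# Nobody wrote the test for days_in_month(2, 2024), which should be 29.
```

Green tests only tell you the cases you thought of behave as you expected; the bug lives in the case you didn't think of.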

https://hackernoon.com/test-driven-development-is-fundamenta...

https://geometrian.com/projects/blog/test_driven_development...

◧◩◪◨⬒⬓⬔⧯
8. Dylan1+HT2[view] [source] 2025-06-03 20:14:06
>>godels+bl1
> It's okay, lots of people's code is always buggy. I know people who suck at coding and have been doing it for 50 years. It's not uncommon.

Are you saying you're better than that? If you think you're next to perfect, then I understand why you're so against the idea that an imperfect LLM could still generate pretty good code. But you're wrong if you think you're next to perfect.

If you're not being that haughty, then I don't understand your complaint against LLMs. You seem to be arguing they're not useful because they make mistakes. But humans make mistakes while still being useful. If the error rate is below some threshold, isn't the output still good?

[go to top]