zlacker

[parent] [thread] 4 comments
1. belter+(OP)[view] [source] 2025-06-02 15:22:50
The million-dollar question is not whether you can review at the speed the model is coding. It is whether you can trust review alone to catch everything.

If a robot assembles cars at lightning speed... but occasionally misaligns a bolt, and your only safeguard is a visual inspection afterward, some defects will roll off the assembly line. Human coders prevent many bugs by thinking during assembly.

replies(3): >>chrisw+V3 >>pton_x+td >>Shorn+Yn1
2. chrisw+V3[view] [source] 2025-06-02 15:45:27
>>belter+(OP)
THIS.

IMHO more rigorous test automation (including fuzzing and related techniques) is needed. Actually that holds whether AI is involved or not, but probably more so if it is.
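
To make "fuzzing and related techniques" concrete, here is a minimal property-based sketch using only the Python standard library. The `encode`/`decode` pair is a hypothetical stand-in for whatever code is under test; the idea is just to throw many random inputs at an invariant (here, round-tripping) rather than hand-picking a few cases:

```python
import random
import string

def encode(s: str) -> str:
    """Toy run-length encoder; stands in for the code under test."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(f"{j - i}{s[i]}")
        i = j
    return "".join(out)

def decode(s: str) -> str:
    """Inverse of encode (assumes letter-only input to encode)."""
    out, i = [], 0
    while i < len(s):
        j = i
        while s[j].isdigit():
            j += 1
        out.append(s[j] * int(s[i:j]))
        i = j + 1
    return "".join(out)

def fuzz(trials: int = 1000) -> None:
    """Random-input check of the round-trip property decode(encode(s)) == s."""
    rng = random.Random(0)  # seeded for reproducible failures
    for _ in range(trials):
        n = rng.randint(0, 20)
        s = "".join(rng.choice(string.ascii_lowercase) for _ in range(n))
        assert decode(encode(s)) == s, f"round-trip failed for {s!r}"

fuzz()
```

Dedicated tools (coverage-guided fuzzers, property-based testing libraries) do this far more thoroughly, but even a loop like this catches classes of bugs that eyeball review routinely misses.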

3. pton_x+td[view] [source] 2025-06-02 16:41:35
>>belter+(OP)
> Human coders prevent many bugs by thinking during assembly.

I'm far from an AI true believer but come on -- human coders write bugs, tons and tons of bugs. According to Peopleware, software has "an average defect density of one to three defects per hundred lines of code"!

replies(1): >>belter+Gy2
4. Shorn+Yn1[view] [source] 2025-06-03 00:59:40
>>belter+(OP)
And yet, doors still fall off airplanes without any AI in sight.
5. belter+Gy2[view] [source] [discussion] 2025-06-03 13:08:14
>>pton_x+td
My point is that the bugs LLMs generate are different in kind from the ones human coders write.