1. em-bee+(OP) 2026-01-16 08:44:34
> In these cases AI writing the code is pure gain.

no, it isn't. unless the generated code is just a few lines long, and all you are doing is effectively autocompletion, you have to go through the generated code with a fine-toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself.

replies(2): >>corndo+dK >>pixl97+Sq1
2. corndo+dK 2026-01-16 15:22:26
>>em-bee+(OP)
Broadly I agree with you. I think of it in terms of responsibility. Ultimately the commit has my name on it, so I am the responsible party. From that perspective, I do need to "understand" what I am checking in to be reasonably sure it meets my professional standards of quality.

The reason I put scare quotes on "understand" is that we need to acknowledge that there are degrees of understanding, and that different degrees are required in different scenarios. For example, when you call syscall(), how well do you understand what is happening? You understand what's in the manpage; you know that it triggers a switch to kernel space, performs some task, returns some result. Most of us have not read the assembly code; we have a general concept of what is going on, but the real understanding pretty much ends at the function call. Yet we check that in, because that level of understanding corresponds to the general engineering standard.
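
To make that concrete, here is roughly where that understanding stops (a minimal sketch, assuming Linux; SYS_getpid is just a stand-in for any syscall):

    #define _GNU_SOURCE        /* glibc: expose syscall() */
    #include <stdio.h>
    #include <sys/syscall.h>   /* SYS_getpid and other SYS_* numbers */
    #include <unistd.h>        /* syscall() */

    int main(void) {
        /* The manpage contract: trap into the kernel, run the handler,
           get a result back. Everything past this line (the mode switch,
           the dispatch table, the handler itself) is opaque to most of
           us, and we ship code like this anyway. */
        long pid = syscall(SYS_getpid);
        printf("pid: %ld\n", pid);
        return 0;
    }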

In some cases, with AI, you can be reasonably sure the result is correct without deeply understanding it and still meet the bar. The Bazel rule example is a good one. I prompt, "take this OpenAPI spec and add build rules to generate bindings from it. Follow existing repo conventions." From my years of engineering experience, I already know, roughly, what the result should look like. I skim the generated diff to ensure it matches that expectation, and skim the model output to see what it referenced as examples. At that point, what the model produced is probably similar to what I would have produced by spending 30 minutes grepping around, reading build rules, et cetera. For this particular task, the model has saved me that time. I don't need to understand it perfectly. Either the code builds or it doesn't.

For other things, my standard is much higher. For example, models don't save me much time on concurrent code because, in order to meet the quality bar, the level of understanding required is much higher. I do need to sit there, read it, re-read it, chew on the concurrency model, et cetera. Like I said, it's situational.
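
As a hypothetical illustration (not code from any real review), the classic lost-update race is the kind of thing I mean. It compiles, usually passes a casual test, and only a careful read shows the bug:

    #include <pthread.h>
    #include <stdio.h>

    /* counter++ is a read-modify-write, not one atomic step, so two
       threads can interleave and silently lose updates. */
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;          /* racy: no lock, not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000; built with `cc -pthread` and run under
           load, it will usually print less. */
        printf("counter = %ld\n", counter);
        return 0;
    }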

There are many, many other aspects to quantifying the effects of AI on productivity; code quality is just one. It's very holistic and depends on you, how you work, what domain you work in, the technologies you work with, the team you work on, so many factors.

3. pixl97+Sq1 2026-01-16 18:24:24
>>em-bee+(OP)
> with a fine-toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself

so the exact same thing you should be doing in code reviews anyway?

replies(1): >>em-bee+vb3
4. em-bee+vb3 2026-01-17 09:35:25
>>pixl97+Sq1
kind of, except that when i review a code submission to my project i can eventually learn to trust the submitter, once i realize they write good code. a code review is how that trust develops. AI code should never earn that trust, and any code review should always be treated like it is from a first-time submitter that i have never met before. the risk is that this does not happen, and that we believe AI code submissions will develop like those of a real human. they won't. we'll develop a false sense of security, a false sense of trust. instead we should always be on guard.

and as i wrote in my other comment, reviewing the code of a junior developer includes the satisfaction of helping that developer grow through my feedback. AI will never grow. there is no satisfaction in reviewing its code. instead it feels like a sisyphean task, because the AI will make the same mistakes over and over again, and make mistakes a human would be very unlikely to make. unlike human code, with AI code you have to expect the unexpected.
