zlacker

[return to "My AI skeptic friends are all nuts"]
1. pie_fl+33[view] [source] 2025-06-02 21:28:54
>>tablet+(OP)
I have one very specific retort to the 'you are still responsible' point. High school kids write lots of notes. The notes frequently never get read, but performance is worse without them: the act of writing them embeds them in your head.

I allegedly know how to use a debugger, but I haven't in years. With exceptions I could count on my fingers, for nearly every bug report I have gotten I know exactly, down to the line of code, where it comes from, because I wrote it or something next to it (or can immediately ask someone who probably did).

You don't get that with AI. The codebase is always new. Everything must be investigated carefully. When stuff slips through code review, even if it is a mistake you might have made yourself, you would at least remember having made it. When humans do not do the work, humans do not accrue the experience.

(This may still be a good tradeoff; I haven't run any numbers. But it's not as obvious a tradeoff as TFA implies.)
2. derefr+k4[view] [source] 2025-06-02 21:35:44
>>pie_fl+33
So do the thing that a student copying their notes from the board does: look at the PR on one monitor, and write your own equivalent PR by typing the changes line-for-line into your IDE on the other. Pretend copy/paste doesn’t exist. Pretend it’s code you saw in a YouTube video of a PowerPoint presentation, or a BASIC listing from one of those 1980s computing magazines.

(And, if you like, do as TFA says and rephrase the code into your own house style as you’re transcribing it. It’ll be better for it, and you’ll be mentally parsing the code you’re copying at a deeper level.)

3. roarch+b6[view] [source] 2025-06-02 21:46:16
>>derefr+k4
You still didn't have to build the mental model, understand the subtle tradeoffs, and make the decisions that arrived at that design.

I'm amazed that people don't see this. Absolutely nobody would claim that copying a novel is the same thing as writing a novel.

4. derefr+Uh[view] [source] 2025-06-02 22:57:10
>>roarch+b6
I am suspicious of this argument, because it would imply that you can't understand the design intent, tradeoffs, etc. of code written by your own coworkers.

Which: of course you can. Not least because both your coworkers and these coding agents produce changes with explanatory comments on any lines for which the justification or logic is non-obvious; but also because — AI PR or otherwise — the PR consists of commits, and those commits have commit messages further explaining them. And — AI submitter or otherwise — you can interrogate the PR’s submitter in the PR’s associated discussion thread, asking the submitter to justify the decisions made, explain parts you’re suspicious of, etc.

When you think about it, presuming your average FOSS project with an open contribution model, a PR from an AI agent is going to be at least strictly more “knowable” than a “drive-by” PR by an external one-time contributor who doesn’t respond to discussion-thread messages. (And sure, that’s a low bar — but it’s one that the average accepted and merged contribution in many smaller projects doesn’t even clear!)
