zlacker

[parent] [thread] 37 comments
1. pie_fl+(OP)[view] [source] 2025-06-02 21:28:54
I have one very specific retort to the 'you are still responsible' point. High school kids write lots of notes. The notes frequently never get read, but performance is worse without them: the act of writing them embeds them in your head. I allegedly know how to use a debugger, but I haven't used one in years: with exceptions I could count on my fingers, for every bug report I have gotten I know exactly, down to the line of code, where it comes from, because I wrote it or something next to it (or can immediately ask someone who probably did). You don't get that with AI. The codebase is always new. Everything must be investigated carefully. When stuff slips through code review, even if it is a mistake you might have made, you would remember that you made it. When humans do not do the work, humans do not accrue the experience. (This may still be a good tradeoff; I haven't run any numbers. But it's not as obvious a tradeoff as TFA implies.)
replies(13): >>sublin+c1 >>derefr+h1 >>tablet+o1 >>skissa+z1 >>stock_+C1 >>ezst+u2 >>kubav0+H2 >>JoshTr+e3 >>mgracz+J5 >>Mofpof+yh >>Schema+yl >>0x1ceb+tJ >>xnx+IS1
2. sublin+c1[view] [source] 2025-06-02 21:35:12
>>pie_fl+(OP)
I have to completely agree with this and nobody says this enough.

The tradeoff of unfamiliarity with the codebase has been a well-understood problem for decades. Maintaining a project accounts for 99% of the time spent on a successful one.

In my opinion though, having AI write the initial code just puts most people in a worse situation, with almost no long-term upside.

replies(1): >>Curren+s5
3. derefr+h1[view] [source] 2025-06-02 21:35:44
>>pie_fl+(OP)
So do the thing that a student copying their notes from the board does: look at the PR on one monitor, and write your own equivalent PR by typing the changes line-for-line into your IDE on the other. Pretend copy/paste doesn’t exist. Pretend it’s code you saw in a YouTube video of a PowerPoint presentation, or a BASIC listing from one of those 1980s computing magazines.

(And, if you like, do as TFA says and rephrase the code into your own house style as you’re transcribing it. It’ll be better for it, and you’ll be mentally parsing the code you’re copying at a deeper level.)

replies(3): >>galley+b2 >>chaps+i2 >>roarch+83
4. tablet+o1[view] [source] 2025-06-02 21:36:08
>>pie_fl+(OP)
This level of knowledge is nearly impossible to maintain as the codebase grows though, beyond one or two people at a typical company. And tools need to exist for the new hire as well as the long-standing employee.
replies(1): >>ezst+r3
5. skissa+z1[view] [source] 2025-06-02 21:37:36
>>pie_fl+(OP)
> When stuff slips through code review, even if it is a mistake you might have made, you would remember that you made it.

I don’t know. Ever had the experience of looking at 5+ year old code and thinking “what idiot wrote this crap” and then checking “git blame” and realising “oh, I’m the idiot… why the hell did I do this? struggling to remember” - given enough time, humans start to forget why they did things a certain way… and sometimes the answer is simply “I didn’t know any better at the time, I do now”

> You don't get that with AI. The codebase is always new.

It depends on how you use AI… e.g. I will often ask an AI to write me code to do X because it gets me over the “hump” of getting started… but now this code is in front of me on the screen, I think “I don’t like how this code is written, I’m going to refactor it…” and by the time I’m done it is more my code than the AI’s

replies(4): >>Legion+t6 >>mrguyo+Q8 >>Mofpof+Oh >>BirAda+nr2
6. stock_+C1[view] [source] 2025-06-02 21:37:56
>>pie_fl+(OP)
I read a study[1] (caveat: I don't think it has been peer reviewed yet) that seems to imply that you are correct.

> When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.
[1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
replies(1): >>wayvey+Z5
◧◩
7. galley+b2[view] [source] [discussion] 2025-06-02 21:40:51
>>derefr+h1
Isn't this just repeating labor? Why not write it all yourself in the first place if you're going to need to copy it over later anyway?
◧◩
8. chaps+i2[view] [source] [discussion] 2025-06-02 21:41:26
>>derefr+h1
This is how a (video game) programming class at my high school was taught. You had to transcribe the code from a DigiPen book.... then fix any broken code. Not entirely sure whether their many typos were intentional, but they very much helped us learn, because we had no choice but to correct their logic failures and typos to move on to the next section. I'm still surprised, 20 years later, how well that system worked to teach us and push us to broaden our understanding.
replies(1): >>SirHum+67
9. ezst+u2[view] [source] 2025-06-02 21:42:22
>>pie_fl+(OP)
> The codebase is always new. Everything must be investigated carefully.

That's dreadful. Not only is familiarity with the code not valued, it is impossible to build for your own sake/sanity.

10. kubav0+H2[view] [source] 2025-06-02 21:43:33
>>pie_fl+(OP)
+1

Writing code is easier than long-term maintenance. Any programmer can write more code than they will be able to maintain. Unless there are good AI tools helping with maintenance, there is no point in using generative tools for production code. In my experience, AI tools are great for prototyping, or for optimizing procrastination.

◧◩
11. roarch+83[view] [source] [discussion] 2025-06-02 21:46:16
>>derefr+h1
You still didn't have to build the mental model, understand the subtle tradeoffs and make the decisions that arrived at that design.

I'm amazed that people don't see this. Absolutely nobody would claim that copying a novel is the same thing as writing a novel.

replies(3): >>the_sn+M6 >>derefr+Re >>Anamon+5bd
12. JoshTr+e3[view] [source] 2025-06-02 21:46:41
>>pie_fl+(OP)
Exactly. See also https://hazelweakly.me/blog/stop-building-ai-tools-backwards... for a detailed look at this aspect of AI coding.
◧◩
13. ezst+r3[view] [source] [discussion] 2025-06-02 21:47:36
>>tablet+o1
Welcome to project architecting, where the job isn't about putting more lines of code into this world, but more systems in place to track them. A well layered and structured codebase can grow for a very long time before it becomes too hard to maintain. And generally, the business complexity bites before the algorithmic one, and there's no quick fix for that.
replies(1): >>throw_+f5
◧◩◪
14. throw_+f5[view] [source] [discussion] 2025-06-02 21:58:53
>>ezst+r3
It's cultural too. I've heard people say things along the lines of "we don't ship the org chart here" in a positive light, then in a later meeting complain that nobody understands what's going on in their ownerless monorepo.

Shipping the org chart isn't the only way to solve this problem, but it is one that can work. And if you don't acknowledge the relationship between those problems, even AGI probably isn't going to help (partially sarcastic).

◧◩
15. Curren+s5[view] [source] [discussion] 2025-06-02 22:00:06
>>sublin+c1
I agree. I'm bullish on AI for coding generally, but I am curious how they'd get around this problem. Even if it can code at a superhuman level, you just get rarer, superhuman bugs. Or is another AI going to debug them? Unless this loop is basically fail-proof, does the human's job just become debugging the hardest things to debug (or at least whatever falls in the AI's blind spots)?
replies(3): >>ethagn+5x >>runeva+oA >>the_sl+jB
16. mgracz+J5[view] [source] 2025-06-02 22:01:58
>>pie_fl+(OP)
The important thing you are missing is that the learning landscape has now changed.

You are now responsible for learning how to use LLMs well. If an untrained vibe coder is more productive for me, while knowing nothing about how the code actually works, I will hire the vibe coder instead of you.

Learning is important, but it's most important that you learn how to use the best tools available so you can be productive. LLMs are not going away and they will only get better, so today that means you are responsible for learning how to use them, and that is already more important for many roles than learning how to code yourself.

replies(1): >>Mofpof+6j
◧◩
17. wayvey+Z5[view] [source] [discussion] 2025-06-02 22:04:03
>>stock_+C1
This is a good point, I think, and these verification steps take time and should definitely be done. I'm not sure people take that time into account when talking about having AI code for them.
◧◩
18. Legion+t6[view] [source] [discussion] 2025-06-02 22:06:45
>>skissa+z1
Oddly, I don't tend to get that experience very much. More often, it's "That's not how I'd naively write that code, there must be some catch to it. If only I had the foresight to write a comment about it..." Alas, I'm still not very good at writing enough comments.
replies(1): >>Occams+lE
◧◩◪
19. the_sn+M6[view] [source] [discussion] 2025-06-02 22:08:30
>>roarch+83
I feel like the dismissal of mental models is a direct consequence of the tech industry's maniacal focus on scale and efficiency as the be-all and end-all values to optimize.

Nevermind other important values like resilience, adaptability, reliability, and scrutability. An AI writes a function foo() that does a thing correctly; who has the know-how that can figure out if foo() kills batteries, or under what conditions it could contribute to an ARP storm or disk thrashing, or what implicit hardware requirements it has?

◧◩◪
20. SirHum+67[view] [source] [discussion] 2025-06-02 22:10:36
>>chaps+i2
Yes, I was just about to say this. Typing out code is a way to learn the syntax of a new language, and it's often recommended not to copy-paste when you're first learning.
◧◩
21. mrguyo+Q8[view] [source] [discussion] 2025-06-02 22:22:06
>>skissa+z1
Understanding code takes more effort than writing it, somehow. That's always been a huge problem in the industry, because code you wrote five years ago was written by someone else, but AI coding takes that from "all code in your org except the code you wrote in the past couple years" to "all code was written by someone else".

How well does your team work when you can't even answer a simple question about your system, because nobody wrote, tested, or played with the code in question?

How do you answer "Is it possible for our system to support split payments?" when not a single member of your team has even worked on the billing code?

No, code reviews do not familiarize an average dev to the level of understanding the code in question.

replies(1): >>Mofpof+qi
◧◩◪
22. derefr+Re[view] [source] [discussion] 2025-06-02 22:57:10
>>roarch+83
I am suspicious of this argument, because it would imply that you can’t understand the design intent / tradeoffs / etc of code written by your own coworkers.

Which: of course you can. Not least because both your coworkers and these coding agents produce changes with explanatory comments on any lines for which the justification or logic is non-obvious; but also because — AI PR or otherwise — the PR consists of commits, and those commits have commit messages further explaining them. And — AI submitter or otherwise — you can interrogate the PR’s submitter in the PR’s associated discussion thread, asking the submitter to justify the decisions made, explain parts you’re suspicious of, etc.

When you think about it, presuming your average FOSS project with an open contribution model, a PR from an AI agent is going to be at least strictly more “knowable” than a “drive-by” PR by an external one-time contributor who doesn’t respond to discussion-thread messages. (And sure, that’s a low bar — but it’s one that the average accepted and merged contribution in many smaller projects doesn’t even clear!)

replies(1): >>roarch+wh
◧◩◪◨
23. roarch+wh[view] [source] [discussion] 2025-06-02 23:15:08
>>derefr+Re
You understand your coworkers' PRs as thoroughly and intuitively as they do? Any significant PR will contain things you don't even notice or think to ask about. And the answers to the questions you do ask are the end result of a thought process you didn't go through and therefore also don't understand as deeply.

Back to the novel analogy, you could ask an author why he incorporated this or that character trait or plot point, but all the explanation in the world will not make you able to write the next chapter as well as he could.

24. Mofpof+yh[view] [source] 2025-06-02 23:15:15
>>pie_fl+(OP)
This is it. Reading AI slop does not form synapses in your brain like writing the code yourself does.
◧◩
25. Mofpof+Oh[view] [source] [discussion] 2025-06-02 23:16:46
>>skissa+z1
> why the hell did I do this? struggling to remember

- git blame

- always write good commit messages
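A minimal sketch of that workflow (the repo, file name, and commit message here are all made up for illustration; assumes git is on your PATH):

```shell
# Build a throwaway repo so the example is self-contained.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

# The commit message records the *why*, not just the *what*.
echo 'retry_limit = 7' > config.py
git add config.py
git commit -q -m 'Raise retry_limit to 7: upstream API drops requests in bursts under load'

# Years later: who touched this line, and why?
git blame -L 1,1 config.py            # commit hash + author for line 1
git log -1 --format=%s -- config.py   # the "why", straight from the commit message
```

The payoff only exists if the message actually explains the reasoning; "fix stuff" blames back to nothing useful.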

◧◩◪
26. Mofpof+qi[view] [source] [discussion] 2025-06-02 23:20:34
>>mrguyo+Q8
> Understanding code takes more effort than writing it

yes!

> somehow

not difficult to explain. Coding is a creative activity where you work top-down; you decompose the abstract/high-level into the detailed/low-level. You dictate the "events" happening to the code; you are in control. Reviewing is reactive; the code you review dictates what happens in your brain (it controls you, rather than you controlling it), and you have to work bottom-up: you try to re-compose the whole from the fragments. Even for human coders, a detailed commit message is a prerequisite before their code can undergo review. The reviewer is in the worse position, so he needs to be supported as much as possible.

◧◩
27. Mofpof+6j[view] [source] [discussion] 2025-06-02 23:26:13
>>mgracz+J5
This is actually a good reason to exit the industry before one's job goes away. Steering an AI to barf up the right-looking pool of vomit is not the flow-generating experience that many people started programming for.
replies(1): >>Gensho+Ih1
28. Schema+yl[view] [source] 2025-06-02 23:42:57
>>pie_fl+(OP)
Similar to almost-self-driving cars, where you are still responsible. You're asking someone to do nothing at all other than stay highly alert for long periods of time. That's just not how people work. There is no way someone can be ready to take over in an instant without actively engaging in the driving.
◧◩◪
29. ethagn+5x[view] [source] [discussion] 2025-06-03 01:26:48
>>Curren+s5
I haven't seen enough mention of using these tools to generate formal verification specs for their output, in something like TLA+. Of course, you're then stuck with the same problem of having to verify the specs, but you'll always be playing this game, and it seems like this would be one of the best, most reassuring ways to do so.

I'll have to look into this some more, but I'm very curious what the current state of the art is. I'm guessing it's not great, because so few people do this in the first place -- it's so tedious -- so there's probably not nearly enough training data for it to be practical to generate specs for a JavaScript GQL app or whatever these things are best at generating.
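Not TLA+, but the same verify-the-spec game in miniature (every name here is hypothetical): the executable spec is much shorter than the code it checks, which is what makes "verifying the verifier" tractable:

```python
# Hypothetical stand-in for AI-generated code.
def generated_clamp(x: int, lo: int, hi: int) -> int:
    return max(lo, min(x, hi))

# The "spec": a short, independently written property we actually care about.
def spec_holds(x: int, lo: int, hi: int) -> bool:
    y = generated_clamp(x, lo, hi)
    in_range = lo <= y <= hi                              # output stays in bounds
    unchanged_when_inside = (y == x) or not (lo <= x <= hi)  # in-bounds input passes through
    return in_range and unchanged_when_inside

# Exhaustively check a small input space; real model checkers generalize this idea.
assert all(spec_holds(x, 0, 10) for x in range(-20, 30))
```

You still have to trust the spec, but reading four lines of property is a far smaller burden than reading the implementation.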

◧◩◪
30. runeva+oA[view] [source] [discussion] 2025-06-03 01:59:13
>>Curren+s5
This comment reminds me of the old adage (I can't remember who it's credited to) that you should be careful not to use your full abilities when writing code, because you have to be more clever to debug code than you were to write it.

This type of issue is part of why I've never felt the appeal of LLMs. I want to understand my code because it came from my brain and my understanding, or to be able to say the same of a teammate whom I can then ask questions when I don't understand something.

replies(1): >>mining+Yh1
◧◩◪
31. the_sl+jB[view] [source] [discussion] 2025-06-03 02:10:52
>>Curren+s5
> does the human's job just become debugging the hardest things to debug

This is my current role, and one of the biggest reasons AI doesn't really help me day to day, agents or otherwise.

In my ideal world, AI becomes so proficient at writing code that it eventually develops its own formally verifiable programming language, purpose-built to be verifiable, so that there would be no room for unknown unknowns.

◧◩◪
32. Occams+lE[view] [source] [discussion] 2025-06-03 02:41:44
>>Legion+t6
Now this is where AI assisted coding shines in my opinion.

I find myself both:

- writing a comment so that Copilot knows what to do

- letting Copilot write my comment when it knows what I did

I'm now a lot more reliable with my comment writing.

33. 0x1ceb+tJ[view] [source] 2025-06-03 03:44:07
>>pie_fl+(OP)
Talking to an LLM feels like talking to Leonard Shelby from Memento. https://youtube.com/watch?v=Y3kNTvXVHvQ
◧◩◪
34. Gensho+Ih1[view] [source] [discussion] 2025-06-03 09:54:44
>>Mofpof+6j
There is room to move up the developer hierarchy at the company I work for, but I refuse to take that path for this very reason. The leadership has bought into AI as some kind of panacea, plus the company's plans to replace hundreds of human administrators in our B2C operations with AI strike me as downright evil.
◧◩◪◨
35. mining+Yh1[view] [source] [discussion] 2025-06-03 09:57:52
>>runeva+oA
I believe it was Brian Kernighan.
36. xnx+IS1[view] [source] 2025-06-03 14:25:29
>>pie_fl+(OP)
AI tools are elevating the developer to a higher level of abstraction: that of the engineering manager or product manager. Those roles do not need to be familiar with the code in that detail.
◧◩
37. BirAda+nr2[view] [source] [discussion] 2025-06-03 17:47:28
>>skissa+z1
5 years? You’re a genius. I can’t make sense of stuff I wrote last week.
◧◩◪
38. Anamon+5bd[view] [source] [discussion] 2025-06-08 01:24:52
>>roarch+83
Hunter S. Thompson claimed to have re-typed The Great Gatsby because he wanted to know what it feels like to write a great novel.

Apparently, he actually meant this as a somewhat serious piece of writing advice, but I still prefer my initial reading of it as sarcasm.
