This tradeoff of unfamiliarity with the codebase has been a well-understood problem for decades. Maintaining a project is where 99% of the time on a successful project is spent.
In my opinion though, having AI write the initial code is just putting most people in a worse situation with almost no upside long term.
(And, if you like, do as TFA says and rephrase the code into your own house style as you’re transcribing it. It’ll be better for it, and you’ll be mentally parsing the code you’re copying at a deeper level.)
I don’t know. Ever had the experience of looking at 5+ year old code and thinking “what idiot wrote this crap”, then checking “git blame” and realising “oh, I’m the idiot… why the hell did I do this?”, and struggling to remember? Given enough time, humans start to forget why they did things a certain way… and sometimes the answer is simply “I didn’t know any better at the time, I do now”
> You don't get that with AI. The codebase is always new.
It depends on how you use AI… e.g. I will often ask an AI to write me code to do X because it gets me over the “hump” of getting started… but once that code is in front of me on the screen I think “I don’t like how this code is written, I’m going to refactor it…”, and by the time I’m done it is more my code than the AI’s
> When using GenAI tools, the effort invested in critical thinking
> shifts from information gathering to information verification;
> from problem-solving to AI response integration; and from task
> execution to task stewardship.
[1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

That's dreadful. Not only is familiarity with the code not valued, it is impossible to build even for your own sake/sanity.
Writing code is easier than long-term maintenance. Any programmer can write more code than they will ever be able to maintain. Unless there are good AI tools helping with maintenance, there is no point in using generative tools for production code. From my experience, AI tools are great for prototyping or for optimizing procrastination.
I'm amazed that people don't see this. Absolutely nobody would claim that copying a novel is the same thing as writing a novel.
Shipping the org chart isn't the only way to solve this problem but it is one that can work. But if you don't acknowledge the relationship between those problems, AGI itself probably isn't going to help (partially sarcastic).
You are now responsible for learning how to use LLMs well. If an untrained vibe coder is more productive for me, while knowing nothing about how the code actually works, I will hire the vibe coder instead of you.
Learning is important, but it's most important that you learn how to use the best tools available so you can be productive. LLMs are not going away and they will only get better, so today that means you are responsible for learning how to use them, and that is already more important for many roles than learning how to code yourself.
Never mind other important values like resilience, adaptability, reliability, and scrutability. An AI writes a function foo() that does a thing correctly; who has the know-how to figure out whether foo() kills batteries, or under what conditions it could contribute to an ARP storm or disk thrashing, or what implicit hardware requirements it has?
How well does your team work when you can't even answer a simple question about your system because nobody wrote, tested, or played with the code in question?
How do you answer "Is it possible for our system to support split payments?" when not a single member of your team has even worked on the billing code?
No, code reviews do not familiarize an average dev with the code to the level of actually understanding it.
Which: of course you can. Not least because both your coworkers and these coding agents produce changes with explanatory comments on any lines for which the justification or logic is non-obvious; but also because — AI PR or otherwise — the PR consists of commits, and those commits have commit messages further explaining them. And — AI submitter or otherwise — you can interrogate the PR’s submitter in the PR’s associated discussion thread, asking the submitter to justify the decisions made, explain parts you’re suspicious of, etc.
When you think about it, presuming your average FOSS project with an open contribution model, a PR from an AI agent is going to be at least strictly more “knowable” than a “drive-by” PR by an external one-time contributor who doesn’t respond to discussion-thread messages. (And sure, that’s a low bar — but it’s one that the average accepted and merged contribution in many smaller projects doesn’t even clear!)
Back to the novel analogy, you could ask an author why he incorporated this or that character trait or plot point, but all the explanation in the world will not make you able to write the next chapter as well as he could.
- git blame
- always write good commit messages
yes!
> somehow
Not difficult to explain. Coding is a creative activity where you work top-down; you decompose the abstract/high-level into the detailed/low-level. You dictate the "events" happening to the code; you are in control. Reviewing is reactive; the code you review dictates what happens in your brain (you are being controlled, not in control), and you need to work bottom-up: you try to re-compose the whole from the fragments. Even for human coders, a detailed commit message is a prerequisite before their code can undergo review. The reviewer is in the worse position, so he needs to be supported as much as possible.
I'll have to look into this some more, but I'm very curious about what the current state of the art is. I'm guessing it's not great, because so few people do this in the first place -- it's so tedious -- and there's probably not nearly enough training data for it to be practical to generate specs for a JavaScript GQL app or whatever these things are best at generating.
This type of issue is part of why I've never felt the appeal of LLMs. I want to understand my code because it came from my brain and my understanding, or from a teammate whom I can ask questions when I don't understand something.
This is my current role, and one of the biggest reasons AI doesn't really help me day to day, agent or otherwise.
In my ideal world, AI becomes so proficient at writing code that it eventually develops its own formally verifiable programming language, purpose-built to be verifiable, so that there wouldn't be room for unknown unknowns.
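As a toy sketch of what "no room for unknown unknowns" could mean, here's a hypothetical example in Lean (mine, not anything such an AI language would actually look like): the contract is part of the type, so a definition that violates it simply doesn't compile.

  -- Hypothetical example: the return type itself promises the result is ≤ 255,
  -- and the definition only type-checks because that obligation is discharged.
  def clampToByte (n : Nat) : { m : Nat // m ≤ 255 } :=
    if h : n ≤ 255 then ⟨n, h⟩ else ⟨255, Nat.le_refl 255⟩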
I find myself both:
- writing a comment so that Copilot knows what to do
- letting Copilot write my comment when it knows what I did
I'm now a lot more reliable with my comment writing.
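As a made-up illustration of the first direction (the function name and behavior are hypothetical; the body is the sort of thing Copilot might produce from the comment):

  // Parse a duration string like "1h30m" into seconds; return null if the
  // input doesn't match the expected format.
  function parseDuration(input: string): number | null {
    const match = /^(?:(\d+)h)?(?:(\d+)m)?$/.exec(input);
    if (!match || (match[1] === undefined && match[2] === undefined)) {
      return null;
    }
    const hours = match[1] ? parseInt(match[1], 10) : 0;
    const minutes = match[2] ? parseInt(match[2], 10) : 0;
    return hours * 3600 + minutes * 60;
  }

Either way round, the comment and the code end up agreeing with each other.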
Apparently, he actually meant this as a somewhat serious piece of writing advice, but I still prefer my initial reading of it as sarcasm.