This covers most of the points I'm sure many of us here have experienced while developing with AI. Most importantly, AI-generated code is no substitute for human thinking, testing, and cleanup/rewriting.
On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact even when it's correct. Adding indirection where it doesn't make sense is a big issue I've noticed with LLMs.
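To make that concrete, here's a made-up sketch of the kind of indirection I mean (hypothetical code, not taken from any actual Codex output): a trivial file read wrapped in a single-use class plus a factory, next to the compact version it usually gets rewritten into.

```python
import json


class ConfigLoader:
    """Hypothetical over-engineered version: a class for a one-line operation."""

    def __init__(self, path: str):
        self.path = path

    def load(self) -> dict:
        with open(self.path) as f:
            return json.load(f)


class ConfigLoaderFactory:
    """A factory whose only job is to call the constructor above."""

    @staticmethod
    def create(path: str) -> ConfigLoader:
        return ConfigLoader(path)


# What it usually collapses to in the rewrite: one direct function.
def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```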
However:
> AI-generated code is no substitute for human thinking, testing, and cleanup/rewriting.
Isn't that the end goal of these tools and the companies producing them?
According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.
Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.
That seems to be the goal, but they appear to be very far from achieving it.
One thing you should probably account for is that most of the proponents of these technologies are trying to sell you something. That doesn't mean there is no value in these tools, but the wild claims about their capabilities are just that.
You may hire a genius developer that's better than you at everything, and you still won't trust them blindly with work you are responsible for. In fact, the smarter they are than you, the less trusting you can afford to be.
It's one of those provisions that seem reasonable but really have no justification. It's an attempt to allow something while extracting a cost. If I am responsible for my code and am considered the author of the PR, then you as the recipient don't have a greater interest in knowing than my own personal preference not to disclose. There's never been any other requirement to disclose anything of this nature before. We don't require engineers to attest to the operating system or the licensing of the tools they use, so materially, outside your own prurient interest, how does it matter?
It is of course your responsibility, but the maintainer may also want to adjust their review approach when dealing with AI-generated code. And currently, as the AI Usage Policy also states, because of bad actors sending pull requests without reviewing them or taking responsibility themselves, this acts as a filter that separates out PRs like yours, where you have taken that responsibility.