zlacker

[return to "AI tooling must be disclosed for contributions"]
1. Waterl+A3[view] [source] 2025-08-21 19:07:52
>>freeto+(OP)
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

2. cvoss+56[view] [source] 2025-08-21 19:24:50
>>Waterl+A3
It does matter how and where a PR comes from, because reviewers are fallible and finite, so trust enters the equation inevitably. You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.

If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for attempting to intentionally smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you / your PRs can't be trusted, they should not even be admitted to the review process.

3. otterl+0B[view] [source] 2025-08-21 22:27:54
>>cvoss+56
If it comes with good documentation and appropriate tests, does that help?
4. mattbe+kH[view] [source] 2025-08-21 23:08:27
>>otterl+0B
The observation that inspired this policy is that if you used AI, it is likely you don't know whether the code, the documentation, or the tests are good or appropriate.
5. otterl+qI[view] [source] 2025-08-21 23:17:41
>>mattbe+kH
What if you started with good documentation that you personally wrote, gave that to the agent, and then verified that the tests were appropriate and passed?
6. mattbe+VL[view] [source] 2025-08-21 23:41:26
>>otterl+qI
I'd extrapolate that the OP's view would be: you've still put in less effort, so your PR is less worthy of his attention than that of someone who'd done the same work without using LLMs.

That's a pretty nice offer from one of the most famous and accomplished free software maintainers in the world. He's promising not to take a short-cut reviewing your PR, in exchange for you not taking a short-cut writing it in the first place.

7. otterl+1P[view] [source] 2025-08-22 00:12:45
>>mattbe+VL
> in exchange for you not taking a short-cut writing it in the first place.

This “short-cut” language suggests that the quality of the submission will be objectively worse by way of its provenance.

Yet can one reliably distinguish working, tested code written by a person from that written by a machine? We’re well past passing Turing tests at this point.
