zlacker

1. arjunb+X8 2026-01-23 11:02:26
>>mefeng+(OP)
I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI-generated code does not substitute for human thinking, testing, and cleanup/rewriting.

On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it was correct. Adding indirection where it makes no sense is a big issue I've noticed with LLMs.

2. ottah+fd2 2026-01-23 22:44:47
>>arjunb+X8
Everything except the first provision is reasonable. IMO it's none of your damn business how I wrote the code, only that I understand it and am responsible for it.

It's one of those provisions that seems reasonable but really has no justification. It's an attempt to allow something while extracting a cost. If I am responsible for my code and am considered the author in the PR, then you as the recipient have no greater interest in knowing than my own personal preference not to disclose. There's never been any other requirement to disclose anything of this nature before. We don't require engineers to attest to the operating system or the licensing of the tools they use, so apart from your own prurient interests, how does it matter?
