zlacker

[parent] [thread] 5 comments
1. quotem+(OP)[view] [source] 2025-08-21 19:28:50
It. doesn't. matter.

The only legitimate reason to make a rule is to produce some outcome. If your rule does not result in that outcome, of what use is the rule?

Will this rule result in people disclosing "AI" (whatever that means) contributions? Will it mitigate some kind of risk to the project? Will it lighten maintainer load?

No. It can't. People are going to use the tools anyway. You can't tell. You can't stop them. The only outcome you'll get out of a rule like this is making people incrementally less honest.

replies(5): >>recurs+D >>blaufu+61 >>devmor+42 >>nullc+hg >>eschat+2p
2. recurs+D[view] [source] 2025-08-21 19:32:22
>>quotem+(OP)
Sometimes you can tell.
3. blaufu+61[view] [source] 2025-08-21 19:35:06
>>quotem+(OP)
> Will it lighten maintainer load?

Yes, that is the stated purpose. Did you read the linked GitHub comment? The author lays out their points pretty well; you sound unreasonably upset about this. Are you submitting a lot of AI slop PRs or something?

P.S. Talking. Like. This. Is. Really. Ineffective. It. Makes. Me. Just. Want. To. Disregard. Your. Point. Out. Of. Hand.

4. devmor+42[view] [source] 2025-08-21 19:40:07
>>quotem+(OP)
There are plenty of argumentative and opinionated reasons to say it matters, but there is one that can't really be denied: reviewers (and project maintainers, even if they aren't reviewers) are people whose time deserves to be respected.

If this rule discourages low-quality PRs or allows reviewers to save time by prioritizing some non-AI-generated PRs, then it certainly seems useful in my opinion.

5. nullc+hg[view] [source] 2025-08-21 20:57:53
>>quotem+(OP)
The utility of the rule is that you can cheaply nuke non-conforming contributors from orbit when you detect their undisclosed AI use, vs. having to deal with the flood of low-quality contributions on an individually reviewed basis.
6. eschat+2p[view] [source] 2025-08-21 21:54:13
>>quotem+(OP)
You’re basically saying “if a rule can be broken, it will be, therefore rules are useless.”

If someone really wants to commit fraud they’re going to commit fraud. (For example, by not disclosing AI use when a repository requires it.) But if their fraud is discovered, they can still be punished for it, and mitigating actions taken. That’s not nothing, and does actually do a lot to prevent people from engaging in such fraud in the first place.
