zlacker

[parent] [thread] 8 comments
1. voxl+(OP)[view] [source] 2025-08-21 19:21:04
Except you can enforce this rule some of the time. People discover that AI was used or suspect it all the time, and people admit to it after some pressure all the time.

Not every time, but sometimes. The threat of being caught isn't meaningless. You can decide not to play in someone else's walled garden if you want but the least you can do is respect their rules, bare minimum of human decency.

replies(2): >>quotem+e1 >>pixl97+Ih
2. quotem+e1[view] [source] 2025-08-21 19:28:50
>>voxl+(OP)
It. doesn't. matter.

The only legitimate reason to make a rule is to produce some outcome. If your rule does not result in that outcome, of what use is the rule?

Will this rule result in people disclosing "AI" (whatever that means) contributions? Will it mitigate some kind of risk to the project? Will it lighten maintainer load?

No. It can't. People are going to use the tools anyway. You can't tell. You can't stop them. The only outcome you'll get out of a rule like this is making people incrementally less honest.

replies(5): >>recurs+R1 >>blaufu+k2 >>devmor+i3 >>nullc+vh >>eschat+gq
3. recurs+R1[view] [source] [discussion] 2025-08-21 19:32:22
>>quotem+e1
Sometimes you can tell.
4. blaufu+k2[view] [source] [discussion] 2025-08-21 19:35:06
>>quotem+e1
> Will it lighten maintainer load?

Yes, that is the stated purpose. Did you read the linked GitHub comment? The author lays out their points pretty well; you sound unreasonably upset about this. Are you submitting a lot of AI slop PRs or something?

P.S. Talking. Like. This. Is. Really. Ineffective. It. Makes. Me. Want. To. Disregard. Your. Point. Out. Of. Hand.

5. devmor+i3[view] [source] [discussion] 2025-08-21 19:40:07
>>quotem+e1
There are plenty of argumentative and opinionated reasons to say it matters, but there is one that can't really be denied - reviewers (and project maintainers, even if they aren't reviewers) are people whose time deserves to be respected.

If this rule discourages low quality PRs or allows reviewers to save time by prioritizing some non-AI-generated PRs, then it certainly seems useful in my opinion.

6. nullc+vh[view] [source] [discussion] 2025-08-21 20:57:53
>>quotem+e1
The utility of the rule is that you can cheaply nuke non-conforming contributors from orbit when you detect their undisclosed AI use, versus having to deal with the flood of low-quality contributions on an individually reviewed basis.
7. pixl97+Ih[view] [source] 2025-08-21 20:58:49
>>voxl+(OP)
Except the other way happens too.

You get someone who didn't use AI being accused of using it, eventually telling people to screw off, and contributing nothing.

replies(1): >>nullc+Eb1
8. eschat+gq[view] [source] [discussion] 2025-08-21 21:54:13
>>quotem+e1
You’re basically saying “if a rule can be broken, it will be, therefore rules are useless.”

If someone really wants to commit fraud, they're going to commit fraud (for example, by not disclosing AI use when a repository requires it). But if their fraud is discovered, they can still be punished for it, and mitigating actions taken. That's not nothing, and it does a lot to deter people from engaging in such fraud in the first place.

9. nullc+Eb1[view] [source] [discussion] 2025-08-22 06:08:54
>>pixl97+Ih
If their work was difficult to distinguish from AI output, then that sounds like a win too.