Not every time, but sometimes. The threat of being caught isn't meaningless. You can decide not to play in someone else's walled garden if you want, but the least you can do is respect their rules; that's the bare minimum of human decency.
The only legitimate reason to make a rule is to produce some outcome. If your rule does not result in that outcome, of what use is the rule?
Will this rule result in people disclosing "AI" (whatever that means) contributions? Will it mitigate some kind of risk to the project? Will it lighten maintainer load?
No. It can't. People are going to use the tools anyway. You can't tell. You can't stop them. The only outcome you'll get out of a rule like this is making people incrementally less honest.
Yes, that is the stated purpose. Did you read the linked GitHub comment? The author lays out their points pretty well; you sound unreasonably upset about this. Are you submitting a lot of AI slop PRs or something?
P.S. Talking. Like. This. Is. Really. Ineffective. It. Makes. Me. Just. Want. To. Disregard. Your. Point. Out. Of. Hand.
If this rule discourages low-quality PRs or lets reviewers save time by prioritizing non-AI-generated PRs, then it seems useful to me.
You get someone who didn't use AI being accused of using it, eventually telling people to screw off and contributing nothing.
If someone really wants to commit fraud, they're going to commit fraud. (For example, by not disclosing AI use when a repository requires it.) But if their fraud is discovered, they can still be punished for it, and mitigating actions can be taken. That's not nothing, and it actually does a lot to deter people from engaging in such fraud in the first place.