At which point it comes back to not allowing anything but the most locked-down clients, and disempowering users... and still failing, because any client can be turned into a spam bot with the most trivial application of AutoHotkey et al.
- The OS can trivially expose to the app whether events are coming from real hardware or another app, information the app can then either report or not report.
- The attested user-agent string given can be extended to include information about any scripts that are driving it, e.g. script hashes.
And so on. Then these things can have reputations computed over them. If there's a script hash that shows up reliably in spam, and never shows up in ham, then you can auto-mark those posts as spam. If the scripts aren't known, messages can be throttled until enough users have voted on whether they're spam or not. All this is fairly straightforward to code up, again, in a theoretical world in which operating systems expose information like whether events are emulated or not (today they don't).
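Here's a minimal sketch of that flow in Python, assuming a hypothetical attested report with a "script_hashes" field and a made-up vote threshold; none of this corresponds to any real API that exists today.

```python
# Hypothetical sketch only: there is no real attestation API here. The shape of
# the attested report (a "script_hashes" list) and the MIN_VOTES threshold are
# assumptions invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ScriptStats:
    spam_votes: int = 0
    ham_votes: int = 0

    @property
    def total(self) -> int:
        return self.spam_votes + self.ham_votes

MIN_VOTES = 50  # below this a script hash counts as "unknown" and gets throttled

reputation: dict[str, ScriptStats] = defaultdict(ScriptStats)

def record_vote(script_hash: str, is_spam: bool) -> None:
    """Fold one user's spam/ham vote into the per-script-hash reputation."""
    stats = reputation[script_hash]
    if is_spam:
        stats.spam_votes += 1
    else:
        stats.ham_votes += 1

def classify(attested_report: dict) -> str:
    """Return 'spam', 'throttle', or 'ham' for a message based on its attested metadata."""
    for script_hash in attested_report.get("script_hashes", []):
        stats = reputation[script_hash]
        if stats.total < MIN_VOTES:
            return "throttle"  # unknown script: hold back until enough users have voted
        if stats.ham_votes == 0:
            return "spam"      # shows up reliably in spam, never in ham: auto-mark
    return "ham"
```

Nothing in it needs BigTech infrastructure; the counters are small per-user state, exactly the kind of thing users could write and share themselves.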
The trick is that clients don't have to be locked down. The tech is fundamentally about letting you prove true statements, and those statements can be as complex as needed to allow whatever level of customization and control is desired. The more malleable clients are, the more complex it becomes to decide what is and isn't considered OK, but in a decentralized system that policy complexity is up to the end users themselves. They can share logic in the same way USENET users used to share killfiles.
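To make the killfile comparison concrete, a shared policy could be nothing more than a small data file of rules over the attested metadata. The rule schema below ("field", "equals", "action") and the example hash are made up for illustration; no real format is implied.

```python
# Purely hypothetical sketch of shareable policy rules, in the spirit of USENET
# killfiles. The rule schema and the example hash are invented for illustration.
import json

SHARED_RULES = json.loads("""
[
  {"field": "emulated_input", "equals": true, "action": "throttle"},
  {"field": "script_hashes",  "equals": "sha256:0000aa11", "action": "drop"}
]
""")

def apply_rules(attested_report: dict, rules: list[dict]) -> str:
    """Return the action of the first matching rule, or 'accept' if none match."""
    for rule in rules:
        value = attested_report.get(rule["field"])
        if isinstance(value, list):
            matched = rule["equals"] in value
        else:
            matched = value == rule["equals"]
        if matched:
            return rule["action"]
    return "accept"

# Usage: apply_rules({"emulated_input": True, "script_hashes": []}, SHARED_RULES) -> "throttle"
```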
Anyway, my point isn't to try and design a full system here; that's research-level stuff. It's only to point out that this brings spam/abuse control out of the BigTech-only world and back into the realm of small scripts that can be written and shared by users in a decentralized way.
And in a world that has zero outliers or unusual users. In reality, I guarantee my accessibility software would get flagged as emulated input (because it is) and marked as spam.
You didn't protect non-tech-savvy users at all; on the contrary, you introduced a point of failure for their devices. Some have customized devices which would need to be verified. Doesn't sound like a good idea at all.