A system where I can mark other people as trusted and see who they trust, so that when I navigate to a web page, or in this case a GitHub pull request, my WoT would tell me whether this is a trusted person according to my network.
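A minimal sketch of that lookup, assuming a simple in-memory graph (all names here are hypothetical, not any existing library): I record who I trust directly, and a breadth-first walk tells me whether an unknown account, such as a PR author, is reachable through my network and at what distance.

```typescript
type AccountId = string;

class TrustGraph {
  private edges = new Map<AccountId, Set<AccountId>>();

  /** Record that `from` explicitly trusts `to`. */
  trust(from: AccountId, to: AccountId): void {
    if (!this.edges.has(from)) this.edges.set(from, new Set());
    this.edges.get(from)!.add(to);
  }

  /**
   * Breadth-first search from `me`, returning the hop distance to `target`
   * (1 = trusted directly, 2 = trusted by someone I trust, ...) or null if
   * the target is not reachable within `maxHops`.
   */
  trustDistance(me: AccountId, target: AccountId, maxHops = 3): number | null {
    if (me === target) return 0;
    const visited = new Set<AccountId>([me]);
    let frontier: AccountId[] = [me];
    for (let hop = 1; hop <= maxHops; hop++) {
      const next: AccountId[] = [];
      for (const node of frontier) {
        for (const neighbor of this.edges.get(node) ?? []) {
          if (visited.has(neighbor)) continue;
          if (neighbor === target) return hop;
          visited.add(neighbor);
          next.push(neighbor);
        }
      }
      frontier = next;
    }
    return null;
  }
}

// Usage: is the author of this pull request in my network?
const graph = new TrustGraph();
graph.trust("me", "alice");
graph.trust("alice", "bob");
console.log(graph.trustDistance("me", "bob"));      // 2 (trusted via alice)
console.log(graph.trustDistance("me", "stranger")); // null (outside my WoT)
```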
AI slop is so cheap that it has become a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass "deception for profit" industry (e.g. companies that bot ratings and reviews for profit, or social media accounts created purely for rage bait and engagement farming).
But reputation in a WoT network has to be paramount. The invite system needs a "vouch" so there are consequences to you and your upstream voucher if there is a breach of trust (e.g. lying, paid promotions, spamming). The consequences need to be far more severe than the marginal profit to be made from these breaches.
Also, there need to be significant consequences for people who are bad actors and, transitively, for people who trust bad actors.
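One way to make that concrete, as a rough sketch (names and numbers are purely illustrative): each account records who vouched for it at invite time, and when a breach is confirmed, the offender takes the full penalty while each upstream voucher takes a decayed share, so vouching for a bad actor is itself costly.

```typescript
type AccountId = string;

interface Account {
  id: AccountId;
  vouchedBy: AccountId | null; // who invited / vouched for this account
  reputation: number;
}

const accounts = new Map<AccountId, Account>();

function addAccount(id: AccountId, vouchedBy: AccountId | null): void {
  accounts.set(id, { id, vouchedBy, reputation: 100 });
}

/**
 * Apply a penalty to the offender, then walk up the vouch chain applying a
 * geometrically decaying share (here 50% per hop) to each upstream voucher.
 */
function penalize(offender: AccountId, penalty: number, decay = 0.5): void {
  let current = accounts.get(offender);
  let share = penalty;
  while (current && share >= 1) {
    current.reputation -= share;
    share *= decay;
    current = current.vouchedBy ? accounts.get(current.vouchedBy) : undefined;
  }
}

// Usage: root vouched for alice, alice vouched for a spammer.
addAccount("root", null);
addAccount("alice", "root");
addAccount("spammer", "alice");
penalize("spammer", 80);
console.log(accounts.get("spammer")!.reputation); // 20 (took the full hit)
console.log(accounts.get("alice")!.reputation);   // 60 (lost 40 for vouching)
console.log(accounts.get("root")!.reputation);    // 80 (lost 20, two hops up)
```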
The hardest part isn’t figuring out how to cut off the low-quality nodes. It’s how to incentivize people to join a network where the consequences are so high that you really won’t want to violate trust. It can’t simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real-world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high-friction.
If someone showed up on an at-proto-powered book review site like https://bookhive.buzz and started posting nonsense reviews, or started running bots, it would be much more transparent what was afoot.
More explicit trust signalling would be very fun to add.
A curation network, one which uses an SSL-style chain of trust (and maybe RSS-style feeds?), seems like it could be a solution, but I’m not able to advance the thought beyond an amorphous idea.
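To make the "SSL-style chain of trust" part slightly less amorphous, here is one possible sketch (the data shapes are assumptions, not a defined protocol): a curator's feed entry is signed with their key, and that key is endorsed, link by link, back to a root key the reader already trusts, analogous to verifying a certificate chain up to a trusted CA. This uses Node's built-in ed25519 signing.

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

interface Endorsement {
  subjectKey: KeyObject; // public key being endorsed
  signature: Buffer;     // issuer's signature over the subject key
}

function makeKeyPair() {
  return generateKeyPairSync("ed25519");
}

/** Issuer endorses (signs) a subject's public key, like issuing a certificate. */
function endorse(issuerPrivateKey: KeyObject, subjectKey: KeyObject): Endorsement {
  const subjectBytes = subjectKey.export({ type: "spki", format: "der" });
  return { subjectKey, signature: sign(null, subjectBytes, issuerPrivateKey) };
}

/** Walk the chain from a trusted root: each link must be signed by the previous key. */
function verifyChain(rootKey: KeyObject, chain: Endorsement[]): boolean {
  let issuerKey = rootKey;
  for (const link of chain) {
    const subjectBytes = link.subjectKey.export({ type: "spki", format: "der" });
    if (!verify(null, subjectBytes, issuerKey, link.signature)) return false;
    issuerKey = link.subjectKey;
  }
  return true;
}

// Usage: a root I trust endorses a curator; the curator signs a feed entry.
const root = makeKeyPair();
const curator = makeKeyPair();
const chain = [endorse(root.privateKey, curator.publicKey)];

const entry = Buffer.from("review: this book is worth reading");
const entrySig = sign(null, entry, curator.privateKey);

const trusted =
  verifyChain(root.publicKey, chain) &&
  verify(null, entry, curator.publicKey, entrySig);
console.log(trusted); // true: the entry traces back to a key my WoT already trusts
```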
It is the exact thing this system needs.