zlacker

[parent] [thread] 19 comments
1. theshr+(OP)[view] [source] 2026-01-16 09:24:09
I've been trying to manifest Web of Trust coming back to help people navigate towards content that's created by humans.

A system where I can mark other people as trusted and see who they trust, so that when I navigate to a web page or, in this case, a GitHub pull request, my WoT would tell me whether this is a trusted person according to my network.

replies(7): >>jacque+z2 >>IsTom+86 >>thephy+wo >>jaunty+AK1 >>dizhn+mp3 >>solair+fL3 >>171862+xy4
2. jacque+z2[view] [source] 2026-01-16 09:52:31
>>theshr+(OP)
You need a very complex weighting and revocation mechanism, because once one bad player is in your web of trust they become a node through which other bad players and good players alike can join.
replies(4): >>embedd+Q3 >>theshr+B7 >>thephy+dp >>0ckpup+XN1
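The weighting-and-revocation mechanism described above can be sketched like this. This is a minimal illustration, not an existing WoT implementation; the class, the per-hop damping factor, and all names are hypothetical:

```python
class TrustGraph:
    """Sketch: weighted trust edges, transitive trust that decays per hop,
    and explicit revocation. All parameters are illustrative choices."""

    def __init__(self, damping=0.5):
        self.edges = {}          # truster -> {trustee: weight in [0, 1]}
        self.damping = damping   # how much trust decays per extra hop

    def trust(self, truster, trustee, weight=1.0):
        self.edges.setdefault(truster, {})[trustee] = weight

    def revoke(self, truster, trustee):
        self.edges.get(truster, {}).pop(trustee, None)

    def effective_trust(self, source, target):
        # Best score over any path: product of edge weights,
        # damped once per intermediate hop (simple DFS).
        best, stack = 0.0, [(source, 1.0, frozenset({source}))]
        while stack:
            node, acc, seen = stack.pop()
            for nxt, w in self.edges.get(node, {}).items():
                if nxt == target:
                    best = max(best, acc * w)
                elif nxt not in seen:
                    stack.append((nxt, acc * w * self.damping, seen | {nxt}))
        return best
```

With damping, a friend-of-a-friend is trusted less than a direct friend, and revoking one edge immediately cuts every path that ran through it.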
3. embedd+Q3[view] [source] [discussion] 2026-01-16 10:07:31
>>jacque+z2
Build a tree and cut it at the first bad link; now you get rid of all of them. There will be some collateral damage, but it's probably safe to assume that genuinely "good players" can rejoin at another, maybe more stable, leaf.
replies(1): >>jacque+e5
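The "cut at the first link" idea is straightforward to sketch for an invite tree: evicting the member who introduced a bad actor removes everyone downstream of that link. The mapping and names here are hypothetical:

```python
def evict_subtree(children, root):
    """Return the set of members removed when `root`'s branch is cut.
    `children` maps each member to the list of members they invited."""
    removed, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node in removed:
            continue
        removed.add(node)
        stack.extend(children.get(node, []))   # cascade to invitees
    return removed
```

The collateral damage the comment mentions is visible here: every invitee of the cut node goes too, good or bad.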
4. jacque+e5[view] [source] [discussion] 2026-01-16 10:21:22
>>embedd+Q3
It's a web, not a tree... so this is really not that simple.
replies(1): >>embedd+o6
5. IsTom+86[view] [source] 2026-01-16 10:31:30
>>theshr+(OP)
Unfortunately trust isn't transitive.
6. embedd+o6[view] [source] [discussion] 2026-01-16 10:33:39
>>jacque+e5
Yeah, that's the problem, and my suggestion is to change it from a web to a tree instead, to solve that issue.
replies(2): >>theshr+H7 >>jacque+4j
7. theshr+B7[view] [source] [discussion] 2026-01-16 10:47:22
>>jacque+z2
Then I can see who added that bad player and cut off everyone who trusted them (or decrease the trust level if the system allows that).
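Seeing "who added that bad player" amounts to walking the vouch chain back toward the root. A minimal sketch, assuming a simple inviter mapping (all names hypothetical):

```python
def vouch_chain(inviter, bad_actor):
    """Walk inviter links from `bad_actor` back to the root, returning
    everyone who (directly or transitively) vouched for them."""
    chain, node = [], bad_actor
    while node in inviter:
        node = inviter[node]
        chain.append(node)
    return chain
```

The returned chain is exactly the set of accounts to cut off, or to down-weight if the system supports graded trust.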
8. theshr+H7[view] [source] [discussion] 2026-01-16 10:48:10
>>embedd+o6
What is a web if not multiple trees that have interconnected branches? :)
replies(1): >>embedd+lc
9. embedd+lc[view] [source] [discussion] 2026-01-16 11:33:48
>>theshr+H7
In the end, it's all lists anyways :)
replies(1): >>foobar+074
10. jacque+4j[view] [source] [discussion] 2026-01-16 12:41:26
>>embedd+o6
That does not work, because you won't have multiple parties vouching for a new entrant. That's the whole reason a web was chosen instead of a tree in the first place. Trees are super fragile in comparison; bad actors would have a much bigger chance of going undetected in a tree-like arrangement.
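The multiple-vouchers point is the key structural difference: a tree gives each newcomer exactly one parent, while a web can demand several independent sponsors. A tiny sketch with an illustrative threshold (the value of k is arbitrary):

```python
def admit(vouchers, k=2):
    """Admit a newcomer only if at least k distinct members vouch.
    A strict tree cannot express this: each node has one parent."""
    return len(set(vouchers)) >= k
```

Requiring k > 1 means a single compromised member can no longer admit accomplices on their own.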
11. thephy+wo[view] [source] 2026-01-16 13:32:35
>>theshr+(OP)
I would go even further. I only want to see content created by people who are in a chain of trust with me.

AI slop is so cheap that it has created a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass “deception for profit” industry (e.g. industries where companies bot ratings/reviews for profit, and where social media accounts are created purely for rage bait / engagement farming).

But reputation in a WoT network has to be paramount. The invite system needs a “vouch”, so there are consequences for you and your upstream voucher if there is a breach of trust (e.g. lying, paid promotions, spamming). The consequences need to be far more severe than the marginal profit to be made from these breaches.
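The vouch-with-consequences idea can be sketched as a penalty that hits the offender in full and propagates, attenuated, up the chain of vouchers. The penalty fractions and names here are arbitrary illustrations, not a proposed tuning:

```python
def punish(reputation, inviter, offender, penalty=1.0, upstream_factor=0.5):
    """Slash the offender's reputation, then apply a geometrically
    shrinking penalty to each upstream voucher in turn."""
    node, p = offender, penalty
    while node is not None:
        reputation[node] = max(0.0, reputation.get(node, 1.0) - p)
        node = inviter.get(node)   # None once we pass the root
        p *= upstream_factor
    return reputation
```

Because the penalty exceeds the marginal gain and reaches your sponsors, vouching carelessly becomes expensive for everyone involved.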

12. thephy+dp[view] [source] [discussion] 2026-01-16 13:38:12
>>jacque+z2
Trust in the real world is not immutable. It is constantly re-evaluated. So the Web of Trust concept should do this as well.

Also, there needs to be some significant consequence to people who are bad actors and, transitively, to people who trust bad actors.

The hardest part isn’t figuring out how to cut off the low-quality nodes. It’s how to incentivize people to join a network where the consequences are so high that you really won’t want to violate trust. It can’t simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real-world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high friction.

replies(1): >>phopla+iu2
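The "constantly re-evaluated" point can be sketched as trust that decays toward a neutral baseline unless refreshed by new interactions. The half-life and baseline are illustrative parameters, not a recommendation:

```python
def decayed_trust(score, days_since_interaction, half_life_days=90, baseline=0.0):
    """Trust drifts toward `baseline` with a configurable half-life,
    so stale endorsements lose weight unless actively renewed."""
    decay = 0.5 ** (days_since_interaction / half_life_days)
    return baseline + (score - baseline) * decay
```

Under this model, an endorsement from years ago counts for little, which mirrors how trust works in the real world.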
13. jaunty+AK1[view] [source] 2026-01-16 19:58:58
>>theshr+(OP)
The AT Protocol (Bluesky) will, I hope, have better trust signals, since your Personal Data Server stores your microblog posts and a bunch of other data, and the data is public. It's much harder to convincingly fake being a cross-media human.

If someone showed up on an at-proto-powered book review site like https://bookhive.buzz and started trying to post nonsense reviews, or started running bots, it would be much more transparent what was afoot.

More explicit trust signalling would be very fun to add.

14. 0ckpup+XN1[view] [source] [discussion] 2026-01-16 20:15:36
>>jacque+z2
aka clown explosion
15. phopla+iu2[view] [source] [discussion] 2026-01-17 00:51:25
>>thephy+dp
I really don't want to expand the surveillance state...
replies(1): >>theshr+dO3
16. dizhn+mp3[view] [source] 2026-01-17 13:05:47
>>theshr+(OP)
Trust, and then do what with it? I trust Chomsky, but I can mark his interviews "Don't show" because I'm sick of them. Or the way Facebook lets you follow a 'friend' but ignore them. So: trust, and do what with that trust? A network of people who'll let each other move on short notice? Something like that?
17. solair+fL3[view] [source] 2026-01-17 16:06:51
>>theshr+(OP)
I've been thinking this exact thing! But it's too abstract a thought for me to try creating anything yet.

A curation network, one which uses SSL-style chain-of-trust (and RSS-style feeds maybe?) seems like it could be a solution, but I'm not able to advance the thought from just being an amorphous idea.
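The SSL-style chain-of-trust idea for curation can be sketched as: accept content only if its author's key chains back to a root you already trust. The `issued_by` mapping stands in for certificate signatures; all names are hypothetical:

```python
def chain_to_root(issued_by, author, trusted_roots, max_depth=10):
    """Walk endorsement links from `author` upward; accept if a trusted
    root is reached within `max_depth` hops (guards against cycles)."""
    node = author
    for _ in range(max_depth):
        if node in trusted_roots:
            return True
        if node not in issued_by:
            return False
        node = issued_by[node]
    return False
```

A feed reader could run this check per item, showing only content whose author chains to one of your chosen curators.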

18. theshr+dO3[view] [source] [discussion] 2026-01-17 16:26:02
>>phopla+iu2
Are GPG key-signing parties part of the “surveillance state”?

That is exactly what this system needs.

19. foobar+074[view] [source] [discussion] 2026-01-17 18:20:08
>>embedd+lc
Well - lists of tuples. Otherwise known as a graph :)
20. 171862+xy4[view] [source] 2026-01-17 21:13:08
>>theshr+(OP)
The problem is that even the people I would happily take advice from when meeting them in real life occasionally mindlessly copy AI output about subjects they don't know about. And they see nothing wrong with it.