zlacker

[parent] [thread] 34 comments
1. oneeye+(OP)[view] [source] 2026-01-16 09:07:58
We've enjoyed a certain period (at least a couple of decades) of global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life, from open-source to journalism and job interviews.
replies(5): >>theshr+A1 >>embedd+j5 >>agumon+X7 >>globul+Gf >>jruoho+oY2
2. theshr+A1[view] [source] 2026-01-16 09:24:09
>>oneeye+(OP)
I've been trying to manifest Web of Trust coming back to help people navigate towards content that's created by humans.

A system where I can mark other people as trusted and see who they trust, so when I navigate to a web page or in this case, a Github pull request, my WoT would tell me if this is a trusted person according to my network.
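A minimal sketch of that kind of lookup, assuming trust edges are stored as a simple directed graph (the names and the `TRUST` table are hypothetical):

```python
from collections import deque

# Directed trust edges: who -> the set of people they trust (hypothetical data).
TRUST = {
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": {"carol", "dave"},
    "carol": {"eve"},
}

def trust_distance(me, target, max_hops=3):
    """Breadth-first search over trust edges; returns the hop count to
    `target` within `max_hops`, or None if they are outside the network."""
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return hops
        if hops == max_hops:
            continue
        for friend in TRUST.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None

print(trust_distance("me", "carol"))    # 2 (trusted via alice or bob)
print(trust_distance("me", "nobody"))   # None (outside the network)
```

A browser extension could run exactly this query against your network when you land on a pull request and color the author by hop count.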

replies(7): >>jacque+94 >>IsTom+I7 >>thephy+6q >>jaunty+aM1 >>dizhn+Wq3 >>solair+PM3 >>171862+7A4
◧◩
3. jacque+94[view] [source] [discussion] 2026-01-16 09:52:31
>>theshr+A1
You need a very complex weighting and revocation mechanism, because once one bad player is in your web of trust, they become a node through which both other bad players and good players alike can join.
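One common way to express that weighting is to give each vouch a confidence and multiply along the chain, so trust decays with every hop and revoking one edge zeroes out everything downstream of it. A sketch with hypothetical names and weights, not a full mechanism:

```python
# Hypothetical weighted vouches: each edge carries a confidence in [0, 1].
WEIGHTS = {
    ("me", "alice"): 0.9,
    ("alice", "mallory"): 0.8,   # alice misjudged mallory
    ("mallory", "trudy"): 1.0,   # mallory vouches for another bad actor
}

def path_trust(path):
    """Trust along a chain is the product of edge weights, so it shrinks
    with every hop and collapses if any single edge is revoked."""
    score = 1.0
    for a, b in zip(path, path[1:]):
        score *= WEIGHTS.get((a, b), 0.0)
    return score

print(path_trust(["me", "alice", "mallory", "trudy"]))  # ~0.72 before revocation
WEIGHTS[("alice", "mallory")] = 0.0  # revoke the one bad vouch...
print(path_trust(["me", "alice", "mallory", "trudy"]))  # 0.0: downstream collapses
```

This is roughly the shape of GnuPG's ownertrust idea: distrust one introducer and every chain through them dies at once.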
replies(4): >>embedd+q5 >>theshr+b9 >>thephy+Nq >>0ckpup+xP1
4. embedd+j5[view] [source] 2026-01-16 10:06:26
>>oneeye+(OP)
> global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life

I don't think it's coming to an end. It's getting more difficult, yes, but not impossible. Currently I'm working on a game, and since I'm not an artist, I pay artists to create the art. The person I work most closely with, I have basically no idea who they are, except their name, email and the country they live in. Otherwise it's basically "they send me a draft > I review/provide feedback > iterate until done > I send them money", and both of us know basically nothing about the other.

I agree that trust in the individual is becoming more important, but it's always been one of the most important things for collaboration or anything that involves other human beings. We've tried to move that trust to other systems, but it seems we're only able to move the trust to the people building and maintaining those systems, instead of getting rid of it completely.

Maybe "trust" is just here to stay, and we'd all be better off as soon as we start to realize this, reconnect with the people around us, and connect with the people on the other side of the world.

replies(3): >>contra+fb >>willis+Pf >>thephy+Xm
◧◩◪
5. embedd+q5[view] [source] [discussion] 2026-01-16 10:07:31
>>jacque+94
Build a tree, cut the tree at the first bad link, and you get rid of all of them. It will have some collateral damage, but it's maybe safe to assume that actual "good players" can rejoin at another, more stable leaf.
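The cut described above amounts to deleting the subtree rooted at the compromised node. A sketch, assuming a simple parent-to-children vouch tree with hypothetical names:

```python
# Hypothetical tree of vouches: parent -> children they invited.
TREE = {
    "root": ["alice", "mallory"],
    "alice": ["bob"],
    "mallory": ["trudy", "carol"],  # carol is the collateral damage
}

def prune(tree, bad):
    """Drop `bad` and everyone vouched for through them (iterative DFS)."""
    doomed = {bad}
    stack = [bad]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            doomed.add(child)
            stack.append(child)
    return {parent: [c for c in kids if c not in doomed]
            for parent, kids in tree.items() if parent not in doomed}

print(prune(TREE, "mallory"))  # {'root': ['alice'], 'alice': ['bob']}
```

Note how `carol` is cut along with `trudy` even if she did nothing wrong, which is exactly the collateral damage the comment concedes.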
replies(1): >>jacque+O6
◧◩◪◨
6. jacque+O6[view] [source] [discussion] 2026-01-16 10:21:22
>>embedd+q5
It's a web, not a tree... so this is really not that simple.
replies(1): >>embedd+Y7
◧◩
7. IsTom+I7[view] [source] [discussion] 2026-01-16 10:31:30
>>theshr+A1
Unfortunately trust isn't transitive.
8. agumon+X7[view] [source] 2026-01-16 10:33:19
>>oneeye+(OP)
trust in trust... as a programmer would say

the web brought instant, infinite 'data'. we used to have limits, limits that would kind of ensure the reality of what was communicated.. we should go back to that; it's efficient

◧◩◪◨⬒
9. embedd+Y7[view] [source] [discussion] 2026-01-16 10:33:39
>>jacque+O6
Yeah, that's the problem, and my suggestion is to change it from a web to a tree instead, to solve that issue.
replies(2): >>theshr+h9 >>jacque+Ek
◧◩◪
10. theshr+b9[view] [source] [discussion] 2026-01-16 10:47:22
>>jacque+94
Then I can see who added that bad player and cut off everyone who trusted them (or decrease the trust level if the system allows that).
◧◩◪◨⬒⬓
11. theshr+h9[view] [source] [discussion] 2026-01-16 10:48:10
>>embedd+Y7
What is a web if not multiple trees that have interconnected branches? :)
replies(1): >>embedd+Vd
◧◩
12. contra+fb[view] [source] [discussion] 2026-01-16 11:11:02
>>embedd+j5
Your tone is one of disagreement, but it's not clear why.

There is an individual who you trust to do good work, and who works well with you. They're not anonymous. Addressing the topic of this thread, you know (or should know) that it is not AI slop.

That is a significant amount of knowledge and trust in an individual, and the very point I thought the GP was making.

◧◩◪◨⬒⬓⬔
13. embedd+Vd[view] [source] [discussion] 2026-01-16 11:33:48
>>theshr+h9
In the end, it's all lists anyways :)
replies(1): >>foobar+A84
14. globul+Gf[view] [source] 2026-01-16 11:49:44
>>oneeye+(OP)
Some projects, like Linux (the kernel) have always been developed that way. Linus has described the trust model in the kernel to be very much "web of trust". You don't just submit patches directly to Linus, you submit them to module maintainers who are trusted by subsystem maintainers and who are all ultimately, indirectly trusted by the branch maintainer (Linus).
◧◩
15. willis+Pf[view] [source] [discussion] 2026-01-16 11:51:06
>>embedd+j5
How do you know it's a person on the other end? Would you even see a difference if you had a computer generate that art?

These are very important questions that cut to the heart of "what is art".

replies(1): >>embedd+Zk
◧◩◪◨⬒⬓
16. jacque+Ek[view] [source] [discussion] 2026-01-16 12:41:26
>>embedd+Y7
That doesn't work, because you won't have multiple parties vouching for a new entrant. That's the whole reason a web was chosen instead of a tree in the first place. Trees are super fragile in comparison; bad actors would have a much bigger chance of going undetected in a tree-like arrangement.
◧◩◪
17. embedd+Zk[view] [source] [discussion] 2026-01-16 12:43:46
>>willis+Pf
> How do you know it's a person on the other end? Would you even see a difference if you had a computer generate that art?

Unless AI companies have already developed and launched plugins/extensions that let people produce something that looks like hand-drawn sketches inside Clip Studio, and have suddenly gotten a lot better at understanding prompts (including having inspiration of their own), I'm pretty sure it's a human.

I also don't think I'd get to see in-progress sketches, and it wouldn't be as good at understanding what I wanted changed. I've used various generative AI image generators (most recently Qwen Image 2511, and a whole bunch of others) and none of them, even with "prompt enhancements", can take very vague descriptions like "I want it to feel like X" or "I'm not sure about Y, but something like Z" and turn them into something that looks acceptable. At least not yet.

And because I've spent a lot of time with various generative image making processes and models, I'm fairly confident I'd recognize if that was what was happening.

replies(1): >>willis+MB
◧◩
18. thephy+Xm[view] [source] [discussion] 2026-01-16 13:03:10
>>embedd+j5
I think it absolutely is coming to an end in lots of ways.

Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

The better AI gets at producing slop, and at controlling bots to create slop that is indistinguishable from human content, the less people will trust content on those platforms.

Your trust relationship with your artist almost certainly was based on something other than just contact info. Usually you review a portfolio, a professional profile, and you start with a small project to limit your downside risk. This tentative relationship and phased stages where trust is increased is how human trust relationships have always worked.

replies(1): >>embedd+Vn
◧◩◪
19. embedd+Vn[view] [source] [discussion] 2026-01-16 13:11:18
>>thephy+Xm
> Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc all get gamed. An entire industry of booting reviews has sprung up from PR companies brigading positive reviews for their clients.

But that's been true for a long time, unrelated to AI. When Amazon first became available here in Spain (I don't remember exactly what year, but before LLMs for sure), the amount of fraudulent reviews filling the platform was already noticeable.

That industry you're talking about might have gotten new wings with LLMs, but it wasn't spawned by them; it existed a long time before that.

> the less people will trust content on those platforms.

Maybe I'm jaded from using the internet from a young age, but both my peers and I basically have a built-in mistrust of random stuff we see on the internet, at least compared to our parents and our younger peers.

"Don't believe everything you see on the internet" has been a mantra for almost as long as the internet has existed; maybe people forgot and needed a reminder, but it was never not true.

replies(1): >>thephy+2s
◧◩
20. thephy+6q[view] [source] [discussion] 2026-01-16 13:32:35
>>theshr+A1
I would go even further. I only want to see content created by people who are in a chain of trust with me.

AI slop is so cheap that it has created a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass “deception for profit” industry (eg. Industries where companies bot ratings/reviews to profit and where social media accounts are created purely for rage bait / engagement farming).

But reputation in a WoT network has to be paramount. The invite system needs a “vouch” so there are consequences to you and your upstream vouch if there is a breach of trust (eg. lying, paid promotions, spamming). Consequences need to be far more severe than the marginal profit to be made from these breaches.

◧◩◪
21. thephy+Nq[view] [source] [discussion] 2026-01-16 13:38:12
>>jacque+94
Trust in the real world is not immutable. It is constantly re-evaluated. So the Web of Trust concept should do this as well.

Also, there needs to be some significant consequence to people who are bad actors and, transitively, to people who trust bad actors.

The hardest part isn't figuring out how to cut off the low-quality nodes. It's how to incentivize people to join a network where the consequences are so high that you really won't want to violate trust. It can't simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real-world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high-friction.
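The constant re-evaluation could be as simple as letting a trust score decay over time unless it is re-affirmed. A sketch, where the 90-day half-life is an assumed policy knob, not anything from the thread:

```python
import time

# Hypothetical policy: trust halves every 90 days unless re-affirmed.
HALF_LIFE = 90 * 24 * 3600  # seconds

def current_trust(base_score, last_affirmed, now=None):
    """Exponentially decay a trust score by how long ago it was affirmed."""
    now = time.time() if now is None else now
    age = max(0.0, now - last_affirmed)
    return base_score * 0.5 ** (age / HALF_LIFE)

# A vouch affirmed exactly one half-life ago is worth half as much.
print(current_trust(1.0, last_affirmed=0, now=HALF_LIFE))  # 0.5
```

Decay handles silent drift; the severe consequences for active breaches the comment asks for would still need an explicit revocation on top of this.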

replies(1): >>phopla+Sv2
◧◩◪◨
22. thephy+2s[view] [source] [discussion] 2026-01-16 13:46:54
>>embedd+Vn
LLMs reduce the marginal cost per unit of content.

When snail mail had a cost floor of $0.25 for the price of postage, email was basically free. You might get 2-3 daily pieces of junk mail in your house’s mailbox, but you would get hundreds or thousands in your email inbox. Slop comes at scale. LLMs didn’t invent spam, but they are making it easier to create more variants of it, and possibly ones that convert better than procedurally generated pieces.

There’s a difference between your cognitive brain and your lizard brain. You can tell yourself that mantra, but still occasionally fall prey to spam content. The people who make spam have a financial incentive to abuse the heuristics/signals you use to determine the authenticity of a piece of content in the same way cheap knockoffs of Rolex watches, Cartier jewelry, or Chanel handbags have to make the knockoffs appear as authentic as possible.

replies(1): >>pixl97+Zm1
◧◩◪◨
23. willis+MB[view] [source] [discussion] 2026-01-16 14:50:34
>>embedd+Zk
Sure, it's true today. Entertain the hypothetical though because this is what the trillion dollar rush is aspiring to do in the near future. We should be thinking about our answers now.
replies(1): >>embedd+BC
◧◩◪◨⬒
24. embedd+BC[view] [source] [discussion] 2026-01-16 14:55:03
>>willis+MB
Answers to what? Do I care what tools the artist uses, as long as I get the results I want? I don't understand what you see as the issue: that I somehow think I'm working with a human when it's actually a machine?
replies(1): >>willis+ebb
◧◩◪◨⬒
25. pixl97+Zm1[view] [source] [discussion] 2026-01-16 18:18:29
>>thephy+2s
>When snail mail had a cost floor of $0.25 for the price of postage

Hence I suspect that quite a few of these interfaces that are now being spammed with AI crap will end up implementing something that represents a fee, a paywall, or a trustwall. That should keep armies of AI slop responses from being worthwhile.

How we do that without killing some communities is yet to be seen.

◧◩
26. jaunty+aM1[view] [source] [discussion] 2026-01-16 19:58:58
>>theshr+A1
The AT Protocol (Bluesky) will, I hope, have better trust signals, since your Personal Data Server stores your microblog/posts and a bunch of other data. And the data is public. It's much harder to convincingly fake being a cross-media human.

If someone showed up on an atproto-powered book review site like https://bookhive.buzz and started trying to post nonsense reviews, or started running bots, it would be much more transparent what was afoot.

More explicit trust signalling would be very fun to add.

◧◩◪
27. 0ckpup+xP1[view] [source] [discussion] 2026-01-16 20:15:36
>>jacque+94
aka clown explosion
◧◩◪◨
28. phopla+Sv2[view] [source] [discussion] 2026-01-17 00:51:25
>>thephy+Nq
I really don't want to expand the surveillance state...
replies(1): >>theshr+NP3
29. jruoho+oY2[view] [source] 2026-01-17 07:12:46
>>oneeye+(OP)
> Trust in the individual is going to become more important in many areas of life, from open-source to journalism and job interviews.

I'd add science here too.

◧◩
30. dizhn+Wq3[view] [source] [discussion] 2026-01-17 13:05:47
>>theshr+A1
Trust, and do what with it, though? I trust Chomsky, but I can mark his interviews "Don't show" because I'm sick of them. Or, like Facebook, let you follow a 'friend' but ignore them. So: trust, and do what with that trust? A network of people who'll let each other move on short notice? Something like that?
◧◩
31. solair+PM3[view] [source] [discussion] 2026-01-17 16:06:51
>>theshr+A1
I've been thinking this exact thing! But it's too abstract a thought for me to try creating anything yet.

A curation network, one which uses SSL-style chain-of-trust (and RSS-style feeds maybe?) seems like it could be a solution, but I'm not able to advance the thought from just being an amorphous idea.

◧◩◪◨⬒
32. theshr+NP3[view] [source] [discussion] 2026-01-17 16:26:02
>>phopla+Sv2
Are GPG signing parties part of the “surveillance state”?

They are exactly the thing this system needs.

◧◩◪◨⬒⬓⬔⧯
33. foobar+A84[view] [source] [discussion] 2026-01-17 18:20:08
>>embedd+Vd
Well - lists of tuples. Otherwise known as a graph :)
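In the spirit of the joke, the web really can be flattened to a list of tuples (an edge list) and rebuilt into an adjacency map; names here are hypothetical:

```python
# The web of trust as an edge list: just a list of (truster, trustee) tuples.
edges = [("me", "alice"), ("alice", "bob"), ("me", "carol")]

# Rebuild the adjacency "web" from the flat list.
web = {}
for truster, trustee in edges:
    web.setdefault(truster, []).append(trustee)

print(web)  # {'me': ['alice', 'carol'], 'alice': ['bob']}
```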
◧◩
34. 171862+7A4[view] [source] [discussion] 2026-01-17 21:13:08
>>theshr+A1
The problem is that even the people I would happily take advice from when meeting them in real life occasionally mindlessly copy AI output about subjects they don't know, and they see nothing wrong with it.
◧◩◪◨⬒⬓
35. willis+ebb[view] [source] [discussion] 2026-01-20 03:18:14
>>embedd+BC
The deceit isn't the issue. The issue is that the person you're paying is going to be undercut this year by machines, along with probably 100M other people selling their labor.