A system where I can mark other people as trusted and see who they trust, so when I navigate to a web page or, in this case, a GitHub pull request, my WoT would tell me whether this is a trusted person according to my network.
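A minimal sketch of that lookup, assuming the web of trust is just a directed "X trusts Y" graph; the names and the hop limit here are invented for illustration:

    from collections import deque

    # Directed "who trusts whom" edges; purely illustrative data.
    trust_edges = {
        "me": {"alice", "bob"},
        "alice": {"carol"},
        "bob": {"dave"},
        "carol": {"mallory"},
    }

    def is_trusted(root, target, max_hops=2):
        # Breadth-first walk of the trust graph: is `target` reachable
        # from `root` within `max_hops` trust edges?
        seen, queue = {root}, deque([(root, 0)])
        while queue:
            person, hops = queue.popleft()
            if person == target:
                return True
            if hops == max_hops:
                continue
            for friend in trust_edges.get(person, set()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
        return False

    print(is_trusted("me", "carol"))    # True: me -> alice -> carol (2 hops)
    print(is_trusted("me", "mallory"))  # False: 3 hops away, over the limit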
I don't think it's coming to an end. It's getting more difficult, yes, but not impossible. Currently I'm working on a game, and since I'm not an artist, I pay artists to create the art. The person I'm working most closely with, I have basically no idea who they are beyond their name, email, and the country they live in. Otherwise it's basically "they send me a draft > I review/provide feedback > iterate until done > I send them money", and both of us know basically nothing about the other.
I agree that trust in the individual is becoming more important, but it's always been one of the most important things for collaborations or anything that involves other human beings. We've tried to move that trust to other systems, but it seems we're only able to shift the trust to the people building and maintaining those systems, rather than getting rid of it completely.
Maybe "trust" is just here to stay, and we'd all be better off once we realize this, reconnect with the people around us, and connect with the people on the other side of the world.
The web brought instant, infinite "data". We used to have limits, limits that would kind of ensure the reality of what was communicated. We should go back to that; it's efficient.
There is an individual who you trust to do good work, and who works well with you. They're not anonymous. Addressing the topic of this thread, you know (or should know) that it is not AI slop.
That is a significant amount of knowledge and trust in an individual, and the very point I thought the GP was making.
These are very important questions that cut to the heart of "what is art".
Unless AI companies have already developed and launched plugins/extensions that let people produce something that looks like hand-drawn sketches inside Clip Studio, and have suddenly gotten a lot better at understanding prompts (including having inspiration of their own), I'm pretty sure it's a human.
If it were AI, I don't think I'd get to see in-progress sketches, and it wouldn't be as good at understanding what I wanted changed. I've used various generative AI image generators (the latest being Qwen Image 2511, among a whole bunch of others), and none of them, even with "prompt enhancements", can take very vague descriptions like "I want it to feel like X" or "I'm not sure about Y, but something like Z" and turn them into something that looks acceptable. At least not yet.
And because I've spent a lot of time with various generative image making processes and models, I'm fairly confident I'd recognize if that was what was happening.
Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.
The better AI gets at producing slop, and at controlling bots that create slop indistinguishable from human content, the less people will trust content on those platforms.
Your trust relationship with your artist almost certainly was based on something other than just contact info. Usually you review a portfolio, a professional profile, and you start with a small project to limit your downside risk. This tentative relationship and phased stages where trust is increased is how human trust relationships have always worked.
But that's been true for a long time, unrelated to AI. When Amazon first became available here in Spain (I don't remember exactly what year, but before LLMs for sure), the amount of fraudulent reviews filling the platform was already noticeable.
That industry you're talking about might have gotten new wings with LLMs, but it wasn't spawned by LLMs; it existed long before that.
> the less people will trust content on those platforms.
Maybe I'm jaded from using the internet from a young age, but both my peers and I basically have a built-in mistrust of random stuff we see on the internet, at least compared to our parents and our younger peers.
"Don't believe everything you see on the internet" has been a mantra for almost as long as the internet has existed. Maybe people forgot and needed a reminder, but it was never not true.
AI slop is so cheap that it has created a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass "deception for profit" industry (e.g., industries where companies bot ratings/reviews for profit, and where social media accounts are created purely for rage bait / engagement farming).
But reputation in a WoT network has to be paramount. The invite system needs a "vouch", so there are consequences for you and your upstream voucher if there is a breach of trust (e.g., lying, paid promotions, spamming). Consequences need to be far more severe than the marginal profit to be made from these breaches.
Also, there needs to be some significant consequence to people who are bad actors and, transitively, to people who trust bad actors.
The hardest part isn't figuring out how to cut off the low-quality nodes. It's how to incentivize people to join a network where the consequences are so high that you really won't want to violate trust. It can't simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real-world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high friction.
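A rough sketch of how the upstream-voucher consequences could work; the account names, penalty, and decay factor are invented here, not a real design:

    # Each account records who vouched for (invited) it; illustrative only.
    voucher_of = {"spammer": "carol", "carol": "bob", "bob": None}
    reputation = {"spammer": 1.0, "carol": 1.0, "bob": 1.0}

    def report_breach(offender, penalty=1.0, decay=0.5):
        # Penalize the offender, then walk the vouch chain upstream,
        # applying a smaller penalty at each hop.
        account = offender
        while account is not None:
            reputation[account] -= penalty
            penalty *= decay
            account = voucher_of.get(account)

    report_breach("spammer")
    print(reputation)
    # {'spammer': 0.0, 'carol': 0.5, 'bob': 0.75}
    # -> vouching for a bad actor costs you, and your voucher, reputation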
When snail mail had a cost floor of $0.25 for the price of postage, email was basically free. You might get 2-3 daily pieces of junk mail in your house’s mailbox, but you would get hundreds or thousands in your email inbox. Slop comes at scale. LLMs didn’t invent spam, but they are making it easier to create more variants of it, and possibly ones that convert better than procedurally generated pieces.
There's a difference between your cognitive brain and your lizard brain. You can tell yourself that mantra and still occasionally fall prey to spam content. The people who make spam have a financial incentive to abuse the heuristics/signals you use to determine the authenticity of a piece of content, in the same way that makers of cheap knockoffs of Rolex watches, Cartier jewelry, or Chanel handbags have an incentive to make the knockoffs appear as authentic as possible.
Hence I suspect that quite a few of these interfaces that are now being spammed with AI crap will end up implementing something that represents a fee, a paywall, or a trustwall. That should keep armies of AI slop responses from being worthwhile.
How we do that without killing some communities is yet to be seen.
If someone showed up on an at-proto-powered book review site like https://bookhive.buzz and started trying to post nonsense reviews, or started running bots, it would be much more transparent what was afoot.
More explicit trust signalling would be very fun to add.
I'd add science here too.
A curation network, one which uses an SSL-style chain of trust (and RSS-style feeds, maybe?), seems like it could be a solution, but I'm not able to advance the thought beyond an amorphous idea.
It is the exact thing this system needs
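A toy sketch of the chain-of-trust part of that idea, loosely modeled on certificate-chain validation; the feed names, endorsement structure, and depth limit are made up:

    # Endorsements point "upward": a feed is signed by a curator, a curator
    # is endorsed by another curator, and eventually by my own trust root.
    endorsements = {
        "feed:cool-sci-papers": "curator:alice",
        "curator:alice": "curator:bob",
        "curator:bob": "root:me",
    }

    def chain_to_root(item, root="root:me", max_depth=5):
        # Follow endorsements upward; the item is only trusted if the
        # chain terminates at my own root within max_depth steps.
        for _ in range(max_depth):
            endorser = endorsements.get(item)
            if endorser is None:
                return False
            if endorser == root:
                return True
            item = endorser
        return False

    print(chain_to_root("feed:cool-sci-papers"))  # True: feed -> alice -> bob -> me
    print(chain_to_root("feed:random-slop"))      # False: no endorsement chain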