https://x.com/mitchellh/status/2020252149117313349
https://nitter.net/mitchellh/status/2020252149117313349
https://github.com/ghostty-org/ghostty/pull/10559
However, it's not hard to envision a future where the exact opposite will occur: a few key AI tools/models will become specialized and better at coding/testing on various platforms than humans, and they will ignore or de-prioritize our input.
After that ships we'll continue doing a lot of rapid exploration, given there are still a lot of ways to improve here. We also just shipped some issue-related features like comment pinning and +1 comment steering [1] to help cut through some noise.
Interested though to see what else emerges like this in the community, I expect we'll see continued experimentation and that's good for OSS.
[1] https://github.blog/changelog/2026-02-05-pinned-comments-on-...
For a single organisation, a list of vouched users sounds great. GitHub permissions already support this.
My concern is with the "web" part. Once you have orgs trusting the vouch lists of other orgs, you end up with the classic problems of decentralised trust:
1. The level of trust is only as high as the laxest person in your network
2. Nobody is particularly interested in vetting new users
3. Updating trust rarely happens
There _is_ a problem with AI slop overrunning public repositories. But WoT has failed once; we don't need to try it again.
This is similar to real life: if you vouch for someone (in business, for example) and they scam people, your own reputation suffers. So vouching carries risk. Similarly, if you go around saying someone is unreliable, but people find out they actually aren't, your reputation also suffers. If vouching or denouncing become free, it will become too easy to weaponize.
Then again, if this is the case, why would you risk your own reputation to vouch for anyone anyway.
(EDIT: Thanks sparky_z for the correction of my spelling!)
“After we left Samble I began trying to obtain access to certain reticules,” Sammann explained. “Normally these would have been closed to me, but I thought I might be able to get in if I explained what I was doing. It took a little while for my request to be considered. The people who control these were probably searching the Reticulum to obtain corroboration for my story.”
“How would that work?” I asked.
Sammann was not happy that I’d inquired. Maybe he was tired of explaining such things to me; or maybe he still wished to preserve a little bit of respect for the Discipline that we had so flagrantly been violating. “Let’s suppose there’s a speelycaptor at the mess hall in that hellhole town where we bought snow tires.”
“Norslof,” I said.
“Whatever. This speelycaptor is there as a security measure. It sees us walking to the till to pay for our terrible food. That information goes on some reticule or other. Someone who studies the images can see that I was there on such-and-such a date with three other people. Then they can use other such techniques to figure out who those people are. One turns out to be Fraa Erasmas from Saunt Edhar. Thus the story I’m telling is corroborated.”
“Okay, but how—”
“Never mind.” Then, as if he’d grown weary of using that phrase, he caught himself short, closed his eyes for a moment, and tried again. “If you must know, they probably ran an asamocra on me.”
“Asamocra?”
“Asynchronous, symmetrically anonymized, moderated open-cry repute auction. Don’t even bother trying to parse that. The acronym is pre-Reconstitution. There hasn’t been a true asamocra for 3600 years. Instead we do other things that serve the same purpose and we call them by the old name. In most cases, it takes a few days for a provably irreversible phase transition to occur in the reputon glass—never mind—and another day after that to make sure you aren’t just being spoofed by ephemeral stochastic nucleation. The point being, I was not granted the access I wanted until recently.” He smiled and a hunk of ice fell off his whiskers and landed on the control panel of his jeejah. “I was going to say ‘until today’ but this damned day never ends.”
“Fine. I don’t really understand anything you said but maybe we can save that for later.”
“That would be good. The point is that I was trying to get information about that rocket launch you glimpsed on the speely.”*
Maybe your own vouch score goes up when someone you vouched for contributes to a project?
Good reason to be careful. Maybe there's a bit of an upside too: if you vouch for someone who does good work, then you get a little boost. It's how personal relationships work anyway.
----------
I'm pretty skeptical of all things cryptocurrency, but I've wondered if something like this would be an actually good use case of blockchain tech…
You might think this is science fiction, but the companies that brought you LLMs had the goal of pursuing AGI and all its consequences. They've failed so far, but that has always been the end game.
It didn't work for links as reputation for search once "SEO" people started creating link farms. It's worse now. With LLMs, you can create fake identities with plausible backstories.
This idea won't work with anonymity. It's been tried.
One of my (admittedly half-baked) ideas was a vouching system with real-world or physical incentives. Basically, signing up requires someone vouching for you, similar to this one, except there is actual physical interaction between the two. But I want to take it even further -- when you sign up, your real-life details are "escrowed" in the system (somehow), and if you do something bad enough for a permaban+, you get doxxed.
Another thing that is amusing is that Sam Altman invented this whole human validation device (Worldcoin) but it can't actually serve a useful purpose here because it's not enough to say you are who you are. You need someone to say you're a worthwhile person to listen to.
Then you have introverts that can be good but have no connections and won’t be able to get in.
So you’re kind of selecting for connected and good people.
[1]: https://blog.discourse.org/2018/06/understanding-discourse-t...
But using this to vouch for others as a way to indicate trust is going to be dangerous. Accounts can be compromised, people make mistakes, and different people have different levels of trust.
I'd like to see more attention placed on verifying released content. That verification should be a combination of code scans for vulnerabilities, detection of changes in capabilities, and reproducible builds of the generated artifacts. That would not only detect bad contributions, but also bad maintainers.
Xkcd 483 is directly referencing Anathem so that should be unsurprising but I think in both His Dark Materials (e.g. anbaric power) and in Anathem it is in-universe explained. The isomorphism between that world and our world is explicitly relevant to the plot. It’s the obvious foreshadowing for what’s about to happen.
The worlds are similar with different names because they’re parallel universes about to collide.
Feels like making a messaging app but "how messages are delivered and to whom is left to the user to implement".
I think "who and how someone is vouched" is like 99.99% of the problem and they haven't tried to solve it so it's hard to see how much value there is here. (And tbh I doubt you really can solve this problem in a way that doesn't suck.)
I don't think that's true? The goal of vouch isn't to say "@linus_torvalds is Linus Torvalds", it's to say "@linus_torvalds is a legitimate contributor and not an AI slopper/spammer". It's not vouching for their real-world identity, or that they're a good person, or that they'll never add malware to their repositories. It's just vouching for the most basic level of "when this person puts out a PR, it's not AI slop".
Probably the idea is to eventually have these as some sort of public repo where you can merge files from arbitrary projects together? Or inherit from some well known project’s config?
The problem is at the social level. People will not want to maintain their own vouch/denounce lists because they're lazy. Which means if this takes off, there will be centrally maintained vouchlists. Which, if you've been on the internet for any amount of time, you can instantly imagine will lead to the formation of cliques and vouchlist drama.
Someone who reads A Clockwork Orange will unavoidably pick up a few words of vaguely-Russian extraction by the end of it, so maybe it's possible to take advantage of that. The main problem I can see is that the new language's sentence grammar will also have to be blended in, and that won't go as smoothly.
The real problem are reputation-farmers. They open hundreds of low-effort PRs on GitHub in the hope that some of them get merged. This will increase the reputation of their accounts, which they hope will help them stand out when applying for a job. So the solution would be for GitHub to implement a system to punish bad PRs. Here is my idea:
- The owner of a repo can close a PR either neutrally (e.g. an earnest but misguided effort was made), positively (a valuable contribution was made) or negatively (worthless slop)
- Depending on how the PR was closed the reputation rises or drops
- Reputation can only be raised or lowered when interacting with another repo
The last point should prevent brigading, I have to make contact with someone before he can judge me, and he can only judge me once per interaction. People could still farm reputation by making lots of quality PRs, but that's actually a good thing. The only bad way I can see this being gamed is if a bunch of buddies get together and merge each other's garbage PRs, but people can already do that sort of thing. Maybe the reputation should not be a total sum, but per project? Anyway, the idea is for there to be some negative consequences for people opening junk PRs.
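To make the mechanics concrete, here is a rough sketch of those rules (all names and weights invented for illustration):

    from collections import defaultdict

    # Invented weights: a negative close costs more than a positive close
    # earns, so junk PRs are a net loss for reputation farmers.
    REPUTATION_DELTA = {"positive": +1, "neutral": 0, "negative": -2}

    reputation = defaultdict(int)   # author -> score
    judged = set()                  # (repo, author, pr_id) tuples already judged

    def close_pr(repo, author, pr_id, verdict):
        """Maintainer closes a PR with a verdict. Reputation only moves on a
        concrete interaction, and each interaction is judged at most once --
        the anti-brigading rule described above."""
        key = (repo, author, pr_id)
        if key in judged:
            raise ValueError("this interaction was already judged")
        judged.add(key)
        reputation[author] += REPUTATION_DELTA[verdict]

The per-project variant would just key `reputation` by (repo, author) instead of by author alone.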
Honestly, my view is that this is a technical solution to a cultural problem. Particularly in the last ~10 years, open source has really been pushed into a "corporate dress rehearsal" culture. All communication is expected to be highly professional. Talk to everyone who opens an issue or PR with the respect you would give a coworker. Say nothing that might offend anyone anywhere; keep it PG-13. Even Linus had to pull back on his famously vitriolic responses to shitty code in PRs.
Being open and inclusive is great, but bad actors have really exploited this. The proper response to an obviously AI-generated slop PR should be "fuck off", closing the PR, and banning them from the repo. But maintainers are uncomfortable with doing this directly since it violates the corporate dress rehearsal kayfabe, so vouch is a roundabout way of accomplishing this.
...or spam "RBL" lists which were often shared. https://en.wikipedia.org/wiki/Domain_Name_System_blocklist
Not sure about the trust part. Ideally, you can evaluate the change on its own.
In my experience, I immediately know whether I want to close or merge a PR within a few seconds, and the hard part is writing the response to close it such that they don't come back again with the same stuff.
(I review a lot of PRs for openpilot - https://github.com/commaai/openpilot)
If you get denounced on a popular repo and everyone "inherits" that repo as a source of trust, you're locked out everywhere (think email providers: Google decides you are bad, good luck).
Couple that with the fact that new contributors usually take some time to find their feet.
I've only been at this game (SWE) for ~10 years, so not a long time. But I can tell you my first few contributions were clumsy and perhaps would have earned me a denouncement.
I'm not sure I would have contributed to the AWS SDK, Sendgrid, NUnit, or New Relic (easily my best experience), and my attempted contribution to Npgsql (easily my worst experience) would have definitely earned me a denouncement.
The concept is good, but I would omit denouncement entirely.
It also addresses the issue of unchecked but seemingly plausible slop PRs from outside contributors getting merged in easily. By default, they are all untrusted.
This social issue has been made worse by vibe-coded PRs; untrusted outside contributors should instead earn their access by being vouched for by the core maintainers, rather than being allowed a wild west of slop PRs.
A great deal.
I get that AI is creating a ton of toil to maintainers but this is not the solution.
This is maturation; open source becoming professional is a good sign for the future.
Why not use AI to help with the AI problem? Why prefer this extra coordination effort and implementation?
I can't check out unless I pay. How is that feedback?
OVER-Denouncing ought to be tracked, too, for a user's trustworthiness profile.
Edit: and just to be totally clear, this isn't an anti-AI statement. You can still make valid, even good PRs with AI. Mitchell just posted about using AI himself recently [1]. This is about AI making it easy for people to spam low-quality slop in what is essentially a DoS attack on maintainers' attention.
So the really funny thing here is that the first bitcoin exchange had a Web of Trust system, and while it had its flaws, IT WORKED PRETTY WELL. It used GPG and later on bitcoin signatures. Nobody talks about it unless they were there, but the system is still online. Keep in mind, this was used before centralized exchanges and regulation. It did not use a blockchain to store ratings.
As a new trader, you basically could not do trades in their OTC channel without going through traders who specialized in newcomers. Sock accounts could rate each other, but when you checked whether one of those scammers was trustworthy, they would have no level-2 trust, since none of the regular traders had positive ratings for them.
Here's a link to the system: https://bitcoin-otc.com/trust.php (on IRC, you would use a bot called gribble to authenticate)
Think denying access to production. But allowing changes to staging. Prove yourself in the lower environments (other repos, unlocked code paths) in order to get access to higher envs.
Hell, we already do this in the ops world.
I certainly have dropped off when projects have burdensome rules, even before the AI slop fest.
Even with that risk I think a reputation based WoT is preferable to most alternatives. Put another way: in the current Wild West, there’s no way to identify, or track, or impose opportunity costs on transacting with (committing or using commits by) “Epstein but in code”.
If you had left it at knowing you want to reject a PR within a few seconds, that'd be fine.
Although with safety critical systems I'd probably want each contributor to have some experience in the field too.
Also, upvotes and merge decisions may well come from different people, who happen to disagree. This is in fact healthy sometimes.
If that worked, then there would be an epidemic of phone scammers or email phishers having epiphanies and changing careers when their victims reply with (well deserved) angry screeds.
The same as when you vouch for your company to hire someone - because you will benefit from their help.
I think your suggestion is a good one.
Point is: when @lt100, @lt101, … , @lt999 all vouch for something, it’s worthless.
That means you, like John Henry, are competing against a machine at the thing that machine was designed to do.
Not easily, but I could imagine a project deciding to trust (to some degree) people vouched for by another project whose judgement they trust. Or, conversely, denouncing those endorsed by a project whose judgement they don't trust.
In general, it seems like a web of trust could cross projects in various ways.
Even if I trust you, I still need to review your work before merging it.
Good people still make mistakes.
This is a graph search. If the person you’re evaluating vouches for people those you vouch for denounce, then even if they aren’t denounced per se, you have gained information about how trustworthy you would find that person. (Same in reverse. If they vouch for people who your vouchers vouch for, that indirectly suggests trust even if they aren’t directly vouched for.)
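A one-hop sketch of that search, assuming vouch/denounce edges are stored as plain dicts of sets (purely illustrative, not any real implementation):

    def indirect_signal(me, candidate, vouches, denounces):
        """Score a candidate by comparing who they vouch for against who
        my own vouchees vouch for or denounce. Positive means indirect
        trust, negative means indirect distrust."""
        my_network = vouches.get(me, set())        # people I vouch for
        their_picks = vouches.get(candidate, set())  # people they vouch for
        score = 0
        for user in my_network:
            score += len(vouches.get(user, set()) & their_picks)
            score -= len(denounces.get(user, set()) & their_picks)
        return score

A real implementation would walk more hops with decaying weights, but the signal is the same.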
The enshittification of GitHub continues
But I like the idea and principle. OSS needs this, and it's treated very lightly.
Ya, I'm just wondering how this system avoids a 51% attack. Simply put, there are a fixed number of human contributors, but effectively an infinite number of bot contributors.
There's likely no perfect solution, only layers and data points. Even if one of the layers only provides a level of trust as high as the most lax person in the network, it's still a signal of something. The internet will continue to evolve and fracture into segments with different requirements IMHO.
This is the level of response these PRs deserve. What people shouldn't be doing is treating these as good-faith requests and trying to provide feedback or asking them to refactor, like they're mentoring a junior dev. It'll just fall on deaf ears.
FOSS has turned into an exercise in scammer hunting.
If someone fresh wants to contribute, now they will have to network before they can write code.
Honestly, I don't see myself networking just so that I can push my code.
I think there are valid ways to improve the outcome, like open source projects codifying focus areas for each month, or verifying the PRs, or making PRs show proof of working, etc. There are many ways to deter folks who don't want to meaningfully contribute and simply AI-generate code, pushing the effort onto the real contributors.
If PR is good, maintainer refunds you ;)
I noticed the same thing in communication. Communication is now so frictionless that almost all the communication I receive is low quality. If it cost more to communicate, the quality would increase.
And the value of low-quality communication isn't just zero: it is negative, actively harmful, because it eats your time.
There are obvious cases in Europe (well, were if you mean the EU) where there need not be criminal behaviour to maintain a list of people that no landlord in a town will allow into their pubs, for example.
I get the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent this seems like a net loss. It establishes an exploitable path for supply-chain attacks: attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in target project (possibly through multiple intermediary projects). If this sort of cross project trust ever becomes automated then any account that was ever trusted anywhere suddenly becomes an attractive target for account takeover attacks. I think a pure distrust list would be a much safer place to start.
It spreads the effort of maintaining the list of trusted people, which is helpful. However, I still see a potential firehose of randoms requesting to be vouched for. There are various ways one might manage that, perhaps even some modest-effort preceding step that would demonstrate understanding of the project / willingness to help, such as A/B triaging of several pairs of issues, kind of like a directed, project-relevant CAPTCHA?
- a problem already solved in TFA (you vouching for someone eventually denounced doesn't prevent you from being denounced, you can totally do it)
- a per-repo, or worse, global, blockchain to solve incrementing and decrementing integers (vouch vs. denounce)
- a lack of understanding that automated global scoring systems are an abuse vector and something people will avoid (cf. Black Mirror and social credit scores in China)
You can also integrate it in clients by adding payment/reward claim headers.
1. What’s the goal of this PR and how does it further our project’s goals?
2. Is this vaguely the correct implementation?
Evaluating those two takes a few seconds. Beyond that, yes it takes a while to review and merge even a few line diff.
This is from the twitter post referenced above, and he says the same thing in the ghostty issue. Can anyone link to discussion on that or elaborate?
(I briefly looked at the pi repo, and have looked around in the past but don't see any references to this vouching system.)
GitHub customers really are willing to do anything besides coming to terms with the reality confronting them: that it might be GitHub (and the GitHub community/userbase) that's the problem.
To the point that they'll wax openly about the whole reason to stay with GitHub over modern alternatives is because of the community, and then turn around and implement and/or ally themselves with stuff like Vouch: A Contributor Management System explicitly designed to keep the unwashed masses away.
Just set up a Bugzilla instance and a cgit frontend to a push-over-ssh server already, geez.
It is not a cookie banner law. Americans seem to keep forgetting that it's about personal data, consent, and the ability to take it down. The sharing of said data is particularly restricted.
And of course, this applies to blacklists, including those for fraud.
Regulators have enforced this in practice. For example in the Netherlands, the tax authority was fined for operating a “fraud blacklist” without a statutory basis, i.e., illegal processing under GDPR: https://www.autoriteitpersoonsgegevens.nl/en/current/tax-adm...
The fact is many such lists exist without being punished -- your landlord list, for example. That doesn't make them legal, just not shut down yet.
Because there is no legal basis for it, unless people have committed, again, an illegal act (such as destroying the pub's property). Also, it's quite difficult to get people to accept being on a blacklist. And once they are, they can ask for their data to be taken down, which you cannot refuse.
Spam filters exist. Why do we need to bring politics into it? Reminds me of the whole CoC mess a few years back.
Every time somebody talks about a new AI thing the lament here goes:
> BUT THINK OF THE JUNIORS!
How do you expect this system to treat juniors? How do your juniors ever gain experience committing to open source? Who vouches for them?
This is a permanent social structure for a transient technical problem.
Would people recommend it? I feel like I have such huge inertia for changing shells at this point that I've rarely seriously considered it.
But a non-zero cost of communication can obviously also have negative effects. It's interesting to think about where the sweet spot would be. But it's probably very context specific. I'm okay with close people engaging in "low quality" communication with me. I'd love, on the other hand, if politicians would stop communicating via Twitter.
GitHub and LLMs have reduced the friction to the point where it's overwhelming human reviewers. Removing that friction would be nice if it didn't cause problems of its own. It turns out that friction had some useful benefits, and that's why you're seeing the pendulum swing the other way.
Your solution advocates a
( ) technical (X) social ( ) policy-based ( ) forge-based
approach to solving AI-generated pull requests to open source projects. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws.)
( ) PR spammers can easily use AI to adapt to detection methods
( ) Legitimate non-native English speakers' contributions would be affected
( ) Legitimate users of AI coding assistants would be affected
( ) It is defenseless against determined bad actors
( ) It will stop AI slop for two weeks and then we'll be stuck with it
(X) Project maintainers don't have time to implement it
(X) Requires immediate total cooperation from maintainers at once
(X) False positives would drive away genuine new contributors
Specifically, your plan fails to account for
(X) Ease of creating new GitHub accounts
(X) Script kiddies and reputation farmers
( ) Armies of LLM-assisted coding tools in legitimate use
(X) Eternal arms race involved in all detection approaches
( ) Extreme pressure on developers to use AI tools
(X) Maintainer burnout that is unaffected by automated filtering
( ) Graduate students trying to pad their CVs
( ) The fact that AI will only get better at mimicking humans
and the following philosophical objections may also apply:
(X) Ideas similar to yours are easy to come up with, yet none have ever
been shown practical
(X) Allowlists exclude new contributors
(X) Blocklists are circumvented in minutes
( ) We should be able to use AI tools without being censored
(X) Countermeasures must work if phased in gradually across projects
( ) Contributing to open source should be free and open
(X) Feel-good measures do nothing to solve the problem
(X) This will just make maintainer burnout worse
Furthermore, this is what I think about you:
(X) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out what project you maintain and
send you 50 AI-generated PRs!

Only if you allow people like this to normalize it.
A poorly thought out hypothetical, just to illustrate: make a connection at a dinner party? Sure, technically it costs 10¢ to make that initial text message/phone call, then the next 5 messages are 1¢ each, but thereafter all the messages are free. Existing relationships: free. New relationships: extremely cheap. Spamming at scale: more expensive.
I have no idea if that's a good idea or not, but I think that's an OK representation of the idea.
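As a toy function, that pricing curve might look like this (numbers taken straight from the hypothetical above):

    def message_cost_cents(already_sent: int) -> int:
        """Cost of the next message to a new contact: 10c for the first,
        1c for the next five, free thereafter."""
        if already_sent == 0:
            return 10
        if already_sent <= 5:
            return 1
        return 0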
Let's say you're a one-of-a-kind kid who is already making useful contributions, but $1 is a lot of money for you. Then suddenly your work becomes useless?
It feels weird to pay to provide work anyway. Even if it's LLM gunk, you're paying to work (let alone paying for your LLM).
I was specifically thinking about general communication. Comparing the quality of communication in physical letters (from a time when that was the only affordable way to communicate) to messages we send each other nowadays.
Support Microsoft or be socially shunned?
We've seen it everywhere, in communication, in globalised manufacturing, now in code generation.
It takes nothing to throw something out there now; we're at a scale that there's no longer even a cost to personal reputation - everyone does it.
Think of this like a spam filter, not a "I met this person live and we signed each other's PGP keys" -level of trust.
It's not there to prevent long-con supply chain attacks by state level actors, it's there to keep Mr Slopinator 9000 from creating thousands of overly verbose useless pull requests on projects.
Alternatively they might keep some things open (issues, discussions) while requiring a vouch for PRs. Then, if folks want to get vouched, they can ask for that in discussions. Or maybe you need to ask via email. Or contact maintainers via Discord. It could be anything. Linux isn't developed on GitHub, so how do you submit changes there? Well you do so by following the norms and channels which the project makes visible. Same with Vouch.
I even see people hopping on chat servers begging to 'contribute' just to get github clout. It's really annoying.
You look at the PR and you know just by looking at it for a few seconds if it looks off or not.
Looks off -> "Want to close"
Write a polite response and close the issue.
Doesn't look off -> "Want to merge"
If we want to merge it, then of course you look at it more closely. Or label it and move on with the triage.
> The implementation is generic and can be used by any project on any code forge, but we provide GitHub integration out of the box via GitHub actions and the CLI.
And then see the trust format, which allows for a platform tag. There isn't even a default-GitHub approach; the GitHub actions just default to GitHub via the `--default-platform` flag (which makes sense, because they're being invoked ON GITHUB).
I am European, nice try though.
It is very unclear that this example falls foul of GDPR. On this basis, Git _itself_ fails at that, and no reasonable court will find it to be the case.
It's just a layer to minimize noise.
I've seen my share of zero-effort drive-by "contributions" so people can pad their GH profile, long before AI, on tiny obscure projects I have published there: larger and more prominent projects have always been spammed.
If anything, the AI-enabled flood will force the reckoning that was long time coming.
[0] >>46731646
So I can choose from github, gitlab or maybe codeberg? What about self-hosters, with project-specific forges? What about the fact that I have an account on multiple forges, that are all me?
This seems to be overly biased toward centralized services, which means it's just serving to further reinforce Microsoft's dominance.
Yes, there's room for deception, but this is mostly about superhuman skills, newcomer ignorance, and a new eternal September that we'll surely figure out.
If I'm not mistaken, X11 is what Mitchell is running right now: https://github.com/mitchellh/nixos-config/blob/0c42252d8951a...
Thing is, this system isn't supposed to be perfect. It is supposed to be better, while worth the hassle.
I doubt I'll get vouched anywhere (though IMO it depends on context), but I firmly believe humanity (including me) will benefit from this system. And if you aren't a bad actor with bad intentions, I believe you will, too.
The only side effect is that genuine contributors who aren't popular / in the know need to put in a little more effort. But again, that is part of "worth the hassle". I'll take it.
I.e., if you want to contribute code, you must also contribute financially.
I'd hesitate to create the denounce function without speaking to an attorney; when someone's reputation and career are torpedoed by the chain reaction you created - with the intent of torpedoing reputations - they may name you in the lawsuit for damages and/or to compel you to undo the 'denounce'.
Not vouching for someone seems safe. No reason to get negative.
That would make not-refunding culturally crass unless it was warranted.
With manual options for:
0. (Default, refund)
1. (Default refund) + Auto-send discouragement response. (But allow it.)
2. (Default refund) + Block.
3. Do not refund
4. Do not refund + Auto-send discouragement response.
5. Do not refund + Block.
6. Do not refund + Block + Report SPAM (Boom!)
And typically use a $1 fee, to discourage spam.
And a $10 fee for important, open, but high-frequency addresses, as that covers the cost of reviewing high-throughput email, so useful email gets identified and reviewed. (With the low-quality communication subsidizing the high-quality communication.)
The latter would be very useful in enabling in-demand contact doors to remain completely open without being overwhelmed. Think of a CEO or other well-known person who ideally wants an open channel of feedback from anyone, but is going to have to have someone vet that feedback for the most impactful comments and summarize any important trends in the rest. $10 strongly disincentivizes low-quality communication and covers the cost of getting value out of it (for everyone).
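A sketch of those options as a receiver-side settlement routine (this is the commenter's scheme, not any real mail API; all names invented):

    from enum import Enum

    class Action(Enum):
        REFUND = 0                # default
        REFUND_DISCOURAGE = 1     # refund + auto-send discouragement
        REFUND_BLOCK = 2
        KEEP = 3                  # do not refund
        KEEP_DISCOURAGE = 4
        KEEP_BLOCK = 5
        KEEP_BLOCK_REPORT = 6     # do not refund + block + report spam ("Boom!")

    def settle(fee_cents: int, action: Action) -> dict:
        """Resolve a paid inbound email according to the chosen option."""
        refund = action in (Action.REFUND, Action.REFUND_DISCOURAGE, Action.REFUND_BLOCK)
        return {
            "refund_cents": fee_cents if refund else 0,
            "discourage": action in (Action.REFUND_DISCOURAGE, Action.KEEP_DISCOURAGE),
            "block": action in (Action.REFUND_BLOCK, Action.KEEP_BLOCK, Action.KEEP_BLOCK_REPORT),
            "report_spam": action is Action.KEEP_BLOCK_REPORT,
        }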
In that world there's a process called "staking" where you lock some tokens with a default lock-expiry action and a method to unlock based on signatures from both participants.
It would work like this: the repo has a public key. The submitter uses a smart contract to sign the commit and stake some crypto along with the submission. If the repo merges it, the smart contract returns the tokens to the submitter. Otherwise they go to the repo.
It's technically quite elegant, and the infrastructure is all there (with some UX issues).
But don't do this!!!!
I did some work in crypto. It's made me realize that the love of money corrupts, and because crypto brings money so close to engineering it corrupts good product design.
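For clarity only, since the advice above is explicitly not to build it: the lock-and-settle lifecycle reduces to something like this, modeled in plain Python with invented names, no chain involved:

    from dataclasses import dataclass

    @dataclass
    class Stake:
        submitter: str
        repo: str
        amount: int
        settled: bool = False

    def settle(stake: Stake, merged: bool) -> str:
        """Merged: the stake returns to the submitter. Rejected: the repo
        keeps it. A real contract would also need the default lock-expiry
        path described above."""
        if stake.settled:
            raise ValueError("stake already settled")
        stake.settled = True
        return stake.submitter if merged else stake.repo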
Problem 2 - getting banned by any single random project for any reason (a CoC disagreement, a heated Rust discussion, any world-politics views, etc.) would lead to a system-wide ban across all involved projects. Kinda like getting a ban for a bad YT comment and then your email and files are blocked forever too.
The idea is nice, like many other social improvement ideas. The reality will 99% depend on the actual implementation and actual usage.
Perhaps that is the plan?
Surely you mean this the other way around?
Mitchell is trying to address a social problem with a technical solution.
- When I buy an item I still have to click a "check out" link to enter my address and actually pay for the item. I could take days after buying the item to click that link.
- Some sellers might not accept PayPal; instead, after I check out I get the seller's bank information and have to manually wire the money. I could take days after checking out to actually perform the money transfer.
Moreover, I'm not interested in having my money get handed over to folks who aren't incentivized to refund my money. In fact, they're paying processing costs on the charge, so they are disincentivized to refund me! There could be an escrow service that handles this, but now there's another party involved: I just want to fix a damn bug, not deal with this shit.
The community might be a problem, but that doesn't mean it's a big enough problem to move off completely. Whitelisting a few people might be a good enough solution.
With just those primitives, CI is a service that emits "ci/tested." Review emits "review/approved." A merge controller watches for sufficient attestations and requests a ref update. The forge kernel only evaluates whether claims satisfy policy.
Vouch shifts this even further left: attestations about people, not just code. "This person is trusted" is structurally the same kind of signed claim as "this commit passed CI." It gates participation itself, not just mergeability.
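A minimal sketch of that claims-as-data idea, with signature verification stubbed out (nothing here is a real forge API; all names invented):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Attestation:
        claim: str     # e.g. "ci/tested", "review/approved", "person/vouched"
        subject: str   # commit SHA or username
        signer: str    # key identity; verification omitted in this sketch

    MERGE_POLICY = {"ci/tested", "review/approved"}

    def may_merge(commit: str, attestations: list) -> bool:
        """The 'forge kernel' only checks that the claims satisfy policy."""
        claims = {a.claim for a in attestations if a.subject == commit}
        return MERGE_POLICY <= claims

A vouch gate is then the same check with a "person/vouched" claim whose subject is the PR author rather than the commit.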
All this should ideally be part of a repo, not inside a closed platform like github. I like it and am curious to see where this stands in 5 years.
If you zoom out to a few years you can see the same pattern over and over at different scales: a big exodus event from Twitter followed by flattening out at a level that is lower than the spike but higher than the steady state before the spike. At this point it would make sense to say this is just how Bluesky grows.
Besides that, the entire point of this project is to increase the barrier to entry for potential contributors (while ideally giving good new people a way in). So I really don’t think they’re worried about this problem.
How many important emails have been lost to spam filters, and how many important packets have been dropped by firewalls? Or, how much important email or how many important packets weren't sent because "it wasn't worth the hassle"? I'm sure all of that happened, but in what proportions? If it weren't worth it, the measures would have been dropped. Same here: I regard it as a test, and if it isn't worth it, it'll be stopped. Personally, I run with a 'no spam' sticker on my physical postbox, as well as a 'no spam' notice for salesmen, the former of which is enforced by national law.
FWIW, the people who ignore it are very funny to me:
1) very small businesses
2) shady businesses (possibly not understanding the language?)
3) some charities who believe they're important (usually with a nice response: 'oh, whoops')
4) alt-right spammers who complain about the usual shit they find important (e.g. foreigners)
5) after 10 years I can report the Jehovah's Witnesses have figured out the meaning of the texts (or remember not to bother here)!
It is my time, it is my door, my postbox. I'm the one who decides about it, not you.
Same here. It is their time, it is their project. They decide if you get to play along, and how. Their rules.
zig is too low level.
So the paywall email firewall will not work as desired.
If you zoom out the graph all the way, you'll see that it's been declining for the past year. The slight uptick in the past 1-2 months can probably be attributed to other factors (e.g. ICE protests riling the left up) rather than "[filter bubble] is how Bluesky grows".
Have they shared the lists of developers they want prophylactically blackballed from the community yet?
Obviously technically the same things are possible but I gotta imagine there's a bit less noise on projects hosted on other platforms
Well, yea, I guess? That's pretty much how the whole system already works: if you're an attacker who's willing to spend a long time doing helpful beneficial work for projects, you're building a reputation that you can then abuse later until people notice you've gone bad.
This feels a bit https://xkcd.com/810/
> Unfortunately, the landscape has changed particularly with the advent of AI tools that allow people to trivially create plausible-looking but extremely low-quality contributions with little to no true understanding. Contributors can no longer be trusted based on the minimal barrier to entry to simply submit a change... So, let's move to an explicit trust model where trusted individuals can vouch for others, and those vouched individuals can then contribute.
And per https://github.com/mitchellh/vouch/blob/main/CONTRIBUTING.md :
> If you aren't vouched, any pull requests you open will be automatically closed. This system exists because open source works on a system of trust, and AI has unfortunately made it so we can no longer trust-by-default because it makes it too trivial to generate plausible-looking but actually low-quality contributions.
===
Looking at the closed PRs of this very project immediately shows https://github.com/mitchellh/vouch/pull/28 - which, true to form, is an AI generated PR that might have been tested and thought through by the submitter, but might not have been! The type of thing that can frustrate maintainers, for sure.
But how do you bootstrap a vouch-list without becoming hostile to new contributors? This seems like a quick way for a project to become insular/isolationist. The idea that projects could scrape/pull each others' vouch-lists just makes that a larger but equally insular community. I've seen well-intentioned prior art in other communities that's become downright toxic from this dynamic.
So, if the goal of this project is to find creative solutions to that problem, shouldn't it avoid dogfooding its own most extreme policy of rejecting PRs out of hand, lest it miss a contribution that suggests a real innovation?
I think the comparisons to dating apps are quite apt.
Edit: it also assumes contributors can't change opinions, which I suppose is also a dating issue
Simple as. He who is without sin can cast the first stone.
Amen.
The reason input should require a text field at least 5 lines long and 80 chars wide. This will nudge the user to fill the box and provide more substantive reasons, which results in higher-quality signals.
Trust is a core security mechanism that the entire world depends on. It must be taken seriously and treated carefully.
I don't really see the issue; 'bubble' is a buzzword for what we used to call a community. You want to shrink viral online platforms to health, which is to say to a sustainable size of trusted and high-quality contributors. Unqualified growth is the logic of both cancer and for-profit social media platforms, not of a functioning community of human beings.
Bluesky and Mastodon are a significantly more pleasant experience than Twitter or the YouTube comment section exactly because they turn most people away. If I were to manage a programming project, give me ten reliable contributors rather than a horde of slop programmers.
It makes sense if you are collaborating over IRC, but I feel the need to face palm when people sitting next to each other do it.
What is your preferred way to talk to your team?
No English, only code
Slack
Zoom
In a meeting room
Over lunch
On a walk
One thing I’ve learned over time is that the highest bandwidth way of talking is face to face because you can read body language in addition to words. Video chat is okay, but an artificial and often overly formal setting. Phone is faster than text. Text drops the audio/visual/emotional signal completely. Code is precise but requires reverse engineering intent.
I personally like a walk, and then pair programming a shared screen.
Would be happy to share the code, just lmk!
Once an account is already vouched, it will likely face far less scrutiny on future contributions — which could actually make it easier for bad actors to slip in malware or low-quality patches under the guise of trust.
https://weblog.masukomi.org/2018/03/25/zed-shaws-utu-saving-...
https://savingtheinternetwithhate.com/
DEFCON presentation: https://www.youtube.com/watch?v=ziTMh8ApMY4
If upstream can’t be bothered to fix such stuff (we’re talking major functionality gaps that a $10-100/month LLM can one-shot), isn’t my extremely well tested fix (typically a few dozen or maybe hundred lines) something they should accept?
The alternative is getting hard forked by an LLM, and having the fork evolve faster / better than upstream.
Telling people like me to f—— off is just going to accelerate irrelevance in situations like this.
I think that’ll also happen to most open source projects that adopt a policy of silent auto-rejection of contributions without review.
The contributions I’ve seen from such people in the open source projects I’ve worked on ranged from zero to negative value, and involved unusually large amounts of drama.
I can imagine things are different for some projects. Like maybe debian is trying to upstream a fix?
Even then, can’t they start the PR with a verifiable intro like “I maintain this package for debian.”?
For the other 99% of welcome contributions, intros typically are of the form: “I was hired to work on this by one of the industrial teams that maintain it”
Something to keep in mind if I'm ever looking to switch I guess.
Incoming bug reports or design docs an LLM could implement? Sure.
Maybe something like the Linux approach (tree of well-tested, thematic branches from lieutenants) would work better. We’d be happy to be lieutenants that shepherded our forks back to upstream.
This is very noble in theory, but in practice you're not going to get many high-quality PRs from someone who's never been paid to write software and has no financial support.
You have your fork and the fixes, the PR is just kindness on your part. If they don’t want it then just move on with your fork.
I once submitted a PR to some Salesforce helper SDK and the maintainer went on and on about approaches and refactoring etc. I just told him to take it or leave it, I don’t really care. I have my fork and fix already. They eventually merged it but I mean I didn’t care either way, I was just doing something nice for them.
You're always free to create a fork.
A few things come to mind (it's late here, so apologies in advance if they're trivial and not thought through):
- Threat actors compromising an account and using it to vouch for another account. I have a "hunch" it could fly under the radar, though admittedly I can't see how it would be different from another rogue commit by the compromised account (hence the hunch).
- Threat actors creating fake chains of trust, working the human factor by creating fake personas and inflating stats on GitHub to create (fake) credibility. (Like how the number of likes on a video influences whether other people like it; I've noticed I may not like a video with a low count that I would have liked if it had millions. Could this be applied here somehow with the threat actor's inflated repo stats?)
- Can I use this to perform a Contribution-DDOS against a specific person?
The technical side of this seems easy enough. The human side, that seems more complicated.
Like, if I were your doctor or contractor or kid's schoolteacher or whoever else you hadn't happened to already whitelist, and had sent you something important, and got that back as a response... I'm sure as heck not paying when I'm trying to send you something for your benefit.
Regarding your points:
"Threat Actors compromising an account..." You're spot on. A vouch-based system inevitably puts a huge target on high-reputation accounts. They become high-value assets for account takeovers.
"Threat actors creating fake chains of trust..." This is already prevalent in the crypto landscape... we saw similar dynamics play out recently with OpenClaw. If there is a metric for trust, it will be gamed.
From my experience, you cannot successfully layer a centralized reputation system over a decentralized (open contribution) ecosystem. The reputation mechanism itself needs to be decentralized, evolving, and heuristics-based rather than static.
I actually proposed a similar heuristic approach (on a smaller scale) for the expressjs repo a few months back, when they were the first to get hit by mass low-quality PRs: https://gist.github.com/freakynit/c351872e4e8f2d73e3f21c4678... (sorry, couldn't link to the original comment due to some GitHub UI issue; it wasn't showing me the link)
Major congratulations to the creator, you're doing god's work. And even if this particular project struggles or outright fails, I hope that it provides valuable insight for any follow-up web-of-trust projects on how to establish trust online.
The problem is technical: too many low-quality PRs hitting an endpoint. Vouch's solution is social: maintain trust graphs of humans.
But the PRs are increasingly from autonomous agents. Agents don't have reputations. They don't care about denounce lists. They make new accounts.
We solved unwanted automated input for email with technical tools (spam filters, DKIM, rate limiting), not by maintaining curated lists of Trusted Emailers. That's the correct solution category. Vouch is a social answer to a traffic-filtering problem.
This may solve a real problem today, but it's being built as permanent infrastructure, and permanent social gatekeeping outlasts the conditions that justified it.
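For what it's worth, the "traffic filtering" category pointed to above can be as simple as per-account rate limiting on the PR endpoint; a toy token-bucket sketch, with thresholds invented for illustration:

    import time

    class TokenBucket:
        """Toy per-account PR rate limiter: each account may open a few PRs,
        refilling slowly over time. Capacity and refill rate are invented."""
        def __init__(self, capacity=3, refill_per_hour=1):
            self.capacity = capacity
            self.refill_per_hour = refill_per_hour
            self.state = {}   # account -> (tokens, last_update_timestamp)

        def allow_pr(self, account: str) -> bool:
            tokens, last = self.state.get(account, (self.capacity, time.time()))
            elapsed_hours = (time.time() - last) / 3600
            tokens = min(self.capacity, tokens + elapsed_hours * self.refill_per_hour)
            if tokens < 1:
                return False              # rate-limited; state unchanged, keeps refilling
            self.state[account] = (tokens - 1, time.time())
            return True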
You personally might stay careful, but the whole point of vouching systems is to reduce review effort in aggregate. If they don't change behavior, they add complexity without benefit; and if they do, that's exactly where supply-chain risk comes from.
I don't know whether that's good or bad for the overall open-source ecosystem.
Isn't this problem unrelated to cryptocurrency?
There will be the US dollar, and the people involved will be incentivized to keep its value high, e.g. by pressuring or invading other countries to prevent them from switching to other currencies. Or they'll be incentivized to adopt policies that cause consumer and government debt to become unreasonably excessive to create a large enough pool of debts denominated in that currency that they can create an inordinate amount of it without crashing its value.
Or on the other side of the coin, there will be countries with currencies they knowingly devalue, either because they can force the people in that country to accept them anyway or because devaluing their currency makes their exports more competitive and simultaneously allows them to spend the currency they printed.
If anything cryptocurrency could hypothetically be better at reducing these perverse incentives, because if good rules are chosen at the outset and get ossified into the protocol then it's harder for bad actors to corrupt something that requires broad consensus to change.
It's not a perfect solution, but it is a solution that evolves towards a high-trust network because there is a traceable mechanism that excludes abusers.
The real problem is we don't have a low-friction digital payment system that allows individuals to automate sending payment requests for small amounts of money to each other without requiring everyone to sign up for a merchant account with a financial bureaucracy.
My comment was just to highlight possible set of issues. Hardly any system is perfect. But it's important to understand where the flaws lie so we are more careful about how we go about using it.
BGP, for example, a system that makes the entire internet work, also suffers from similar issues.
This project though tries to solve a platform policy problem by throwing unnecessary barriers in front of casual but potentially/actually useful contributors.
Furthermore, it creates an "elite-takes-all", self-amplifying hierarchy of domination and rejection of new participants because they don't have enough inside friends and/or social credit points.
Fail. Stop using GH and find a platform that penalizes AI properly at its source.
But with crypto they do. See for example all the BAGS coins that get created for random opensource projects and the behavior that occurs because of that.
Look here: https://github.com/mitchellh/vouch/blob/main/CONTRIBUTING.md
It explains how to get vouched. You need to have a person vouch for you after you open an issue with your proposed change. After you are vouched, you may raise a PR.
This should be easier with AI. Most LLMs are pretty good at integrating existing code.
Might be worth strongly suggesting a check, at permission time.
But I am sure you are right.
Maybe receivers don't get the money. They just get to burn whoever is sending them email they don't want? A thought anyway.
I'm pretty doubtful a handful of one-shot AI patches is a viable fork. Bug fixes are only one part of the workload.
It's called cryptocurrency.
Ah, the giant enemy crab shows its weakpoint. This is where the mask cracks.
Creating your own chain just because you can rather than because you actually have a reason to implement the technology in a different way than anybody else should be disfavored and viewed with suspicion.
It's a great way to stop receiving anything that benefits yourself, and to only start receiving mail that could make the sender way more than $1.
Maybe something like this could be useful for open source collaboration as well?
*with the notable exception of craigslist
Traditional karma scores, star counts, etc, are mostly just counters. I can see that a bunch of people upvoted, but these days it's very easy for most of those votes to come from bots or spam farms.
The important difference that I see with Vouch is not just that I'm incrementing a counter when I vouch for you, but that I am publicly telling the world "you can trust this person". And if you turn out to be untrustworthy, that will cost me something in a much more meaningful way than if some Github project that I starred turns out to be untrustworthy. If my reputation stands to suffer from being careless in what I vouch for, then I have a stronger incentive to verify your trustworthiness before I vouch for you, AND I have an ongoing incentive to discourage you from abusing the trust you've been given.
ERC20 tokens are part of Ethereum (and yes I realise there are also non ETH based tokens and that the gas cost of Eth makes them attractive etc etc)
But, crucially, if accepted, the contributor gets to draw 5€ from the repository's fund of failed PRs (if it is there), so that first bona fide contributors are incentivized to contribute. Nobody gets to profit from failed PRs except successful new contributors. Virtuous cycle, and it does not appeal to the individual self-interest of repo maintainers.
One thing I am unsure of is whether fly-by AI contributions are typically made with for-free AI or there's already a hidden cost to them. This expected cost of machine-driven contribution is a factor to take into account when coming up with the upside/downside of first PR.
PS: this is a Gedankenexperiment; I am not sure what introducing monetary rewards/penalties would do to the social dynamics, but trying with small amounts may teach us something.
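In that spirit, a toy model of the fee-and-fund mechanics (the 5€ draw is from the comment above; the 1€ fee default is an invented placeholder):

    class RepoFund:
        """Failed-PR fund: fees accumulate on rejection; accepted first-time
        contributors get their fee back plus a 5-euro draw if funds allow."""
        def __init__(self):
            self.balance = 0.0

        def submit_pr(self, fee=1.0):
            self.balance += fee          # every PR pays the fee up front

        def resolve(self, accepted, fee=1.0, first_time=False):
            """Returns what the contributor gets back."""
            if not accepted:
                return 0.0               # failed PRs feed the fund
            self.balance -= fee          # accepted PRs are refunded
            payout = fee
            if first_time and self.balance >= 5.0:
                self.balance -= 5.0      # newcomer bonus from the failed-PR fund
                payout += 5.0
            return payout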
So is there value in a three-state system, rather than a two-state one?
Crypto has a perfect way to burn money, just send it to a nonexistent address from where it can never be recovered. I guess the trad fi equivalent are charitable donations.
The real problem here is the amount of work necessary to make this viable. I bet Visa and Mastercard would look at you funny if your business had such a high rate of voluntary transaction reversals, not to mention all the potential contributors that have no access to Visa/MC (we do want to encourage the youth to become involved with Open Source). This basically means crypto, and crypto has its own set of problems, particularly around all the annoying KYC/AML that a normie has to get through to use it.
Utility tokens are fundamentally equities and you need to firewall equity from an organization the same way companies in most market economies are regulated.
I like to compare it with donations. If you get a USD donated, that is the same USD regardless of who gave it. Right? Right?!? Either way you don't know how heavy the burden is on the person who donated. You probably don't care. But it matters to the person who donated.
Well, that's awfully presumptuous. So now a young college kid needs to spend time and money to be able to help out a project? I also don't like that this model incentivizes a few big PRs over small, lean, readable ones.
We're completely mixing up the incentives here anyway. We need better moderation and a cost to the account, not to each contribution. SomethingAwful had a great system for this 20 years ago: make it cost $10-30 to be an external contributor and report people who make slop or consistently bad PRs. They get reviewed and lose their contributor status, or even their entire account.
Sure, you can whip up another account, but you can't whip the reputation back up. That's how you make sure seasoned accounts are trustworthy and keep accounts honest.
https://github.com/mitchellh/vouch?tab=readme-ov-file#local-...
Local Commands
Check a user's vouch status:
vouch check <username>
Exit codes: 0 = vouched, 1 = denounced, 2 = unknown.
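A thin wrapper over those documented exit codes, assuming the `vouch` binary is on PATH:

    import subprocess

    def vouch_status(username: str) -> str:
        """Map `vouch check` exit codes (0/1/2 per the README above) to labels."""
        code = subprocess.run(["vouch", "check", username]).returncode
        return {0: "vouched", 1: "denounced", 2: "unknown"}.get(code, "error")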
Yes, but many people benefit for free. Do you see the backwards incentive of making the most interested (i.e. the ones who may provide the most work to your project) pay?
And none of that even guarantees support. Meanwhile, you donate more and you get to tell people what to build. It's all out of whack.
I get laid off and suddenly I'm poor and am weighing options. And I'm American.
A new person with a big idea, on the slightly wrong (but reasonable) channel, would have more work to do in verification.
Community-level enforcement is unfortunately a game of cat and mouse, except the mouse commands an army and you can only catch one mouse per repo. The most effective solution is obviously to ban the commander, but you'll never reach them as a user.
It was horrible. Being on Mastodon was one of the most corrosive, humorless, joyless, anxiety and guilt inducing experiences I've ever had.