1. femto1+XT1 2026-02-08 20:38:11
>>chwtut+(OP)
> Users already proven to be trustworthy in one project can automatically be assumed trustworthy in another project, and so on.

I get that the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent, this seems like a net loss. It establishes an exploitable path for supply-chain attacks: an attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in the target project (possibly through multiple intermediary projects). If this sort of cross-project trust ever becomes automated, then any account that was ever trusted anywhere suddenly becomes an attractive target for account-takeover attacks. I think a pure distrust list would be a much safer place to start.
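
To make the path concrete, here's a minimal sketch (Python; the data structures and names are hypothetical, not Vouch's actual model) of automated cross-project trust versus a pure distrust list. With transitive imports, one carelessly curated upstream list is enough to launder trust into the target; a denylist has no equivalent fast path around review:

    # Hypothetical sketch: transitive cross-project trust vs. a pure
    # distrust list. Names are illustrative only.
    from collections import deque

    # Each project publishes the set of accounts it has vouched for.
    VOUCHED = {
        "project-a": {"alice", "mallory"},   # mallory behaved well here
        "project-b": {"bob"},
        "target":    {"carol"},
    }

    # Projects that "target" is willing to import trust from, and so on.
    TRUST_IMPORTS = {
        "target":    ["project-b"],
        "project-b": ["project-a"],
    }

    DISTRUSTED = {"spambot9000"}

    def transitively_trusted(project: str, user: str) -> bool:
        """Automated cross-project trust: walk the import graph breadth-first.
        mallory is reachable via target -> project-b -> project-a, so trust
        earned anywhere upstream is inherited automatically."""
        seen, queue = set(), deque([project])
        while queue:
            p = queue.popleft()
            if p in seen:
                continue
            seen.add(p)
            if user in VOUCHED.get(p, set()):
                return True
            queue.extend(TRUST_IMPORTS.get(p, []))
        return False

    def denylist_only(user: str) -> bool:
        """Pure distrust list: everyone else goes through normal review,
        so there is no trust to launder across projects."""
        return user not in DISTRUSTED

    print(transitively_trusted("target", "mallory"))  # True -- inherited trust
    print(denylist_only("mallory"))                   # True, but no fast path past review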

2. tgsovl+0V1 2026-02-08 20:46:16
>>femto1+XT1
Based on the description, I suspect the main goal isn't "trust" in the security sense; it's essentially a spam filter against low-quality AI "contributions" that would consume all available review resources without providing corresponding net-positive value.
3. btown+3n2 2026-02-09 00:25:40
>>tgsovl+0V1
Per the readme:

> Unfortunately, the landscape has changed particularly with the advent of AI tools that allow people to trivially create plausible-looking but extremely low-quality contributions with little to no true understanding. Contributors can no longer be trusted based on the minimal barrier to entry to simply submit a change... So, let's move to an explicit trust model where trusted individuals can vouch for others, and those vouched individuals can then contribute.

And per https://github.com/mitchellh/vouch/blob/main/CONTRIBUTING.md :

> If you aren't vouched, any pull requests you open will be automatically closed. This system exists because open source works on a system of trust, and AI has unfortunately made it so we can no longer trust-by-default because it makes it too trivial to generate plausible-looking but actually low-quality contributions.
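
(For concreteness, that auto-close rule could be wired up as a CI step roughly like the sketch below. This is against the public GitHub REST API, not Vouch's actual tooling; the VOUCHED file name and the PR_NUMBER/PR_AUTHOR environment variables are assumptions.)

    # Sketch of an auto-close check, assuming a plain-text VOUCHED file of
    # GitHub usernames in the repo and CI-provided environment variables
    # (GITHUB_TOKEN, GITHUB_REPOSITORY, PR_NUMBER, PR_AUTHOR).
    # Illustrative only; not Vouch's actual implementation.
    import os
    import requests

    repo = os.environ["GITHUB_REPOSITORY"]      # e.g. "owner/repo"
    pr_number = os.environ["PR_NUMBER"]
    author = os.environ["PR_AUTHOR"].lower()
    headers = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    # One username per line; the file name is an assumption.
    with open("VOUCHED") as f:
        vouched = {line.strip().lower() for line in f if line.strip()}

    if author not in vouched:
        # Explain why, then close the PR via the REST API.
        requests.post(
            f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
            headers=headers,
            json={"body": "Thanks, but this project only accepts PRs from vouched contributors. See CONTRIBUTING.md."},
        ).raise_for_status()
        requests.patch(
            f"https://api.github.com/repos/{repo}/pulls/{pr_number}",
            headers=headers,
            json={"state": "closed"},
        ).raise_for_status()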

===

Looking at the closed PRs of this very project immediately turns up https://github.com/mitchellh/vouch/pull/28 - which, true to form, is an AI-generated PR that might have been tested and thought through by the submitter, but might not have been! The type of thing that can frustrate maintainers, for sure.

But how do you bootstrap a vouch-list without becoming hostile to new contributors? This seems like a quick way for a project to become insular/isolationist. The idea that projects could scrape/pull each other's vouch-lists just makes that a larger but equally insular community. I've seen well-intentioned prior art in other communities that's become downright toxic from this dynamic.

So, if the goal of this project is to find creative solutions to that problem, shouldn't it avoid dogfooding its own most extreme policy of rejecting PRs out of hand, lest it miss a contribution that suggests a real innovation?

4. tgsovl+zt2 2026-02-09 01:21:57
>>btown+3n2
I suspect a good start might be engaging with the project and discussing the planned contribution before sending a 100kLOC AI pull request. Essentially, some signal that the contributor intends to be a responsible AI driver, not just a proxy for unverified garbage code.
5. johnny+D93 2026-02-09 08:47:04
>>tgsovl+zt2
That's often the most difficult part. People are busy, and trying to join these conversations as someone green is hard unless you already have specific domain knowledge to bring (which requires either a job doing that specific stuff or other FOSS contributions to point to).