zlacker

[return to "Tracking the Fake GitHub Star Black Market"]
1. perihe+ca[view] [source] 2023-03-18 09:48:20
>>kaeruc+(OP)
Goodhart's law: if you rely on a social signal to tell you what's good, you'll break that signal.

Very soon, the domain of bullshit will extend to actual text. We'll be able to buy HN comments by the thousand -- expertly wordsmithed, lucid AI comments -- and you can get them to say "this GitHub repo is the best", or "this startup is the real deal". Won't that be fun?

◧◩
2. klabb3+ne[view] [source] 2023-03-18 10:45:09
>>perihe+ca
Content-based auto-moderation has been shitty since its inception. I don’t like that GPT will cause the biggest flood of shit mankind has ever seen, but I am happy that it will kill these flawed ideas about policing.

The obvious problem is we don’t have any great alternatives. We have captcha, we can look at behavior and source data (IP), and of course there’s everyone’s favorite, fingerprinting. To make matters worse: abuse, spam and fraud prevention lives in the same security-by-obscurity paradigm that cyber security lived in for decades before “we” collectively gave up on it and decided that openness is better. People would laugh if you suggested abuse tech should be open (“you’d just help the spammers”).

I tried to find whether academia has taken a stab at these problems but came up pretty much empty-handed. Hopefully I’m just bad at searching. I truly don’t get why people aren’t looking at these issues seriously and systematically.

In the medium term, I’m worried that we won’t address the systemic threats and will continue to throw ID checks, heuristics and ML at the wall, enjoying the short-lived successes when some classifier works for a month before it’s defeated. The reason this is concerning is that we will be neck deep in crap (think SEO blogspam and recipe sites, but for everything), which will be disorienting for long enough to erode a lot of trust that we could really use right now.

◧◩◪
3. coldte+Nk[view] [source] 2023-03-18 11:55:20
>>klabb3+ne
>The obvious problem is we don’t have any great alternatives.

There's always an identity-based network of trust, where several existing members vouch for each new person before they're included.

◧◩◪◨
4. wpietr+jQ[view] [source] 2023-03-18 16:12:43
>>coldte+Nk
How would you imagine that applying here? If fake accounts are at least as convincing as real ones, then it seems like trust networks would be quickly prone to corruption as the fake accounts gain enough of a foothold to start recommending each other.
◧◩◪◨⬒
5. coldte+2I1[view] [source] 2023-03-18 22:09:12
>>wpietr+jQ
On a network started by 2-3-10 people, the first new members would need to be vouched for by a percentage of those founders to get in - and so on.

If someone down the line does some BS activity, the accounts that vouched for them have their reputation on the line.

The whole tree around the person who did the BS, plus 1-2 layers of vouching above them, gets put under review: a big red warning label in the UI (e.g. under their avatar/name) and a loss of privileges. It could even be immediately deleted.

And since I said "identity based", you would need to provide a real-world ID to get in, on top of others vouching for you. It could be made so that getting a fake account is no easier than getting a fake passport.
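The vouch-and-penalty scheme described above can be sketched in code. Everything here is an assumption for illustration - the class names, the 50% vouching threshold, the 0.5 reputation penalty, and the two-layer penalty depth are invented, not taken from any real system:

```python
# Hypothetical sketch of an identity-based vouching network:
# new members need enough existing members to vouch for them,
# and abuse penalizes the vouchers 1-2 layers up the tree.
from dataclasses import dataclass, field


@dataclass
class Member:
    name: str
    vouchers: list = field(default_factory=list)  # names of members who vouched them in
    reputation: float = 1.0
    flagged: bool = False


class TrustNetwork:
    def __init__(self, founders, vouch_fraction=0.5):
        # Assumed rule: a joiner needs vouches from `vouch_fraction`
        # of the current membership (at least one).
        self.vouch_fraction = vouch_fraction
        self.members = {n: Member(n) for n in founders}

    def join(self, name, vouchers):
        """Admit `name` only if enough unflagged members vouch for them."""
        needed = max(1, int(self.vouch_fraction * len(self.members)))
        backers = [v for v in vouchers
                   if v in self.members and not self.members[v].flagged]
        if len(backers) < needed:
            return False
        self.members[name] = Member(name, vouchers=backers)
        return True

    def report_abuse(self, name, penalty=0.5, depth=2):
        """Flag the abuser; penalize up to `depth` layers of vouchers above."""
        offender = self.members[name]
        offender.flagged = True
        offender.reputation = 0.0
        frontier, d = offender.vouchers, 0
        while frontier and d < depth:
            next_frontier = []
            for v in frontier:
                m = self.members[v]
                m.reputation *= penalty  # vouchers' "reputation on the line"
                next_frontier.extend(m.vouchers)
            frontier, d = next_frontier, d + 1
```

For example, with three founders a new member needs one vouch to join; once the network grows, the bar rises, and reporting a member halves the reputation of everyone who vouched for them.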

◧◩◪◨⬒⬓
6. wpietr+Cj4[view] [source] 2023-03-19 21:04:43
>>coldte+2I1
Are you talking about in-person verification and vouching? Or can it be digitally mediated?

If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.

If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person.

◧◩◪◨⬒⬓⬔
7. coldte+yq4[view] [source] 2023-03-19 21:50:21
>>wpietr+Cj4
>Are you talking about in-person verification and vouching? Or can it be digitally mediated?

Yes and yes.

>If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.

It's happened already in some cases, e.g.: https://en.wikipedia.org/wiki/Real-name_system

>If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person

How about a requirement to personally know the other person in what hackers used to call "meatspace"?

Just brainstorming here, but for a cohesive forum, even one of tens of thousands of people, it shouldn't be that difficult to achieve.

For something at Facebook / Twitter scale it would take trusted "bulk verifiers", where you need to register in person.
