Very soon, the domain of bullshit will extend to actual text. We'll be able to buy HN comments by the thousand -- expertly wordsmithed, lucid AI comments -- and you can get them to say "this GitHub repo is the best", or "this startup is the real deal". Won't that be fun?
Definitely already the case, you really think Rust and SQLite would get more than a couple of upvotes otherwise? :D
The obvious problem is we don’t have any great alternatives. We have captcha, and we can look at behavior and source data (IP), and of course everyone’s favorite fingerprinting. To make matters worse: abuse, spam and fraud prevention lives in the same security-by-obscurity paradigm that cyber security lived in for decades before “we” collectively gave up on it, and decided that openness is better. People would laugh at you to suggest abuse tech should be open (“you’d just help the spammers”).
I tried to find whether academia has taken a stab at these problems but came up pretty much empty handed. Hopefully I’m just bad at searching. I truly don’t get why people aren’t looking at these issues seriously and systematically.
In the medium term, I’m worried that we’ll not address the systemic threats, and continue to throw ID checks, heuristics and ML at the wall, enjoying the short-lived successes when some classifier works for a month before it’s defeated. The reason this is concerning is that we will be neck deep in crap (think SEO blogspam and recipe sites, but for everything), which will be disorienting for long enough to erode a lot of trust that we could really use right now.
Phone, then ID-based verification is a stop gap, but IDV services will have to spin up to support the mass volume of verifying all humans.
[1] I kind of want to do this from an innocent / artistic perspective myself. Perhaps a bot that responds with a bunch of rhetorical questions or onomatopoeia. Then I'd scale it to the point people start noticing and feeling weirded out by it. "Is this the new Gen Alpha lingo?" Alas, I have too many other AI projects.
I can see lots of reasons people might oppose the idea, but I'm not sure why it isn't a more widely discussed option.
(asking honestly and openly - please don't shout!)
Of course we do. The rise of digital finance services has led to the creation of a number of services that offer the identity verification necessary for KYC. All such services offer APIs, so adding an identity-verification requirement to your forum is trivial.
Of course, if it isn't obvious, I'm only half joking.
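Half joking or not, the wiring really is small. A purely illustrative sketch in Python, assuming a hypothetical IDV provider (the endpoint, fields and response shape are all made up):

    # Hypothetical IDV/KYC provider -- endpoint and fields are invented.
    import requests

    def verify_new_member(document_image: bytes, selfie: bytes) -> bool:
        resp = requests.post(
            "https://api.example-idv.com/v1/verifications",  # made up
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            files={"document": document_image, "selfie": selfie},
            timeout=30,
        )
        resp.raise_for_status()
        # Gate account creation on the provider's verdict.
        return resp.json().get("status") == "approved"

Gate signup on the return value and you have your "papers, please" forum.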
There's always the identity-based network of trust: several other members vouch for new people before they're included.
The first time you don’t get a job because of a reference you gave you learn a lesson. If it ever happens again, it’s on you.
I first tried Google; the results are dominated by commercial crap.
Then I tried the "google reddit" trick to try and find some real people's opinions... but look at all the blatantly bullshit comments on this Reddit thread; https://www.reddit.com/r/Thunderbird/comments/ae4cdg/good_ps...
---
(if anyone is wondering, the best option for Windows is to use 'readpst' command via WSL. Comes in the 'pst-utils' package).
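If you want that scripted, a rough sketch of driving readpst from Python; the -o flag is from memory, so check readpst -h before trusting it:

    # Convert an Outlook .pst archive to mbox files using readpst
    # (from the pst-utils package). Paths are placeholders.
    import pathlib
    import subprocess

    out = pathlib.Path("mail_out")
    out.mkdir(exist_ok=True)
    subprocess.run(["readpst", "-o", str(out), "archive.pst"], check=True)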
I am very aware of "designing a security system they themselves cannot break" and the difficulties of key management etc.
Would be interested in knowing more from smarter people
(probably need to build a poc - one day :-( )
I'm hoping to put an AI between me and my e-mail inbox this weekend (I had ChatGPT write most of the code; it's not much). Not fully automated, but evaluating, summarising and categorising. I might extend that to e.g. give me an "algorithm" for my Mastodon timeline (despite all of the people insisting on reverse chronological, I'm at a few hundred people I follow and already can't keep up), and a number of other sites I visit. For most of these things latency does not matter, so e.g. putting them through llama.cpp rather than something faster is fine, and precision isn't critical (I won't trust it to automatically reply or automatically reject anything, but prioritisation and categorisation are fine where missteps won't have any critical impact).
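For what it's worth, the evaluate/summarise/categorise part can be tiny. A minimal sketch, assuming the llama-cpp-python bindings and some local gguf model; the IMAP host, login and model path are placeholders:

    import email
    import imaplib
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path="some-model.gguf", verbose=False)

    def categorise(subject: str) -> str:
        # Ask the local model for a one-word category; temperature 0
        # keeps it roughly deterministic.
        prompt = (
            "Classify this e-mail subject as exactly one of: "
            "urgent, personal, newsletter, junk.\n"
            f"Subject: {subject}\nCategory:"
        )
        out = llm(prompt, max_tokens=4, temperature=0)
        return out["choices"][0]["text"].strip()

    with imaplib.IMAP4_SSL("imap.example.com") as imap:  # placeholder host
        imap.login("me@example.com", "app-password")
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, parts = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(parts[0][1])
            print(categorise(msg.get("Subject", "")), "|", msg["From"])

Nothing gets replied to or deleted; it only prints a suggested category per message, which is about the level of trust I'd give it.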
If a company is proactively contacting people you don’t give them contact information for, that’s not requiring references — which is the process I (and the comment I replied to) was talking about. If a company knows where you’ve worked, they can contact them if they want.
You're forgetting the millions of additional comments that will be written by humans to trick the AI into promoting their content.
Even worse, currently if you ask ChatGPT to write you some code, it will make up an API endpoint that doesn't exist and then make up a URL that doesn't exist where you can register for an API key. People are already registering these domains and parking fake sites on them to scam people. ChatGPT is creating a huge market for fake companies to match the fake information it's generating.
The biggest risk may not be people using AI-generated comments to promote their own repos, but rather registering new repos to match the fake ones that the AI is already promoting.
Then again, maybe Google had some mandatory HN time for their employees, that would be enough to explain that :D
If someone walks up to me in the voting booth and says "vote for X or I will kill you", that's a crime. If they do it in the pub, it's probably a crime. If they do it online, the police don't have enough manpower to deal with the situation.
We should change that.
Every time some fuckwit tweets "you and your kids are going to get raped to death and I know where you live" because some woman dares suggest some political change, I would like to see jail time.
And if we do that then I can understand your argument, but I would then say it is not valid - in a society that protects free speech.
We have the penetration
(Afaik smartphone penetration is around 4.5-5 billion, and something like 50%+ of those have secure enclaves, but honestly I don't follow that deeply so would defer to more knowledgeable people.)
If you disagree or have proof of the opposite, just say so instead of downvoting. There's no reason to get so emotional that we try to hide it from the community by spamming it down into oblivion.
Does ChatGPT consistently generate the same fake data though?
Sometimes signals are noise we just need to calibrate.
As far as I can tell most people just use it as a shorthand for “wow that was weird” but there’s no difference as far as the model is concerned?
Much more likely is that I'll vote ignorantly because I lack information that someone withheld because they're intimidated by the authorities.
If they proactively contact someone as part of their verification?
Those of us who are careful internet readers have spent years developing good heuristics to use textual clues to tell us about the person behind the text. Are they smart? Are they sincere? Are they honest? Are they commenting in good faith? Those skills will soon be obsolete.
The folks at OpenAI, who are nominally on a mission to make sure AI "benefits all of humanity", have condemned us to a life sentence of fending off high-volume, high-quality bullshit. Bullshit that they are actively working to make harder to detect. And I think the first victims of that will be internet forums where text is the main signal, places like this and Reddit.
In the past, I’ve extended the time I was at the company either before or after, and left the one in the middle off. A smaller gap is easier to explain, and you just need a coworker at the one you stretched to cover for you - or have it be somebody who wasn’t there during the time you added. You can also just say you did the “freelance” thing and then talk about whatever you want.
I’ve also just been 100% honest and said, “I didn’t like this job and left on bad terms. I’d rather you not contact them.”
Just have to read the situation and make your best guess as to what is going to get you the job.
There isn’t a widely deployed public key network with keys that represent a person, afaik. PGP is the closest maybe?
I think there are cases for anonymous/pseudonymous speech, but I think that's going to have to shift away from disposable identities. Newspapers, for example, have been providing selective anonymity for hundreds of years, so I think there's a model to follow: trusted people/organizations who validate the quality of a non-public identity.
So a place like HN, for example, could promise that each pseudonymous account is connected to a unique human via some sort of government ID with challenge/response capability. Or you could end up with third-party ID providers that provide a similar service that goes beyond mere identity, like the Twitter Verified program scaled up.
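A minimal sketch of the challenge/response half, assuming the ID can sign with an Ed25519 key (purely illustrative; real eID schemes work differently):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    card_key = Ed25519PrivateKey.generate()  # lives on the ID, never leaves it
    registered_pub = card_key.public_key()   # what the forum stores at signup

    challenge = os.urandom(32)               # forum sends a fresh nonce
    signature = card_key.sign(challenge)     # the ID signs it

    try:
        registered_pub.verify(signature, challenge)
        print("same human as at signup")
    except InvalidSignature:
        print("verification failed")

The forum never learns who you are, only that the same key that enrolled is still the one answering.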
Disposable identities have always been a struggle. E.g., look at Reddit's very popular Am I the Asshole, where people widely believe a lot of the content is creative writing exercises. But keeping up a fake identity over the long term was a lot of work. Not anymore, though!
All of those notions are pre-internet ways of proving identity. In a world where we're all rarely more than an arm's length from a globally connected computer, they're on the way out.
I rely heavily on this because it's somehow only after the comment is 'real' (i.e. staring back at me from a real HN thread) that I notice most of the edits I want to make.
On the bright side, it's THE time to cultivate close friendships and to seek like-minded people. The entire phenomenon of popular attention hugging a community to death does not exist any longer. You can now have OG members persisting with notions for a long time and building a shared mythos with a small group of friends, because information is now more accessible than ever.
Obviously, most people aren't part of these communities. The people that are "drifting" alone are given to wasting their time on charismatic attention-seekers that talk a big game (twitch/e-celebs) but deliver nothing of value. So there's also room in the market for charismatic folk with some technical expertise to rally people to their cause, but only very briefly. This is because the number of people half-committing and then jumping ship is likely the highest it's ever been. Also, platforms have now resorted to paying people to stay on their platform (youtube / tiktok / sponsorships / twitch boosting streamers / etc.) to combat occasional ennui, ironically exacerbating the issue.
Of course it's not always easy to say what's AI-generated or not. But if an account is making a habit of it, it still seems possible to tell.
It's a really bad time to try and get the attention of someone more famous / notable than you, though. Sure, you can go on $platform and talk to them, but it's really not the same when they have a gorillion other messages. Same goes for people in large communities that are a "guy" there, known for something. Extremely high-return investments but you're likely going to fail.
Some people try to start youtube channels / info streams and then entice people to join their forum / server. While this does seem to work, it only brings in quality people AFTER the community is fully formed and rigorous laws are in place. The initial stragglers are usually the recently excommunicated looking to try their hand at the same shit somewhere else.
If you really put some effort into a topic and blog about it, you're likely to get some high-quality responses even if you only pose a question to someone that's partly interested. I've found this to be a really great way to separate the folks that are actually interested from those that aren't. You'll usually get people around your own level this way and IME this is the best approach.
It takes a lot of effort to make people clock in regularly to your online circle, and it's better to establish digital / irl face-to-face contact after a good interaction. It builds trust and because we're wired to judge people from their facial reactions rather than text, it also sobers conversation / tempers over potentially divisive topics. Works well with cerebral / "deep" people. Doesn't work with people that only come online to blow steam / enact a persona, so it's a good filter.
TL;DR: Touch grass (digitally), make friends (digitally)
"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
Wrong is saying that the sun rises in the west.
By hallucinating they’re trying to imply that it didn’t just get something wrong but instead dreamed up an alternate world where what you want existed, and then described that.
Or another way to look at it, it gave an answer that looks right enough that you can’t immediately tell it is wrong.
If someone down the line does some BS activity, the accounts that vouched for it have their reputation on the line.
The whole tree under the person who did the BS, plus 1-2 layers of vouchers above them, gets put under review: a big red warning label in their UI presence (e.g. under their avatar/name), and a loss of privileges. It could even just get immediately deleted.
And since I said "identity based", you would need to provide to real world id to get in, on top of others vouching for you. It can be made so you wouldn't be able to get a fake account any easier than you can get a fake passport.
I don't think an arms race for convincing looking bullshit is going to turn out well for our species.
They don't own a key pair. They carry one around, which is owned by google or some other entity?
E.g., if I create a great paintbrush which creates amazing spatter designs on the wall when it is used just so, then, beyond a point, I have no way to control the spatter designs - I can only influence the designs to some extent.
If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.
If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person.
Yes and yes.
>If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.
It's happened already in some cases, e.g.: https://en.wikipedia.org/wiki/Real-name_system
>If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person
How about a requirement to personally know the other person in what hackers in the past called "meatspace"?
Just brainstorming here, but for a cohesive forum, even of tens of thousands of people, it shouldn't be that difficult to achieve.
For something Facebook / Twitter scale it would take "bulk verifiers" that are trusted, and where you need to register in person.
And this is important because a "fair democratic society" that doesn't need people to be able to protest is, as history has shown many times, only a temporary affair. The best way to keep it is to not give the government the tools a worse government could use to suppress dissent.