zlacker

[parent] [thread] 97 comments
1. perihe+(OP)[view] [source] 2023-03-18 09:48:20
Goodhart's law: if you rely on a social signal to tell you what's good, you'll break that signal.

Very soon, the domain of bullshit will extend to actual text. We'll be able to buy HN comments by the thousand -- expertly wordsmithed, lucid AI comments -- and you can get them to say "this GitHub repo is the best", or "this startup is the real deal". Won't that be fun?

replies(19): >>GlumWo+k >>siva7+n >>einpok+N1 >>iLoveO+X3 >>klabb3+b4 >>robert+h4 >>groest+s6 >>Nowado+W8 >>charli+ia >>is_tru+na >>vidarh+oe >>Alex39+rf >>dorian+wg >>rwalla+Fk >>precom+tt >>greesi+7w >>soheil+2x >>wpietr+mF >>veheme+WO
2. GlumWo+k[view] [source] 2023-03-18 09:51:23
>>perihe+(OP)
The scary part is that this doesn't seem too far off, with the current proliferation of large language models like the GPTs.
replies(1): >>rzzzt+31
3. siva7+n[view] [source] 2023-03-18 09:51:53
>>perihe+(OP)
Who says this isn't already happening?
replies(3): >>echelo+M6 >>ChrisK+rc >>dang+ZJ
◧◩
4. rzzzt+31[view] [source] [discussion] 2023-03-18 10:00:03
>>GlumWo+k
Parent was definitely not referring to these at all /s
replies(1): >>perihe+91
◧◩◪
5. perihe+91[view] [source] [discussion] 2023-03-18 10:01:25
>>rzzzt+31
(I ninja-edited my comment in the first minute; the parent might have responded to a less clear version, since they posted at +3 minutes. I added "AI" in a revision).
replies(3): >>rzzzt+o1 >>quickt+S1 >>dang+EJ
◧◩◪◨
6. rzzzt+o1[view] [source] [discussion] 2023-03-18 10:04:52
>>perihe+91
OK, sounds reasonable. I didn't see the edit either, was just thinking about the myriad of LLM articles on the front page recently.
7. einpok+N1[view] [source] 2023-03-18 10:10:20
>>perihe+(OP)
Your comment is the best. It's the real deal!
replies(1): >>ryan69+x3
◧◩◪◨
8. quickt+S1[view] [source] [discussion] 2023-03-18 10:11:03
>>perihe+91
You sound way too human to be an AI then
◧◩
9. ryan69+x3[view] [source] [discussion] 2023-03-18 10:36:15
>>einpok+N1
This comment summarizes it best. We need more discussion like this!
10. iLoveO+X3[view] [source] 2023-03-18 10:41:19
>>perihe+(OP)
> Very soon, the domain of bullshit will extend to actual text. We'll be able to buy HN comments by the thousand -- expertly wordsmithed, lucid AI comments -- and you can get them to say "this GitHub repo is the best", or "this startup is the real deal". Won't that be fun?

Definitely already the case, you really think Rust and SQLite would get more than a couple of upvotes otherwise? :D

replies(1): >>wongar+si
11. klabb3+b4[view] [source] 2023-03-18 10:45:09
>>perihe+(OP)
Content-based auto moderation has been shitty since its inception. I don't like that GPT will cause the biggest flood of shit mankind has ever seen, but I am happy that it will kill these flawed ideas about policing.

The obvious problem is we don't have any great alternatives. We have captcha, and we can look at behavior and source data (IP), and of course everyone's favorite, fingerprinting. To make matters worse: abuse, spam and fraud prevention lives in the same security-by-obscurity paradigm that cyber security lived in for decades before "we" collectively gave up on it and decided that openness is better. People would laugh at you if you suggested abuse tech should be open ("you'd just help the spammers").

I tried to find whether academia has taken a stab at these problems but came up pretty much empty handed. Hopefully I’m just bad at searching. I truly don’t get why people aren’t looking at these issues seriously and systematically.

In the medium term, I’m worried that we’ll not address the systemic threats, and continue to throw ID checks, heuristics and ML at the wall, enjoying the short lived successes when some classifier works for a month before it’s defeated. The reason this is concerning is that we will be neck deep in crap (think SEO blogspam and recipe sites but for everything) which will be disorienting for long enough to erode a lot of trust that we could really use right now.

replies(3): >>lifeis+I8 >>Andrew+da >>coldte+Ba
12. robert+h4[view] [source] 2023-03-18 10:46:54
>>perihe+(OP)
Maybe we need a social network based on physical exchange of trust.
replies(1): >>api+La
13. groest+s6[view] [source] 2023-03-18 11:14:08
>>perihe+(OP)
Next keyword: market of lemons. If you can't rely on said signals anymore, you must treat every item the same (untrusted), which drives out the legitimate players from the market. We have a lot of lemon markets already, so we can probably infer from them what the social result will be.
◧◩
14. echelo+M6[view] [source] [discussion] 2023-03-18 11:17:39
>>siva7+n
Reddit better hold their IPO soon or they'll get caught up in this. Pretty soon there will be dozens of different GPT/LLM-powered Reddit spam bots on Github. Some of them no doubt for political trolling. [1]

Phone, then ID-based verification is a stop gap, but IDV services will have to spin up to support the mass volume of verifying all humans.

[1] I kind of want to do this from an innocent / artistic perspective myself. Perhaps a bot that responds with a bunch of rhetorical questions or onomatopoeia. Then I'd scale it to the point people start noticing and feeling weirded out by it. "Is this the new Gen Alpha lingo?" Alas, I have too many other AI projects.

replies(1): >>siva7+Ma
◧◩
15. lifeis+I8[view] [source] [discussion] 2023-03-18 11:35:40
>>klabb3+b4
I am unclear why a reasonable digital ID (probably government ID card style) plus rate limits is not going to be effective.

I can see lots of reasons people might oppose the idea, but I am not sure why it's not a widely discussed option.

(asking honestly and openly - please don't shout!)

replies(5): >>creaki+e9 >>nprate+g9 >>ipaddr+bi >>tbrown+yk >>wpietr+3I
16. Nowado+W8[view] [source] 2023-03-18 11:37:55
>>perihe+(OP)
You can do it already. It's a normal order for a copywriter; nobody will bat an eye when you post an offer. It costs cents/dollars per 1000 words instead of a fraction of a cent, but that's not exactly out of reach for a funded startup.
◧◩◪
17. creaki+e9[view] [source] [discussion] 2023-03-18 11:41:47
>>lifeis+I8
Closest example I know of is the Korean internet. It is nigh impossible to get an account on major websites without an SSN and a phone number. Despite this, there are still countless bots and scammers that use hacked or leaked personal data. So I'm not sure it would be that effective.
replies(1): >>lifeis+Fc
◧◩◪
18. nprate+g9[view] [source] [discussion] 2023-03-18 11:42:08
>>lifeis+I8
Because the only way it'd work is if it was mandatory (because of point 2); it'd then be extended to porn sites to protect the children. That means politicians' browsing history on pornhub would also be recorded and inevitably leaked when they get hacked.
◧◩
19. Andrew+da[view] [source] [discussion] 2023-03-18 11:52:21
>>klabb3+b4
> The obvious problem is we don’t have any great alternatives.

Of course we do. The rise of digital finance services has led to the creation of a number of services that offer the identity verification necessary for KYC. All such services offer APIs, so adding an identity verification requirement to your forum is trivial.

Of course, if it isn't obvious, I'm only half joking.

20. charli+ia[view] [source] 2023-03-18 11:52:51
>>perihe+(OP)
I hope it breaks the current system of requiring references in job search as well
replies(1): >>paulco+Ga
21. is_tru+na[view] [source] 2023-03-18 11:53:35
>>perihe+(OP)
I'm sure it's already happening in the "books" threads
◧◩
22. coldte+Ba[view] [source] [discussion] 2023-03-18 11:55:20
>>klabb3+b4
>The obvious problem is we don’t have any great alternatives.

There's always an identity-based network of trust: several other members vouch for new people to be included.

replies(3): >>eterna+xd >>groest+Me >>wpietr+7G
◧◩
23. paulco+Ga[view] [source] [discussion] 2023-03-18 11:56:21
>>charli+ia
This system is already essentially broken. Either you worked at a large business that only gives out dates of employment and job title by policy or you are in complete control of who the hiring company talks to.

The first time you don’t get a job because of a reference you gave you learn a lesson. If it ever happens again, it’s on you.

replies(1): >>asmor+Td
◧◩
24. api+La[view] [source] [discussion] 2023-03-18 11:57:16
>>robert+h4
That’s mostly what the person to person phone system was.
◧◩◪
25. siva7+Ma[view] [source] [discussion] 2023-03-18 11:57:25
>>echelo+M6
Anti-AI/GPT detection will soon be a multi-billion-dollar industry
replies(1): >>asmor+Hd
◧◩
26. ChrisK+rc[view] [source] [discussion] 2023-03-18 12:12:58
>>siva7+n
I just tried to find a FOSS tool for converting MS Outlook .pst file to .mbox.

I first tried Google; the results are dominated by commercial crap.

Then I tried the "google reddit" trick to try and find some real people's opinions... but look at all the blatantly bullshit comments on this Reddit thread: https://www.reddit.com/r/Thunderbird/comments/ae4cdg/good_ps...

---

(if anyone is wondering, the best option for Windows is to use 'readpst' command via WSL. Comes in the 'pst-utils' package).

replies(2): >>siva7+Xc >>deafpo+bg
◧◩◪◨
27. lifeis+Fc[view] [source] [discussion] 2023-03-18 12:14:27
>>creaki+e9
I am thinking more like webauthn - but where I own a key pair, and I go to the post office with my passport, they give me a nonce, I prove that it's my key pair, and then they post that the public key is definitely me. I can then use that posting as my "username", and any challenge-response includes the public key, so they know that only I could be signing up.

I am very aware of "designing a security system they themselves cannot break" and the difficulties of key management etc.

Would be interested in knowing more from smarter people

(probably need to build a poc - one day :-( )
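
Actually, a minimal sketch of what that PoC might look like, assuming Ed25519 keys via the pyca/cryptography package - the "registry" attestation record and its fields are entirely made up for illustration:

  # Rough sketch of the post-office-attested key pair idea.
  # Assumes the pyca/cryptography package; the "registry" and its
  # attestation record are invented, not any real service.
  import os
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # 1. I generate and keep a key pair.
  my_key = Ed25519PrivateKey.generate()
  my_pub = my_key.public_key()

  # 2. At the post office: they check my passport, hand me a nonce,
  #    I sign it, they verify the signature and publish an attestation.
  registry = {}                                # stand-in for a public attestation log
  po_nonce = os.urandom(32)
  po_sig = my_key.sign(po_nonce)
  my_pub.verify(po_sig, po_nonce)              # the clerk's check; raises InvalidSignature if forged
  registry[my_pub.public_bytes_raw()] = "verified in person against passport"

  # 3. Signing up anywhere: the site sends a challenge, I sign it, and
  #    the site checks both the signature and the registry entry.
  challenge = os.urandom(32)
  response = my_key.sign(challenge)
  try:
      my_pub.verify(response, challenge)
      attested = my_pub.public_bytes_raw() in registry
      print("accepted" if attested else "signature ok, but key not attested")
  except InvalidSignature:
      print("rejected")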

replies(1): >>bombol+8q
◧◩◪
28. siva7+Xc[view] [source] [discussion] 2023-03-18 12:17:08
>>ChrisK+rc
So a GPT bot instead of the human commenters would make Reddit more useful in the end, is that what you're saying?
replies(1): >>ChrisK+pd
◧◩◪◨
29. ChrisK+pd[view] [source] [discussion] 2023-03-18 12:21:07
>>siva7+Xc
How so? The commercial organisations will be able to use a GPT bot to provide more believable comments, at greater scale, and cheaper.
◧◩◪
30. eterna+xd[view] [source] [discussion] 2023-03-18 12:22:10
>>coldte+Ba
Maybe even push that a level higher and have org-to-org vouching as well (so it can scale and reputation propagates across social bubbles). Bootstrapping remains somewhat of an issue.
replies(1): >>wongar+Lg
◧◩◪◨
31. asmor+Hd[view] [source] [discussion] 2023-03-18 12:23:15
>>siva7+Ma
And it'll silently remove your real posts too, faster than the horrible moderation on reddit ever could!
◧◩◪
32. asmor+Td[view] [source] [discussion] 2023-03-18 12:24:30
>>paulco+Ga
What really is the alternative? At least where I live, a multi-year gap in your CV is going to set off more red flags than an honest "It didn't work out between us".
replies(1): >>paulco+2f
33. vidarh+oe[view] [source] 2023-03-18 12:29:23
>>perihe+(OP)
We'll be back to the 1990s "software agents" craze, take two: needing AI-driven agents that seek out and index and evaluate content on our behalf, and seek to negotiate with each other for recommendations, with the currency being trust based on how "your" agent evaluated prior results.

I'm hoping to put an AI between me and my e-mail inbox this weekend (I had ChatGPT write most of the code; it's not much); not fully automated, but evaluating and summarising and categorising. I might extend that to e.g. give me an "algorithm" for my Mastodon timeline (despite all of the people insisting on reverse chronological, I'm at a few hundred people I follow and already can't keep up), and a number of other sites I visit. For most of these things latency does not matter, so e.g. putting them through llama.cpp rather than something faster is fine, and precision isn't critical (I won't trust it to automatically reply or automatically reject anything, but prioritisation and categorisation where missteps won't have any critical impact).
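
For anyone curious, the glue is roughly this kind of thing (a sketch only; it assumes the llama-cpp-python bindings, a placeholder local model path, and a standard Maildir, and my real categories and prompt will differ):

  # Sketch of the inbox triage idea: read mail, ask a local model to
  # categorise and summarise, never auto-reply or auto-delete.
  import mailbox
  import os
  from llama_cpp import Llama

  llm = Llama(model_path="models/some-local-model.gguf", n_ctx=4096)  # placeholder path

  CATEGORIES = ["urgent", "personal", "newsletter", "probably-spam"]  # my buckets, nothing standard

  def triage(subject: str, body: str) -> str:
      prompt = (
          f"Classify this email into one of {CATEGORIES} and give a one-line summary.\n"
          f"Subject: {subject}\n\n{body[:2000]}\n\nAnswer:"
      )
      out = llm(prompt, max_tokens=80, temperature=0)
      return out["choices"][0]["text"].strip()

  for _, msg in mailbox.Maildir(os.path.expanduser("~/Maildir")).items():
      body = msg.get_payload(decode=True) or b""
      print(msg["subject"], "->", triage(msg["subject"] or "", body.decode("utf-8", "ignore")))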

◧◩◪
34. groest+Me[view] [source] [discussion] 2023-03-18 12:34:41
>>coldte+Ba
I've mentioned a "market of lemons" elsewhere in this thread. One such market is the market for malware and stolen credit card details. One result of the market being broken: serious criminals restrict themselves to very small (company like) social circles and invite only forums. One signal of trust that remained very long: a very short ICQ number. You don't want to burn such a handle with a bad trade, so trust was given upfront.
◧◩◪◨
35. paulco+2f[view] [source] [discussion] 2023-03-18 12:36:36
>>asmor+Td
Don’t give them your boss’s name. Give them a coworker’s name. Give them a friend’s name and have them lie for you.

If a company is proactively contacting people you don’t give them contact information for, that’s not requiring references — which is the process I (and the comment I replied to) was talking about. If a company knows where you’ve worked, they can contact them if they want.

replies(1): >>moneyw+8E
36. Alex39+rf[view] [source] 2023-03-18 12:39:54
>>perihe+(OP)
> We'll be able to buy HN comments by the thousand -- expertly wordsmithed, lucid AI comments

You're forgetting the millions of additional comments that will be written by humans to trick the AI into promoting their content.

Even worse, currently if you ask ChatGPT to write you some code, it will make up an API endpoint that doesn't exist and then make up a URL that doesn't exist where you can register for an API key. People are already registering these domains and parking fake sites on them to scam people. ChatGPT is creating a huge market for creating fake companies to match the fake information it's generating.

The biggest risk may not be people using AI-generated comments to promote their own repos, but rather registering new repos to match the fake ones that the AI is already promoting.

replies(3): >>permo-+Jg >>fantod+Ju >>notabe+Rz1
◧◩◪
37. deafpo+bg[view] [source] [discussion] 2023-03-18 12:48:23
>>ChrisK+rc
I'm blind maybe, but what are the blatantly bullshit comments? The spam of PST to MBOX?
replies(2): >>SalmoS+Cq >>vageli+uv
38. dorian+wg[view] [source] 2023-03-18 12:51:49
>>perihe+(OP)
That's what Product Hunt has felt like for a long time—and LinkedIn too.
◧◩
39. permo-+Jg[view] [source] [discussion] 2023-03-18 12:53:03
>>Alex39+rf
I feel like you’re overstating this as a long term issue. sure it’s a problem now, but realistically how long before code hallucinations are patched out?
replies(5): >>ptato+Ug >>lanter+Wg >>aent+Eo >>warent+2u >>trippi+Gy
◧◩◪◨
40. wongar+Lg[view] [source] [discussion] 2023-03-18 12:53:27
>>eterna+xd
One somewhat popular solution for bootstrapping is to allow people to buy in, paired with quickly banning those members in cases of rule violation. It's by no means perfect, but it puts a real price on abuse and thus reduces it a lot
◧◩◪
41. ptato+Ug[view] [source] [discussion] 2023-03-18 12:54:35
>>permo-+Jg
Nobody knows.
replies(1): >>permo-+Xg
◧◩◪
42. lanter+Wg[view] [source] [discussion] 2023-03-18 12:55:21
>>permo-+Jg
The black-box nature of the model means this isn't something you can really 'patch out'. It's a byproduct of the way the system processes data - hallucinations will get less frequent with targeted fine-tuning and improved model power, but there's no easy fix.
replies(1): >>permo-+V51
◧◩◪◨
43. permo-+Xg[view] [source] [discussion] 2023-03-18 12:55:32
>>ptato+Ug
undoubtedly not long
◧◩◪
44. ipaddr+bi[view] [source] [discussion] 2023-03-18 13:06:55
>>lifeis+I8
If spam was your only problem, now you have two: spam and identity theft. Selling/obtaining identity information becomes very profitable, and those working at the post office must guard access like a bank vault.
replies(2): >>lifeis+rr >>wpietr+PI
◧◩
45. wongar+si[view] [source] [discussion] 2023-03-18 13:08:53
>>iLoveO+X3
Then how do you explain the Go hype HN went through just before the current Rust hype? Where "[ordinary tool] in Go" was the formula for upvotes.

Then again, maybe Google had some mandatory HN time for their employees, that would be enough to explain that :D

◧◩◪
46. tbrown+yk[view] [source] [discussion] 2023-03-18 13:23:46
>>lifeis+I8
Anonymity is critical to free speech, because there exist bad actors who will resort to violence to suppress speech they don't like.
replies(1): >>lifeis+Nq
47. rwalla+Fk[view] [source] 2023-03-18 13:24:32
>>perihe+(OP)
This is the first time I've ever posted an XKCD link here, but I think the occasion calls for it.

https://xkcd.com/810/

◧◩◪
48. aent+Eo[view] [source] [discussion] 2023-03-18 13:59:51
>>permo-+Jg
Assuming those hallucinations are a thing to be patched out and not the core part of a system that works by essentially sampling a probability distribution for the most likely following word.
replies(1): >>permo-+Cef
◧◩◪◨⬒
49. bombol+8q[view] [source] [discussion] 2023-03-18 14:11:37
>>lifeis+Fc
> I own a key pair

Right there… it won't work with the general population.

replies(1): >>lifeis+0r
◧◩◪◨
50. SalmoS+Cq[view] [source] [discussion] 2023-03-18 14:14:44
>>deafpo+bg
Yeah, they are almost all clearly spammy, broken-English ads for paid software
◧◩◪◨
51. lifeis+Nq[view] [source] [discussion] 2023-03-18 14:15:54
>>tbrown+yk
But, and I understand the argument, that is a problem for IRL society / government to solve.

If someone walks up to me in the voting booth and says "vote for X or I will kill you", that's a crime. If they do it in the pub, it's probably a crime. If they do it online, the police don't have enough manpower to deal with the situation.

We should change that.

Every time some fuckwit tweets "you and your kids are going to get raped to death and I know where you live" because some woman dares suggest some political change, I would like to see jail time.

And if we do that then I can understand your argument, but I would then say it is not valid - in a society that protects free speech.

replies(3): >>woile+hs >>tbrown+BB >>__Matr+VB
◧◩◪◨⬒⬓
52. lifeis+0r[view] [source] [discussion] 2023-03-18 14:17:50
>>bombol+8q
something like 2 billion people have a phone with a secure enclave capable of this in their pockets today - and they use it every day for logins, payments and paying at the car park.

We have the penetration

(Afaik smartphone penetration is around 4.5-5 BN, and something like 50%+ have secure enclaves, but honestly I don't follow that deeply so would defer to more knowledgeable people)

replies(2): >>klabb3+xH >>bombol+Or2
◧◩◪◨
53. lifeis+rr[view] [source] [discussion] 2023-03-18 14:21:55
>>ipaddr+bi
Then make it a bank's job to guard the bank vaults - they need to earn that FDIC bailout money :-)
◧◩◪◨⬒
54. woile+hs[view] [source] [discussion] 2023-03-18 14:28:34
>>lifeis+Nq
Actually, there could be places where verified humans are required, and places where they are not.
55. precom+tt[view] [source] 2023-03-18 14:40:54
>>perihe+(OP)
Now is the time to cultivate friendships and to make networks that persist online, and are verified via irl meetups / contacts. People who pull that off now will be in much, much better shape in the future. GPT's output is apparent to a discerning eye right now, but according to the power law, it won't take much "novel" input to train upon to make that discernment useless. Then, the only internet community that could be dependably reliable would be your group of irl verified people.
replies(2): >>passwo+Du >>moneyw+qz
◧◩◪
56. warent+2u[view] [source] [discussion] 2023-03-18 14:45:30
>>permo-+Jg
Folks, doesn't it seem a little harsh to pile downvotes onto this comment? It's an interesting objection stimulating meaningful conversation for us all to learn from.

If you disagree or have proof of the opposite, just say so and don't vote up. There's no reason to get so emotional that we also try to hide it from the community by spamming it down into oblivion.

replies(1): >>permo-+tc1
◧◩
57. passwo+Du[view] [source] [discussion] 2023-03-18 14:51:54
>>precom+tt
I would phrase it more as we're pretty much out of time to have initiated online-only relationships.
replies(1): >>precom+TJ
◧◩
58. fantod+Ju[view] [source] [discussion] 2023-03-18 14:52:47
>>Alex39+rf
> ChatGPT is creating a huge market for creating fake companies to match the fake information it's generating.

Does ChatGPT consistently generate the same fake data though?

replies(2): >>redeux+1M >>bombca+C01
◧◩◪◨
59. vageli+uv[view] [source] [discussion] 2023-03-18 14:58:31
>>deafpo+bg
Yes and if you look at the comment history of the posters in that thread, it is clear they are all spam accounts.
60. greesi+7w[view] [source] 2023-03-18 15:02:18
>>perihe+(OP)
How do you know we aren't already there?
61. soheil+2x[view] [source] 2023-03-18 15:09:56
>>perihe+(OP)
Stop making up laws. You'll do much more good dismantling existing ones. And non-social signals like # of commits, # of pull requests cannot be faked? We need signals among the noise.

Sometimes signals are noise we just need to calibrate.

◧◩◪
62. trippi+Gy[view] [source] [discussion] 2023-03-18 15:22:46
>>permo-+Jg
An aside: what do people mean when they say “hallucinations” generally? Is it something more refined than just “wrong”?

As far as I can tell most people just use it as a shorthand for “wow that was weird” but there’s no difference as far as the model is concerned?

replies(2): >>mlhpdx+oL >>bombca+p01
◧◩
63. moneyw+qz[view] [source] [discussion] 2023-03-18 15:28:28
>>precom+tt
Best methods for that? Local meetups?
replies(1): >>precom+pM
◧◩◪◨⬒
64. tbrown+BB[view] [source] [discussion] 2023-03-18 15:41:06
>>lifeis+Nq
That doesn't work so well when the government is one of the bad actors.
replies(1): >>lifeis+jI
◧◩◪◨⬒
65. __Matr+VB[view] [source] [discussion] 2023-03-18 15:43:12
>>lifeis+Nq
I'm far less worried about being intimidated into voting a certain way by someone who is avoiding the authorities online.

Much more likely is that I'll vote ignorantly because I lack information that someone withheld because they're intimidated by the authorities.

◧◩◪◨⬒
66. moneyw+8E[view] [source] [discussion] 2023-03-18 15:58:59
>>paulco+2f
What’s the solution for the latter point you mentioned?

If they proactively contact someone as part of their verification?

replies(1): >>paulco+JG
67. wpietr+mF[view] [source] 2023-03-18 16:06:46
>>perihe+(OP)
I mean, there have always been shills. What's changing now is the cost of shilling is dropping from dollars per comment to fractions of a cent. Troll farms used to be a lot of work to put together, but soon they'll be aaS.

Those of us who are careful internet readers have spent years developing good heuristics to use textual clues to tell us about the person behind the text. Are they smart? Are they sincere? Are they honest? Are they commenting in good faith? Those skills will soon be obsolete.

The folks at OpenAI, who are nominally on a mission to make sure AI "benefits all of humanity", have condemned us to a life sentence of fending off high-volume, high-quality bullshit. Bullshit that they are actively working to make harder to detect. And I think the first victims of that will be internet forums where text is the main signal, places like this and Reddit.

◧◩◪
68. wpietr+7G[view] [source] [discussion] 2023-03-18 16:12:43
>>coldte+Ba
How would you imagine that applying here? If fake accounts are at least as convincing as real ones, then it seems like trust networks would be quickly prone to corruption as the fake accounts gain enough of a foothold to start recommending each other.
replies(1): >>coldte+Qx1
◧◩◪◨⬒⬓
69. paulco+JG[view] [source] [discussion] 2023-03-18 16:16:39
>>moneyw+8E
Then you’re fucked if they check and the reference is bad and they care. Either you take your chances, leave it as a gap in your resume, or you make something up.

In the past, I've extended the time I was at either the company before/after and then left the one in the middle off. Smaller gap is easier to explain and you just need a coworker at the one you stretched to cover for you - or have it be somebody who wasn't there during the time you added. You can also just say you did the "freelance" thing and then talk about whatever you want.

I’ve also just been 100% honest and said, “I didn’t like this job and left on bad terms. I’d rather you not contact them.”

Just have to read the situation and make your best guess as to what is going to get you the job.

◧◩◪◨⬒⬓⬔
70. klabb3+xH[view] [source] [discussion] 2023-03-18 16:20:51
>>lifeis+0r
That's not your identity, it's an access token protected by an advanced lock screen (which is greatly useful, but not the same). If you lose your device, the way you get back into your accounts is your de-facto identity, and usually that ranges from the email you used during signup to your govt ID.

There isn’t a widely deployed public key network with keys that represent a person, afaik. PGP is the closest maybe?

◧◩◪
71. wpietr+3I[view] [source] [discussion] 2023-03-18 16:23:57
>>lifeis+I8
I expect that's where we're heading. But then, as somebody who writes online mostly under my own name, maybe I'm just biased. Come on in, the water's fine!

I think there are cases for anonymous/pseudonymous speech, but I think that's going to have to shift away from disposable identities. Newspapers, for example, have been providing selective anonymity for hundreds of years, so I think there's a model to follow: trusted people/organizations who validate the quality of a non-public identity.

So a place like HN, for example, could promise that each pseudonymous account is connected to a unique human via some sort of government ID with challenge/response capability. Or you could end up with third-party ID providers that provide a similar service that goes beyond mere identity, like the Twitter Verified program scaled up.

Disposable identities have always been a struggle. E.g., look at Reddit's very popular Am I the Asshole, where people widely believe a lot of the content is creative writing exercises. But keeping up a fake identity over the long term was a lot of work. Not anymore, though!

◧◩◪◨⬒⬓
72. lifeis+jI[view] [source] [discussion] 2023-03-18 16:25:41
>>tbrown+BB
My point is that if the government is a bad actor, there is no recourse. We need a fair democratic society - it's on us to build one / keep it there
replies(1): >>accoun+W46
◧◩◪◨
73. wpietr+PI[view] [source] [discussion] 2023-03-18 16:29:01
>>ipaddr+bi
The paradigm of fixed identity information as proof is pretty obviously doomed. Just like how the 1970s concept of username/password as proof of identity is on its way out. Or credit card numbers alone being used to validate transactions.

All of those notions are pre-internet ways of proving identity. In a world where we're all rarely more than an arm's length from a globally connected computer, they're on the way out.

replies(1): >>lifeis+hJ2
◧◩◪◨
74. dang+EJ[view] [source] [discussion] 2023-03-18 16:32:37
>>perihe+91
If you want to, you can always set 'delay' in your profile to the number of minutes (up to 10) that you would like your comments to be visible only to you. This puts the stealth back in stealth editing. https://news.ycombinator.com/newsfaq.html

I rely heavily on this because it's somehow only after the comment is 'real' (i.e. staring back at me from a real HN thread) that I notice most of the edits I want to make.

◧◩◪
75. precom+TJ[view] [source] [discussion] 2023-03-18 16:33:57
>>passwo+Du
Agreed. It's very difficult now to build communities that have lasting impact, because everyone's saturated with info as-is. Contributions to niche communities now rely on a societal "outsider" status, which means there's basically a couple of people that contribute heavily and very few onlookers. Everything else is either gamified or comes from video games / gambling.

On the bright side, it's THE time to cultivate close friendships and to seek like-minded people. The entire phenomenon of popular attention hugging a community to death does not exist any longer. You can now have OG members persisting with notions for a long time and building a shared mythos with a small group of friends, because information is now more accessible than ever.

Obviously, most people aren't part of these communities. The people that are "drifting" alone are given to wasting their time on charismatic attention-seekers that talk a big game (twitch/e-celebs) but deliver nothing of value. So there's also room in the market for charismatic folk with some technical expertise to rally people to their cause, but only very briefly. This is because the number of people half-committing and then jumping ship is likely the highest it's ever been. Also, platforms have now resorted to paying people to stay on their platform (youtube / tiktok / sponsorships / twitch boosting streamers / etc.) to combat occasional ennui, ironically exacerbating the issue.

◧◩
76. dang+ZJ[view] [source] [discussion] 2023-03-18 16:34:35
>>siva7+n
If people see AI-generated comments on HN they should flag them and let us know at hn@ycombinator.com. HN is for humans to converse, and bots have never been allowed.

Of course it's not always easy to say what's AI-generated or not. But if an account is making a habit of it, it still seems possible to tell.

◧◩◪◨
77. mlhpdx+oL[view] [source] [discussion] 2023-03-18 16:40:50
>>trippi+Gy
Most people don’t understand the technology and maths at play in these systems. That’s normal, as is using familiar words that make that feel less awful. If you have a genuine interest in understanding how and why errant generated content emerges, it will take some study. There isn’t (in my opinion) a quick helpful answer.
replies(1): >>trippi+QU1
◧◩◪
78. redeux+1M[view] [source] [discussion] 2023-03-18 16:44:29
>>fantod+Ju
I have noticed that ChatGPT will give me a consistent output when the input is identical, but I haven’t done extensive research on this.
◧◩◪
79. precom+pM[view] [source] [discussion] 2023-03-18 16:47:13
>>moneyw+qz
Most tight, close-knit groups originate from shared mythos. These can be family, proximity, "same school year", "same college", "friend of best friend", etc. Online, you can find people that are interested in some niche topic (or elaboration of some popular topic to an absurd degree) and engage with them. Small newsletters are also a good way to get people talking. What most people don't do is return attention, aka reciprocate positively. This could also mean you'd have to write about unrelated things or maybe try to build a "business relationship" that would then progress if you invest some time and hope for the best.

It's a really bad time to try and get the attention of someone more famous / notable than you, though. Sure, you can go on $platform and talk to them, but it's really not the same when they have a gorillion other messages. Same goes for people in large communities that are a "guy" there, known for something. Extremely high-return investments but you're likely going to fail.

Some people try to start youtube channels / info streams and then entice people to join their forum / server. While this does seem to work, it only brings in quality people AFTER the community is fully formed and rigorous laws are in place. The initial stragglers are usually the recently excommunicated looking to try their hand at the same shit somewhere else.

If you really put some effort into a topic and blog about it, you're likely to get some high-quality responses even if you only pose a question to someone that's partly interested. I've found this to be a really great way to separate the folks that are actually interested from those that aren't. You'll usually get people around your own level this way and IME this is the best approach.

It takes a lot of effort to make people clock in regularly to your online circle, and it's better to establish digital / irl face-to-face contact after a good interaction. It builds trust and because we're wired to judge people from their facial reactions rather than text, it also sobers conversation / tempers over potentially divisive topics. Works well with cerebral / "deep" people. Doesn't work with people that only come online to blow steam / enact a persona, so it's a good filter.

TL;DR: Touch grass (digitally), make friends (digitally)

80. veheme+WO[view] [source] 2023-03-18 16:59:04
>>perihe+(OP)
Maybe more appropriately, Campbell's law:

"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

◧◩◪◨
81. bombca+p01[view] [source] [discussion] 2023-03-18 18:03:46
>>trippi+Gy
Wrong is saying 2+2 is five.

Wrong is saying that the sun rises in the west.

By "hallucinating" they're trying to imply that it didn't just get something wrong, but instead dreamed up an alternate world where what you want existed, and then described that.

Or another way to look at it, it gave an answer that looks right enough that you can’t immediately tell it is wrong.

replies(1): >>permo-+nef
◧◩◪
82. bombca+C01[view] [source] [discussion] 2023-03-18 18:04:41
>>fantod+Ju
There was one company that had to put up a “our API can’t get location data from a phone number so stop asking, GPT lied” page.
◧◩◪◨
83. permo-+V51[view] [source] [discussion] 2023-03-18 18:39:25
>>lanter+Wg
this is clearly untrue. it’s an input, a black box, then an output. openai have 100% control over the output. they may not be able to directly control what comes out of the black box, but a) they can tune the model, and they undoubtedly will, and b) they can control what comes after the black box. they can—for example—simply block urls
replies(2): >>Sai_+NH2 >>lanter+MO2
◧◩◪◨
84. permo-+tc1[view] [source] [discussion] 2023-03-18 19:27:25
>>warent+2u
to be fair, it’s only one net downvote
◧◩◪◨
85. coldte+Qx1[view] [source] [discussion] 2023-03-18 22:09:12
>>wpietr+7G
On a network started by 2-3-10 people, the first new members would need to be vouched for by a percentage of those to get in - and so on.

If someone down the line does some BS activity, the accounts that vouched for it have their reputation on the line.

The whole tree - the person who did the BS and 1-2 layers of vouching above - gets put in check, gets a big red warning label in their UI presence (e.g. under their avatar/name), and loses privileges. It could even just get immediately deleted.

And since I said "identity based", you would need to provide a real-world ID to get in, on top of others vouching for you. It can be made so you wouldn't be able to get a fake account any easier than you can get a fake passport.
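
A toy sketch of the mechanics I mean, in Python; the quorum, penalty depth and numbers are arbitrary illustration choices, not a worked-out design:

  # Toy vouching network: each member records who vouched for them; a rule
  # violation flags the offender and docks the reputation of the vouchers
  # up to two layers above.
  from dataclasses import dataclass, field

  @dataclass
  class Member:
      name: str
      vouchers: list = field(default_factory=list)  # who let this person in
      reputation: int = 100
      flagged: bool = False

  members: dict[str, Member] = {}

  def join(name: str, voucher_names: list, quorum: float = 0.5) -> bool:
      # New members need vouches from a fraction of those asked (founders bootstrap with []).
      ok = [v for v in voucher_names if v in members and not members[v].flagged]
      if voucher_names and len(ok) / len(voucher_names) < quorum:
          return False
      members[name] = Member(name, vouchers=ok)
      return True

  def punish(name: str, depth: int = 2, penalty: int = 40):
      # Flag the offender; vouchers up the chain lose reputation, halved per layer.
      if depth < 0 or name not in members:
          return
      m = members[name]
      if depth == 2:
          m.flagged = True          # only the offender is hard-flagged
      m.reputation -= penalty
      for v in m.vouchers:
          punish(v, depth - 1, penalty // 2)

  for founder in ["ann", "bob", "cal"]:
      join(founder, [])
  join("dan", ["ann", "bob"])
  join("eve", ["dan"])
  punish("eve")                     # eve flagged; dan, then ann/bob, lose reputation
  print({n: (m.reputation, m.flagged) for n, m in members.items()})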

replies(1): >>wpietr+q94
◧◩
86. notabe+Rz1[view] [source] [discussion] 2023-03-18 22:22:22
>>Alex39+rf
I'm constantly curious whether anyone working in the AI space is cognizant of the Tower of Babel myth.

I don't think an arms race for convincing looking bullshit is going to turn out well for our species.

◧◩◪◨⬒
87. trippi+QU1[view] [source] [discussion] 2023-03-19 01:24:40
>>mlhpdx+oL
I genuinely want to understand whether there’s a meaningful difference between non-hallucinatory and hallucinatory content generation other than “real world correctness”.
replies(1): >>mlhpdx+pH8
◧◩◪◨⬒⬓⬔
88. bombol+Or2[view] [source] [discussion] 2023-03-19 08:23:20
>>lifeis+0r
> something like 2 billion people have a phone with a secure enclave capable of this in their pockets today - and they use it everyday for logins, payment and paying at the car park.

They don't own a key pair. They carry one around, which is owned by Google or some other entity?

◧◩◪◨⬒
89. Sai_+NH2[view] [source] [discussion] 2023-03-19 11:46:03
>>permo-+V51
They don’t have control over the output. They created something that creates something else. They can only tweak what they created, not whatever was created by what they created.

E.g., if I create a great paintbrush which creates amazing spatter designs on the wall when it is used just so, then, beyond a point, I have no way to control the spatter designs - I can only influence the designs to some extent.

replies(1): >>permo-+LQ8
◧◩◪◨⬒
90. lifeis+hJ2[view] [source] [discussion] 2023-03-19 12:02:42
>>wpietr+PI
I am guessing that "fixed identity information" is not a key pair?
◧◩◪◨⬒
91. lanter+MO2[view] [source] [discussion] 2023-03-19 12:58:34
>>permo-+V51
This is true, but detecting and omitting code hallucinations is (functionally) as hard as just not hallucinating in the first place.
◧◩◪◨⬒
92. wpietr+q94[view] [source] [discussion] 2023-03-19 21:04:43
>>coldte+Qx1
Are you talking about in-person verification and vouching? Or can it be digitally mediated?

If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.

If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person.

replies(1): >>coldte+mg4
◧◩◪◨⬒⬓
93. coldte+mg4[view] [source] [discussion] 2023-03-19 21:50:21
>>wpietr+q94
>Are you talking about in-person verification and vouching? Or can it be digitally mediated?

Yes and yes.

>If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.

It's happened already in some cases, e.g.: https://en.wikipedia.org/wiki/Real-name_system

>If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person

How about a requirement to personally know the other person in what hackers in the past called "meatspace"?

Just brainstorming here, but for a cohesive forum, even of tens of thousands of people, it shouldn't be that difficult to achieve.

For something Facebook / Twitter scale, it would take "bulk verifiers" that are trusted, and where you need to register in person.

◧◩◪◨⬒⬓⬔
94. accoun+W46[view] [source] [discussion] 2023-03-20 13:15:37
>>lifeis+jI
It might get to be that way some day, but for now there is recourse. France is (in)famous for it, and they are currently making use of it.

And this is important because a "fair democratic society" that doesn't need people to be able to protest is, as history has shown many times, only a temporary affair. The best way to keep it is to not give the government the tools a worse government could use to suppress dissent.

◧◩◪◨⬒⬓
95. mlhpdx+pH8[view] [source] [discussion] 2023-03-21 00:51:23
>>trippi+QU1
I'm far from an expert, but as I understand it the reference point isn't so much the "real world" as it is the training data. A hallucination is the model generating a strongly weighted association that isn't in the data, and that perhaps shouldn't exist at all. I'd prefer a word like "superstition"; it seems more relatable.
◧◩◪◨⬒⬓
96. permo-+LQ8[view] [source] [discussion] 2023-03-21 02:06:44
>>Sai_+NH2
did you read what I said?
◧◩◪◨⬒
97. permo-+nef[view] [source] [discussion] 2023-03-22 19:47:04
>>bombca+p01
this isn't a good explanation. these LLMs are essentially statistical models. when they "hallucinate", they're not "imagining" or "dreaming", they're simply producing a string of results that your prompt combined with its training corpus implies to be likely
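
a toy illustration of what "likely" means here (made-up numbers; a real model samples over tens of thousands of tokens, not four):

  # toy next-token step: the model turns a context into a probability
  # distribution over tokens and picks from it; truth never enters into it.
  import math, random

  logits = {"paris": 6.1, "lyon": 3.2, "banana": 0.4, "the": 2.0}  # pretend output for "the capital of france is"
  z = sum(math.exp(v) for v in logits.values())
  probs = {tok: math.exp(v) / z for tok, v in logits.items()}

  next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
  print(probs, "->", next_token)
  # a confidently wrong continuation is produced by exactly the same step
  # as a confidently right one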
◧◩◪◨
98. permo-+Cef[view] [source] [discussion] 2023-03-22 19:48:00
>>aent+Eo
evidently, they can hard-code exceptions into it. this idea that it's entirely a black box that they have no control over is really strange and incorrect and feels to me like little more than contrarianism to my comment
[go to top]