zlacker

[parent] [thread] 117 comments
1. altdat+(OP)[view] [source] 2023-11-20 05:36:55
OpenAI has hundreds more employees, all of whom are incredibly smart. While they will definitely lose the leadership and talent of those two, it’s not as if a nuclear bomb dropped on their HQ and wiped out all their engineers!

So questioning whether they will survive seems very silly and incredibly premature to me

replies(12): >>jxf+v >>questi+Q >>alsodu+Y >>simult+p1 >>patapo+g2 >>karmas+k2 >>spacem+05 >>blast+65 >>thekev+c5 >>Dantes+4b >>pedros+Vc >>laurel+5D
2. jxf+v[view] [source] 2023-11-20 05:40:15
>>altdat+(OP)
But a number of those other employees have said they'll leave if Altman isn't rehired.
replies(1): >>zombiw+G1
3. questi+Q[view] [source] 2023-11-20 05:41:33
>>altdat+(OP)
The perception right now is that the board doesn't care about investors, and that will kill a company that is burning money at an insane rate. Employees will run for the exits unless they are convinced that there is a future exit.
4. alsodu+Y[view] [source] 2023-11-20 05:42:18
>>altdat+(OP)
Pretty much every researcher I know at OpenAI who is on twitter retweeted Sam Altman's heart tweet with their own heart or some other supportive message.

I'm sure that's a sign that they are all team Sam - this includes a ton of researchers you see on most papers that came out of OpenAI. That's a good chunk of their research team, and that'd be a very big loss. Also there are tons of engineers (and I know a few of them) who joined OpenAI recently with purely financial incentives. They'll jump to Sam's new company because of course that's where they'd make real money.

This coupled with investors like Microsoft backing off definitely makes it fair to question the survival of OpenAI in the form we see today.

And this is exactly what makes me question Adam D'Angelo's motives as a board member. Maybe he wanted OpenAI to slow down or stop existing, to keep his Poe by Quora (and their custom assistants) relevant. GPT Agents pretty much did what Poe was doing overnight, and you can have as many of them as you want with your existing $20 ChatGPT Plus subscription. But who knows, I'm just speculating here like everyone else.

replies(8): >>morale+K1 >>154573+02 >>halduj+12 >>threes+33 >>alex_y+D5 >>behrin+ic >>babysh+kd >>zq+9e
5. simult+p1[view] [source] 2023-11-20 05:44:45
>>altdat+(OP)
With PR damage such as this, if they survive it will be a miracle.
◧◩
6. zombiw+G1[view] [source] [discussion] 2023-11-20 05:46:23
>>jxf+v
Bullshit. They are not quitting
replies(4): >>154573+j2 >>Techni+l2 >>bartim+2d >>icy_de+ue
◧◩
7. morale+K1[view] [source] [discussion] 2023-11-20 05:46:48
>>alsodu+Y
Also, serious investors won't touch OpenAI with a ten-foot pole after these events.

There's an idealistic bunch of people who think this was the best thing to happen to OpenAI. Time will tell, but I personally think this is the end of the company (and Ilya).

Satya must be quite pissed off, and rightly so: he gave them big money, believed in them, and got backstabbed as well. Setting @sama aside, MS is their single largest investor, and that didn't even warrant a courtesy phone call to let them know about this whole fiasco (even though some savants were saying they shouldn't have to, because they "only" owned 49% of the LLC. LMAO).

The next bit of news will be Microsoft pulling out of the deal, but, unlike this board, Satya is not a manchild going through a crisis, so it will happen without it being a scandal. MS should probably just grow their own AI in-house at this point; they have all the resources in the world to do so. People who think that MS (a ~50-year-old company, with 200k employees, valued at almost 3 trillion) is now lost without OpenAI and the Ilya gang must have room temperature IQs.

replies(3): >>visarg+96 >>clover+c6 >>didibu+xe
◧◩
8. 154573+02[view] [source] [discussion] 2023-11-20 05:47:50
>>alsodu+Y
It's always been my observation that the actual heavyweights of any hardcore engineering project are the ones that avoid snarky lightweight platforms like twitter like the plague.

I would imagine that if you based hiring and firing decisions on the metric of 'how often this employee tweets' you could quite effectively cut deadwood.

With that in mind...

replies(5): >>alsodu+w2 >>Offici+C2 >>karmas+83 >>kvathu+H3 >>dorkwo+C4
◧◩
9. halduj+12[view] [source] [discussion] 2023-11-20 05:48:00
>>alsodu+Y
> Pretty much every researcher I know at OpenAI who is on twitter

Selection bias?

replies(2): >>alsodu+L2 >>qwerto+W5
10. patapo+g2[view] [source] 2023-11-20 05:50:13
>>altdat+(OP)
I am guessing they are super reliant on Microsoft to keep running ChatGPT... If Microsoft decides to get out and finds a way to do so, they would be in deep trouble.
replies(1): >>sangno+J8
◧◩◪
11. 154573+j2[view] [source] [discussion] 2023-11-20 05:50:29
>>zombiw+G1
They're either not quitting or they've outed themselves as being part of a personality cult and they'll just hinder things if they're not ejected promptly.
12. karmas+k2[view] [source] 2023-11-20 05:50:33
>>altdat+(OP)
Survive, as in keep existing? They will.

But this is a disaster that can't be sugarcoated. Working at an AI company with a doomer at its head is ridiculous. It would be like working at a tobacco company advocating for lung cancer awareness.

I don't think the new CEO can do anything to win back trust in a record-short amount of time. The Sam loyalists will leave. The questions remain: how is the new CEO going to hire new people, will he be able to do so fast enough, and will the ones who remain accept a company that is drastically different?

replies(2): >>bottle+93 >>peanut+N3
◧◩◪
13. Techni+l2[view] [source] [discussion] 2023-11-20 05:50:51
>>zombiw+G1
Even if you don’t believe many employees would consider leaving for Altman, I find it probable that many would consider leaving for financial reasons. What will their PPUs be worth if OpenAI is seen as a funding risk?
◧◩◪
14. alsodu+w2[view] [source] [discussion] 2023-11-20 05:51:31
>>154573+02
That's not the case with the AI community. Twitter is heavily used by almost every professor/researcher/PhD student working in machine learning. Ilya has an account. Heck, even Jitendra Malik, who's probably as old as my grandfather, joined twitter.
replies(1): >>halduj+se
◧◩◪
15. Offici+C2[view] [source] [discussion] 2023-11-20 05:51:57
>>154573+02
I have never used twitter but this strikes me as a strange take at best. Many of the most brilliant and passionate engineers I've had the pleasure to work with have been massive shitposters.
replies(1): >>154573+Z2
◧◩◪
16. alsodu+L2[view] [source] [discussion] 2023-11-20 05:52:52
>>halduj+12
Not if it's a big sample set. There's a guy on twitter who made a list of every OpenAI researcher he could find on twitter, and almost all of them reacted to Sam's tweet in a supportive way.
replies(5): >>halduj+A4 >>154573+95 >>ethbr1+Y5 >>ignora+k6 >>djvdq+da
◧◩◪◨
17. 154573+Z2[view] [source] [discussion] 2023-11-20 05:54:03
>>Offici+C2
> massive shitposters

Yes, agreed, but on _twitter_?

The massive_disgruntled_engineer_rant does have a lot of precedent but I've never considered twitter to be their domain. Mailing lists, maybe.

replies(1): >>xcv123+si
◧◩
18. threes+33[view] [source] [discussion] 2023-11-20 05:54:13
>>alsodu+Y
Team Sam = Team Money.

If you're an employee at OpenAI there is a huge opportunity to leave and get in early with decent equity at potentially the next giant tech company.

Pretty sure everyone at OpenAI's HQ in San Francisco remembers how many overnight millionaires Facebook's IPO created.

replies(6): >>majika+35 >>bnralt+O5 >>j7ake+r8 >>zo1+R8 >>tempsy+Ha >>behrin+Ec
◧◩◪
19. karmas+83[view] [source] [discussion] 2023-11-20 05:54:32
>>154573+02
Discrediting people for using twitter is a weird take, and doesn't resemble critical thinking to me.
replies(1): >>garden+aG
◧◩
20. bottle+93[view] [source] [discussion] 2023-11-20 05:54:32
>>karmas+k2
Ah yes you're either a doomer or e/acc. Pick an extreme. Everything must be polarized.
replies(1): >>astran+J9
◧◩◪
21. kvathu+H3[view] [source] [discussion] 2023-11-20 05:58:21
>>154573+02
Completely disagree: Yann LeCun, John Carmack, Rui Ueyama, Andrei Alexandrescu, Matt Godbolt, Horace He, Tarun Chitra, George Hotz, etc.
◧◩
22. peanut+N3[view] [source] [discussion] 2023-11-20 05:58:51
>>karmas+k2
Surely the employees knew before joining that OpenAI is a non-profit aiming to develop safe AGI?
replies(2): >>alexga+U6 >>sgift+27
◧◩◪◨
23. halduj+A4[view] [source] [discussion] 2023-11-20 06:04:39
>>alsodu+L2
Large sample =/= (inherently) representative. What percentage of OpenAI researchers are on Twitter?

Follow-up: Why is only some fraction on Twitter?

This is almost certainly a confounder, as is often the case when discussing reactions on Twitter vs reactions in the population.

◧◩◪
24. dorkwo+C4[view] [source] [discussion] 2023-11-20 06:04:48
>>154573+02
> It's always been my observation that the actual heavyweights of any hardcore engineering project are the ones that avoid snarky lightweight platforms like twitter like the plague.

What other places are there to engage with the developer community?

replies(1): >>154573+66
25. spacem+05[view] [source] 2023-11-20 06:06:48
>>altdat+(OP)
If the funding dries up for OpenAI, those engineers have no incentive to keep working there. No point wasting your career on an organization that's destined to die.
◧◩◪
26. majika+35[view] [source] [discussion] 2023-11-20 06:06:56
>>threes+33
Money = building boring enterprise products, not building AI gods, I would suspect
replies(1): >>threes+26
27. blast+65[view] [source] 2023-11-20 06:07:12
>>altdat+(OP)
> it’s not as if a nuclear bomb dropped on their HQ

Oh yes it is.

◧◩◪◨
28. 154573+95[view] [source] [discussion] 2023-11-20 06:07:33
>>alsodu+L2
> every OpenAI researcher he could find on twitter

Literally the literal definition of 'selection bias' dude, like, the pure unadulterated definition of it.

replies(1): >>alsodu+36
29. thekev+c5[view] [source] 2023-11-20 06:07:39
>>altdat+(OP)
Funny you should reference a nuclear bomb. This was 14 minutes after your post.

https://twitter.com/karpathy/status/1726478716166123851

◧◩
30. alex_y+D5[view] [source] [discussion] 2023-11-20 06:09:56
>>alsodu+Y
The heart tweet rebellion is about as meaningful as adding a hashtag supporting one side of your favorite conflict.

Come on. “By 5 pm everyone will quit if you don’t do x”. Response: tens of heart emojis.

replies(5): >>alsodu+E6 >>hipade+y8 >>happyt+39 >>london+V9 >>teaear+Da
◧◩◪
31. bnralt+O5[view] [source] [discussion] 2023-11-20 06:10:43
>>threes+33
There's a financial incentive. And there will be more opportunity for funding if you jump ship as well (it seems like OpenAI will have difficulty with investors after this).

But also, if you're a cutting edge researcher, do you want to stay at a company that just ousted the CEO because they thought the technology was moving too fast (it sounded like this might be the reason)? You don't want to be shackled by the organization becoming a new MIRI.

replies(1): >>hef198+cc
◧◩◪
32. qwerto+W5[view] [source] [discussion] 2023-11-20 06:11:15
>>halduj+12
Which would mean that he specifically selected who to follow due to their closeness to / alignment with Sam, pre-ousting? How would he do that?
replies(1): >>154573+ga
◧◩◪◨
33. ethbr1+Y5[view] [source] [discussion] 2023-11-20 06:11:35
>>alsodu+L2
How childish are employees to publicly get involved with this on Twitter?

If the CEO of my company got shitcanned and then he/she and the board were feuding?

... I'd talk to my colleagues and friends privately, and not go anywhere near the dumpster fire publicly. If I felt strongly, hell, turn in my resignation. But 100% "no comment" in public.

replies(3): >>dylan6+u9 >>154573+x9 >>djvdq+Ea
◧◩◪◨
34. threes+26[view] [source] [discussion] 2023-11-20 06:11:42
>>majika+35
OpenAI was building boring enterprise and developer products.

Which likely most of the company was working on.

replies(1): >>sangno+q8
◧◩◪◨⬒
35. alsodu+36[view] [source] [discussion] 2023-11-20 06:11:43
>>154573+95
Like I said, if the subset of OpenAI researchers who are on twitter is very small, sure.

But people in the AI/learning community are very active on twitter. I don't know every AI researcher on OpenAI's payroll, but the fact is that most active researchers (looking at the list of OpenAI paper authors, and tbh the people I know, as a researcher in this space) are on twitter.

replies(2): >>154573+n7 >>halduj+Tc
◧◩◪◨
36. 154573+66[view] [source] [discussion] 2023-11-20 06:11:55
>>dorkwo+C4
Engagement is not necessarily constructive engagement
replies(1): >>dorkwo+O9
◧◩◪
37. visarg+96[view] [source] [discussion] 2023-11-20 06:12:04
>>morale+K1
200k MS employees can't do what 500 from OAI can; the more people you pile on the problem, the worse the outcome. The problem with Microsoft is that, like Google, Amazon and IBM, they are not a good medium for radical innovation; they are old, ossified companies. Apple used to be nimble when Steve was alive, but went to coasting mode since then. Having large revenue from an old business is an obstacle in the new world; maybe Apple was only nimble because it had a small market share.
replies(2): >>codebo+N8 >>hn_thr+jb
◧◩◪
38. clover+c6[view] [source] [discussion] 2023-11-20 06:12:25
>>morale+K1
My first question to this scenario would be: Could MS provide the seed funding for Sam's next gig? As in, they bet on OpenAI, and either OpenAI keeps on keeping on or Sam's gig steals the thunder, and they presumably have the cash to play a role in both.
replies(1): >>morale+A31
◧◩◪◨
39. ignora+k6[view] [source] [discussion] 2023-11-20 06:13:26
>>alsodu+L2
A majority of the early team that joined the non-profit OpenAI over BigTech did not do so for money but for its mission. Post-2019 hires may be more aligned with Sam, but the early hires embody OpenAI's charter, Sutskever might argue.

Of course, OpenAI as a cloud platform is DOA if Sam leaves, and that's a catastrophic business hit to take. It is a very bold decision. Whether it was a stupid one, time will tell.

◧◩◪
40. alsodu+E6[view] [source] [discussion] 2023-11-20 06:15:10
>>alex_y+D5
It wasn't a question of "will these people quit their jobs at OpenAI and get into the job market because they support Sam".

It was a question of whether they'd leave OpenAI and join a new company that Sam starts with billions in funding at comparable or higher comp. In that case, of course who the employees are siding with matters.

◧◩◪
41. alexga+U6[view] [source] [discussion] 2023-11-20 06:16:38
>>peanut+N3
OpenAI's recruiting pitch was 5-10+ million/year in the form of equity. The structure of the grants is super weird by traditional big-company standards, but it was plausible enough that you could squint and call it the same. I'd posit that many of the people jumping to OpenAI are doing it for the cash and not the mission.

https://the-decoder.com/openai-lures-googles-top-ai-research....

◧◩◪
42. sgift+27[view] [source] [discussion] 2023-11-20 06:17:11
>>peanut+N3
They thought so. Now, they know that instead they work for one aiming to satisfy the ego of a specific group of people - same as everywhere else.
◧◩◪◨⬒⬓
43. 154573+n7[view] [source] [discussion] 2023-11-20 06:19:50
>>alsodu+36
> But the fact is that most active researchers ... are on twitter

On twitter != 'active on twitter'

There's a biiiiiig difference between being 'on twitter' and what I shall refer to kindly as terminally online behaviour aka 'very active on twitter.'

◧◩◪◨⬒
44. sangno+q8[view] [source] [discussion] 2023-11-20 06:26:43
>>threes+26
OpenAI was building boring enterprise and developer products under Sam Altman's leadership
replies(1): >>mirzap+eo
◧◩◪
45. j7ake+r8[view] [source] [discussion] 2023-11-20 06:26:50
>>threes+33
Salaries at OpenAI already make them millionaires.
◧◩◪
46. hipade+y8[view] [source] [discussion] 2023-11-20 06:27:38
>>alex_y+D5
Anyone worth a shit will leave and go work with Sam. OpenAI will be left with a bunch of below average grifters.
replies(3): >>Gigabl+f9 >>austhr+T9 >>hef198+Eb
◧◩
47. sangno+J8[view] [source] [discussion] 2023-11-20 06:28:22
>>patapo+g2
I'm sure Google will throw a couple of billion their way, given the chance
replies(1): >>exitb+Fb
◧◩◪◨
48. codebo+N8[view] [source] [discussion] 2023-11-20 06:29:04
>>visarg+96
MS isn't starting from scratch: it already has the weights of the world's most powerful LM, and it's all running in their datacenters. Even without Sam, they just need to keep the current momentum going. Maybe axe ChatGPT and focus solely on Bing/Copilot going forward. It would give me great satisfaction to see the laughing-stock search engine of the past decade become the undisputed face of AI over the next.
◧◩◪
49. zo1+R8[view] [source] [discussion] 2023-11-20 06:29:38
>>threes+33
All this talk of a new venture and more money makes this smell highly fishy to me. Take this with a grain of salt, it's a random thought.

It's created huge noise and hype and controversy, and shaken things up to make people "think" they can be in on the next AI hype train "if only" they join whatever Sam Altman does now. Riding the next wave kind of thing because you have FOMO and didn't get in on the first wave.

◧◩◪
50. happyt+39[view] [source] [discussion] 2023-11-20 06:30:54
>>alex_y+D5
I take it you have never made a pledge to someone.

It’s a signal. The only meaning is the circumstances under which the signal is given: Sam made an ask. These were answers.

replies(2): >>alex_y+8a >>154573+Xa
◧◩◪◨
51. Gigabl+f9[view] [source] [discussion] 2023-11-20 06:32:26
>>hipade+y8
Only on HN: your worth is tied to your choice of CEO.
◧◩◪◨⬒
52. dylan6+u9[view] [source] [discussion] 2023-11-20 06:35:11
>>ethbr1+Y5
These are people who are very active on Twitter and work for a company that unashamedly harvested all the data it could, for free, without asking, to make money. It's not like shame and self-respect are allowed anywhere near this company.
◧◩◪◨⬒
53. 154573+x9[view] [source] [discussion] 2023-11-20 06:35:25
>>ethbr1+Y5
tl;dr: Any OAI employee tweeting about this is unhinged.
◧◩◪
54. astran+J9[view] [source] [discussion] 2023-11-20 06:36:24
>>bottle+93
There's a character in HPMOR named after the new CEO.

(That's the religious text of the anti-AI cult that founded OpenAI. It's in the form of a very long Harry Potter fanfic.)

replies(3): >>whatsh+qa >>Feepin+1b >>tempus+lf
◧◩◪◨⬒
55. dorkwo+O9[view] [source] [discussion] 2023-11-20 06:36:47
>>154573+66
That's a strange thing to say. I find a lot of value in the developer community on Twitter. I wouldn't have my career without it.

I also wasn't being facetious. If there are other places to share work and ideas with developers online, I'd love to hear about them!

◧◩◪◨
56. austhr+T9[view] [source] [discussion] 2023-11-20 06:37:15
>>hipade+y8
In a dispute between people willing to sacrifice profit for values and those chasing profit, why on earth would you put the grifters on team values-over-profit?
replies(2): >>throwa+Pb >>bertil+rd
◧◩◪
57. london+V9[view] [source] [discussion] 2023-11-20 06:37:21
>>alex_y+D5
Sam hasn't yet lined up the funding, so he can't yet offer decent jobs, so the OpenAI employees haven't left.

But they will.

◧◩◪◨
58. alex_y+8a[view] [source] [discussion] 2023-11-20 06:39:04
>>happyt+39
This is how one answers if they actually intend to quit: https://x.com/gdb/status/1725667410387378559?s=46&t=Q5EXJgwO...

There's nothing wrong with not following suit; quitting is a brave and radical thing to do. A heart emoji tweet doesn't mean much by itself.

replies(1): >>happyt+kc
◧◩◪◨
59. djvdq+da[view] [source] [discussion] 2023-11-20 06:39:20
>>alsodu+L2
They can support Sam, but still stay in the company.
◧◩◪◨
60. 154573+ga[view] [source] [discussion] 2023-11-20 06:39:27
>>qwerto+W5
Big question!
◧◩◪◨
61. whatsh+qa[view] [source] [discussion] 2023-11-20 06:40:33
>>astran+J9
Is ChatGPT writing this whole dialogue?
◧◩◪
62. teaear+Da[view] [source] [discussion] 2023-11-20 06:42:02
>>alex_y+D5
Talk is easy. But also the good employees will be paid well to get poached.
◧◩◪◨⬒
63. djvdq+Ea[view] [source] [discussion] 2023-11-20 06:42:09
>>ethbr1+Y5
> You should find a better place to work.

Work is work. If you start being emotional about it, that's a bad thing, not a good one.

replies(1): >>154573+Cc
◧◩◪
64. tempsy+Ha[view] [source] [discussion] 2023-11-20 06:42:25
>>threes+33
being a lowly millionaire doesn’t get you much these days. anyone who was hired into a mid-level or senior role was almost certainly already a millionaire
◧◩◪◨
65. 154573+Xa[view] [source] [discussion] 2023-11-20 06:44:06
>>happyt+39
So is this a company or something else that starts with a c? (Thinking of a 4 letter word.)
◧◩◪◨
66. Feepin+1b[view] [source] [discussion] 2023-11-20 06:44:21
>>astran+J9
Sorry, which character are you talking about? (Also lol "religious text", how dare people have didactic opinions.)
replies(1): >>astran+Rb
67. Dantes+4b[view] [source] 2023-11-20 06:44:31
>>altdat+(OP)
Andrej Karpathy literally just tweeted the nuclear radiation emoji lol.
◧◩◪◨
68. hn_thr+jb[view] [source] [discussion] 2023-11-20 06:45:56
>>visarg+96
> Apple used to be nimble when Steve was alive, but went to coasting mode since then

Give me a break. Apple Watch and AirPods are far and away the leaders in their categories, Apple's silicon is a huge leap forward, there is innovation in displays, CarPlay is the standard auto interface for millions of people, and while I may question its utility, the Vision Pro is a technological marvel. iPhone is still a juggernaut (and the only one of these examples that predates Jobs' passing), etc. etc.

Other companies dream about "coasting" as successfully.

replies(1): >>Freedo+xc
◧◩◪◨
69. hef198+Eb[view] [source] [discussion] 2023-11-20 06:48:08
>>hipade+y8
What is it with all this personality cult around founders, CEOs and CTOs nowadays? I thought the cult around Steve Jobs was bad, but it pales in comparison to today.

As soon as one person becomes more important than the team, as in the team starts to be structured around said person instead of with the person, that person should be replaced. Because otherwise the team will not function properly without the "star player", nor is the team more than the sum of its members anymore...

replies(2): >>Closi+Lc >>OscarT+zd
◧◩◪
70. exitb+Fb[view] [source] [discussion] 2023-11-20 06:48:17
>>sangno+J8
Why though? Companies invest to see profit or get products they can sell. This is not only about the CEO. The CEO change signals a radical strategic shift.
◧◩◪◨⬒
71. throwa+Pb[view] [source] [discussion] 2023-11-20 06:49:53
>>austhr+T9
Welcome to hn. Here it's all about money
◧◩◪◨⬒
72. astran+Rb[view] [source] [discussion] 2023-11-20 06:50:07
>>Feepin+1b
The one with the same name as the new CEO. Pretty straightforward.

> Also lol "religious text", how dare people have didactic opinions.

That's not what a religious text is, that'd just be a blog post. It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

replies(1): >>Feepin+4d
◧◩◪◨
73. hef198+cc[view] [source] [discussion] 2023-11-20 06:52:12
>>bnralt+O5
It seems that MS spent 10 billion to become a minority shareholder in a company controlled by a non-profit. They were warned, or maybe Sam even oversold the potential profitability of the investment.

Just as another perspective.

◧◩
74. behrin+ic[view] [source] [discussion] 2023-11-20 06:53:17
>>alsodu+Y
Why a researcher would concern him- or herself with management politics is beyond me. Particularly with a glorified salesman. Sounds like they aren't spending enough time actually working.
replies(4): >>bertil+Nd >>alsodu+3f >>vvrm+7g >>wyager+Qq1
◧◩◪◨⬒
75. happyt+kc[view] [source] [discussion] 2023-11-20 06:53:26
>>alex_y+8a
Did I say there was something wrong with either case? No. I said it was a signal. And it certainly can mean a lot by itself.

You can disagree. You can say only explicit non-emoji messages matter. That’s ok. We can agree to disagree.

◧◩◪◨⬒
76. Freedo+xc[view] [source] [discussion] 2023-11-20 06:55:08
>>hn_thr+jb
> Apple Watch and Air pods are far and away leaders in their category,

By what metric? I prefer open hardware and modifiable software - these products are in no way leaders for me. Not to mention all the bluetooth issues my family and friends have had when trying to use them.

◧◩◪◨⬒⬓
77. 154573+Cc[view] [source] [discussion] 2023-11-20 06:55:15
>>djvdq+Ea
Nah, it's fine to be passionate about your work and relationships with your colleagues.

You just need to temper that before you start swearing oaths of fealty on twitter, because that's giving real Jim Jones vibes, which isn't a good thing.

◧◩◪
78. behrin+Ec[view] [source] [discussion] 2023-11-20 06:55:25
>>threes+33
If you're looking for money you probably chose wrong going with a non-profit.
◧◩◪◨⬒
79. Closi+Lc[view] [source] [discussion] 2023-11-20 06:55:59
>>hef198+Eb
While your post sounds like something that would be true, there are loads of examples where companies have thrived under a clear vision from a specific person.

Steve Jobs, mentioned in the post above, is probably the prime example - Apple just wouldn’t be the company it is today without that period of his singular vision and drive.

Of course they struggled after losing him, but the current version of Apple that has lived with Jobs and lost him is probably better than the hypothetical version of Apple where he never returned.

Great teams are important, but great teams plus great leadership is better.

replies(2): >>_facto+wk >>hef198+1o
◧◩◪◨⬒⬓
80. halduj+Tc[view] [source] [discussion] 2023-11-20 06:57:14
>>alsodu+36
It seems like you're misunderstanding selection bias.

It doesn't matter if it's large, unless the "very active on twitter" group is large enough to be the majority.

The point is that there may be (arguably very likely) a trait AI researchers active on Twitter have in common which differentiates them from the population therefore introducing bias.

It could be that the 30% (made up) of OpenAI researchers who are active on Twitter are startup/business/financially oriented and therefore align with Sam Altman. This doesn't say as much about the other 70% as you think.
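
A toy simulation shows the mechanism (a minimal sketch; every number here is made up for illustration, not OpenAI data):

    import random

    random.seed(0)

    # Hypothetical population of 770 researchers; 30% (made up)
    # align with Sam. True = aligns with Sam.
    population = [True] * 231 + [False] * 539

    # Assume (also made up) the Sam-aligned, business-oriented
    # researchers are far likelier to be visibly active on Twitter.
    on_twitter = [p for p in population
                  if random.random() < (0.8 if p else 0.1)]

    print(f"support in population:  {sum(population) / len(population):.0%}")
    print(f"support among tweeters: {sum(on_twitter) / len(on_twitter):.0%}")
    # ~30% overall, but roughly three quarters of the Twitter-visible
    # sample: the visible slice badly overstates the population.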

replies(1): >>154573+0g
81. pedros+Vc[view] [source] 2023-11-20 06:57:21
>>altdat+(OP)
The GPT-4 pre-training research lead quit on Friday.
◧◩◪
82. bartim+2d[view] [source] [discussion] 2023-11-20 06:58:20
>>zombiw+G1
Maybe not instantly. But there's a version where they don't agree with certain decisions and will now be more open to other opportunities.
◧◩◪◨⬒⬓
83. Feepin+4d[view] [source] [discussion] 2023-11-20 06:58:36
>>astran+Rb
Oh hey there he is, cool. I had a typo in my search, I think.

> That's not what a religious text is, that'd just be a blog post.

Yes, almost as if "Lesswrong is a community blog dedicated to refining the art of human rationality."

> It's the part where reading it causes you to join a cult group house polycule and donate all your money to stopping computers from becoming alive.

I don't think anybody either asked somebody to, or actually did, donate all their money. As to "joining a cult group house polycule", to my knowledge that's just SF. There's certainly nothing in the Sequences about how you have to join a cult group house polycule. To be honest, I consider all the people who joined cult group house polycules, whose existence I don't deny, to have a preexisting cult group house polycule situational condition. (Living in San Francisco, that is.)

replies(2): >>avalys+ug >>astran+hj
◧◩
84. babysh+kd[view] [source] [discussion] 2023-11-20 07:00:20
>>alsodu+Y
Presumably there is some IP assignment agreement that would make it tricky for Sam to start an OpenAI competitor without a lot of legal exposure?
◧◩◪◨⬒
85. bertil+rd[view] [source] [discussion] 2023-11-20 07:01:29
>>austhr+T9
I'm assuming the original comment meant that the grifters would not be extended a new offer after their colleagues at OpenAI learned that they were not as good as their CVs said.
◧◩◪◨⬒
86. OscarT+zd[view] [source] [discussion] 2023-11-20 07:01:58
>>hef198+Eb
People love to pick sides then retroactively rationalise that decision. None of us reading about it have the facts required to make a rational judgement. So it's Johnny vs Amber time.
◧◩◪
87. bertil+Nd[view] [source] [discussion] 2023-11-20 07:03:16
>>behrin+ic
My experience of academic research is that there's a lot of energy spent on laboratory politics.
◧◩
88. zq+9e[view] [source] [discussion] 2023-11-20 07:05:42
>>alsodu+Y
The two most important to OpenAI's mission - Alec Radford and Ilya Sutskever - did not respond with a heart.
◧◩◪◨
89. halduj+se[view] [source] [discussion] 2023-11-20 07:07:32
>>alsodu+w2
Mostly for professional purposes such as networking and promoting academic activities. Sometimes for their side startups.

I rarely see a professor or PhD student voicing a political viewpoint (which is what the Sam Altman vs Ilya Sutskever debate is) on their Twitter.

◧◩◪
90. icy_de+ue[view] [source] [discussion] 2023-11-20 07:08:06
>>zombiw+G1
You're right. They're fired.
◧◩◪
91. didibu+xe[view] [source] [discussion] 2023-11-20 07:08:38
>>morale+K1
But OpenAI is a non-profit that was pursuing a goal for which it saw financial incentives as misaligned.

That's kind of what got it this far, because every other company didn't really see the benefit of going straight for AGI, instead working on incremental additions and small iterations.

I don't know why the board decided to do what it did, but maybe it saw that OpenAI was moving away from R&D and too much into operations and selling a product.

So my point is that OpenAI started as a charity and was literally set up in a way to protect that model, by having the for-profit arm be governed by the non-profit wing.

The funny thing is, Sam Altman himself was among the people who wanted it that way, along with Elon Musk, Ilya and others.

And I kind of agree: what kind of future is there here? OpenAI becomes another billion-dollar startup that what? Eventually sells out with a big exit?

It's possible to see the whole venture as taking away from the goal set out by the non-profit.

◧◩◪
92. alsodu+3f[view] [source] [discussion] 2023-11-20 07:11:45
>>behrin+ic
It's not just management politics - it's about money and what they want to work on.

A lot of researchers like to work on cutting-edge stuff that actually ends up in a product. Part of the reason why so many researchers moved from Google to OpenAI was to be able to work on products that get into production.

> Particularly with a glorified salesman
> Sounds like they aren't spending enough time actually working.

Lmao, I love how people come down to personal attacks.

◧◩◪◨
93. tempus+lf[view] [source] [discussion] 2023-11-20 07:14:17
>>astran+J9
Imagine how bad a reputation EA would have if the general public knew about HPMOR
replies(1): >>xvecto+4x
◧◩◪◨⬒⬓⬔
94. 154573+0g[view] [source] [discussion] 2023-11-20 07:17:08
>>halduj+Tc
You reckon 30% (made up) of staff having a personal 'alignment' with (or, put another way, 'having sworn an oath of fealty to') a CEO is something investors would like?

Seems like a bit of a commercial risk there if the CEO can 'make' a third of the company down tools.

replies(1): >>halduj+Ok
◧◩◪
95. vvrm+7g[view] [source] [discussion] 2023-11-20 07:18:03
>>behrin+ic
Because a salesman’s skills complement those of a researcher. The salesman sells what the researcher built and brings in money to keep the lights on. The researcher gets to do what they love without having to worry about the real world. That’s a much sweeter deal than a micromanaging PI.
◧◩◪◨⬒⬓⬔
96. avalys+ug[view] [source] [discussion] 2023-11-20 07:20:18
>>Feepin+4d
“The Sequences”? Yes, this doesn’t sound like a quasi-religious cult at all…
replies(2): >>astran+5j >>Feepin+mj
◧◩◪◨⬒
97. xcv123+si[view] [source] [discussion] 2023-11-20 07:33:27
>>154573+Z2
Yes, on Twitter. Mailing lists are old boomer shit.
replies(1): >>154573+dE
◧◩◪◨⬒⬓⬔⧯
98. astran+5j[view] [source] [discussion] 2023-11-20 07:38:31
>>avalys+ug
The message is that if you do math in your head in a specific way involving Bayes' theorem, it will make you always right about everything. So it's not even quasi-religious, the good deity is probability theory and the bad one is evil computer gods.

This then causes young men to decide they should be in open relationships because it's "more logical", and then decide they need to spend their life fighting evil computer gods because the Bayes' theorem thing is weak to an attack called "Pascal's mugging" where you tell them an infinitely bad thing has a finite chance of happening if they don't stop it.

Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

https://metarationality.com/bayesianism-updating

Bit old but still relevant.

replies(1): >>Feepin+Sj
◧◩◪◨⬒⬓⬔
99. astran+hj[view] [source] [discussion] 2023-11-20 07:39:36
>>Feepin+4d
Well, Berkeley isn't exactly San Francisco, but joining cults is all those people get up to there. Some are Buddhist, some are Leverage, some are Lesswrong.

The most recent case was notably in the Bahamas though.

◧◩◪◨⬒⬓⬔⧯
100. Feepin+mj[view] [source] [discussion] 2023-11-20 07:40:16
>>avalys+ug
As far as I can tell, any single noun that's capitalized sounds religious. I blame the Bible. However, in this case it's just a short-hand for the sequences of topically related blog posts written by Eliezer between 2006 and 2009, which are written to fit together as one interconnected work. (https://www.lesswrong.com/tag/sequences , https://www.readthesequences.com/)
◧◩◪◨⬒⬓⬔⧯▣
101. Feepin+Sj[view] [source] [discussion] 2023-11-20 07:43:42
>>astran+5j
> This then causes young men to decide they should be in open relationships because it's "more logical"

Yes, which is 100% because of "LessWrong" and 0% because groups of young nerds do that every time, so much so that there's actually an XKCD about it (https://xkcd.com/592/).

The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place. LessWrong does not mandate, nor would that be a good idea, that you manually calculate these updates: humans are very bad at it.
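
The update itself is one line of arithmetic. A toy sketch with invented numbers (a test with 90% sensitivity and a 10% false-positive rate, applied to a hypothesis with a 1% prior):

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    prior, sens, fpr = 0.01, 0.9, 0.1
    posterior = sens * prior / (sens * prior + fpr * (1 - prior))
    print(f"{posterior:.1%}")  # ~8.3%: the evidence helps, but the prior still dominates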

> Also they invent effective altruism, which works until the math tells them it's ethical to steal a bunch of investor money as long as you use it on charity.

Given that this didn't happen with anyone else, and most other EAs will tell you that it's morally correct to uphold the law, and in any case nearly all EAs will act like it's morally correct, I'm inclined to think this was an SBF thing, not an EA thing. Every belief system will have antisocial adherents.

replies(1): >>astran+wl
◧◩◪◨⬒⬓
102. _facto+wk[view] [source] [discussion] 2023-11-20 07:49:02
>>Closi+Lc
Newsflash. Altman is no Steve Jobs.
replies(1): >>Closi+qn9
◧◩◪◨⬒⬓⬔⧯
103. halduj+Ok[view] [source] [discussion] 2023-11-20 07:50:22
>>154573+0g
I randomly chose 30% to represent a seemingly large, non-majority sample which may not be representative of the underlying population.

I have no idea what the actual proportion is, nor how investors feel about this right now.

The true proportion of researchers who actively voice their political positions on twitter is probably much smaller and almost certainly a biased sample.

◧◩◪◨⬒⬓⬔⧯▣▦
104. astran+wl[view] [source] [discussion] 2023-11-20 07:53:33
>>Feepin+Sj
> The actual message regarding Bayes' Theorem is that there is a correct way to respond to evidence in the first place.

No, there isn't a correct way to do anything in the real world, only in logic problems.

This would be well known if anyone had read philosophy; it's the failed program of logical positivism. (Also the failed 70s-ish AI programs of GOFAI.)

The main reason it doesn't work is that you don't know what all the counterfactuals are, so you'll miss one. Aka what Rumsfeld once called "unknown unknowns".

https://metarationality.com/probabilism

> Given that this didn't happen with anyone else

They're instead buying castles, deciding scientific racism is real (though still buying mosquito nets for the people they're racist about), and getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

And of course, they think evil computer gods are going to kill them.

replies(1): >>Feepin+0n
◧◩◪◨⬒⬓⬔⧯▣▦▧
105. Feepin+0n[view] [source] [discussion] 2023-11-20 08:02:01
>>astran+wl
> No, there isn't a correct way to do anything in the real world, only in logic problems.

Agree to disagree? If there's one thing physics teaches us, it's that the real world is just math. I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were. Re counterfactuals, yes, the problem is uncomputable at the limit. That's not "unknown unknowns", that's just the problem of induction. However, it's not like there's any alternative system of knowledge that can do better. The point isn't to be right all the time, the point is to make optimal use of available evidence.

> buying castles

They make the case that the castle was good value for money, and given the insane overhead for renting meeting spaces, I'm inclined to believe them.

> scientific racism is real (though still buying mosquito nets for the people they're racist about)

Honestly, give me scientific racists who buy mosquito nets over antiracists who don't any day.

> getting tripped up reinventing Jainism when they realize drinking water causes infinite harm to microscopic shrimp.

As far as I can tell, that's one guy.

> And of course, they think evil computer gods are going to kill them.

I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?

replies(1): >>astran+bK
◧◩◪◨⬒⬓
106. hef198+1o[view] [source] [discussion] 2023-11-20 08:06:32
>>Closi+Lc
Steve Jobs is actually a great example: he was successfully replaced twice, once after he almost ran Apple into the ground and then after his death. In fact, he shows how to build an org that explicitly does not depend on one star player.
◧◩◪◨⬒⬓
107. mirzap+eo[view] [source] [discussion] 2023-11-20 08:07:16
>>sangno+q8
And that could be a core problem. He wasn't really free to decide the speed of development. He wanted to change that and deliver faster. Obviously, they achieved something in the past weeks, so doomers pulled the plug to stop him.
◧◩◪◨⬒
108. xvecto+4x[view] [source] [discussion] 2023-11-20 08:44:17
>>tempus+lf
Even HP fanfiction lovers HATED HPMOR. It had a clowny reputation

It is wild to see how closely connected the web is though. Yudkowsky, Shear, and Sutskever. The EA movement today controls a staggering amount of power.

replies(1): >>astran+wG
109. laurel+5D[view] [source] 2023-11-20 09:18:29
>>altdat+(OP)
> and talent of those two

You are aware that more than just 2 people departed?

◧◩◪◨⬒⬓
110. 154573+dE[view] [source] [discussion] 2023-11-20 09:25:04
>>xcv123+si
That's funny
◧◩◪◨
111. garden+aG[view] [source] [discussion] 2023-11-20 09:36:23
>>karmas+83
Since Twitter has been so controversial I don't think it's strange to discredit people using it. The people still using it are just addicted to attention.
replies(1): >>154573+oF3
◧◩◪◨⬒⬓
112. astran+wG[view] [source] [discussion] 2023-11-20 09:38:42
>>xvecto+4x
Here's the new CEO expressing the common EA belief that (theoretical, world-ending) AI is worse than the Nazis, because once you show them a thought experiment that might possibly be true, they're completely incapable of not believing in it.

https://x.com/eshear/status/1664375903223427072?s=46

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
113. astran+bK[view] [source] [discussion] 2023-11-20 10:00:37
>>Feepin+0n
> I mean, re GOFAI, it's not like Transformers and DL are any less "logic problem" than Eurisko or Eliza were.

Hmm, they're not a complete anything but they're pretty different as they're not discrete. That's how we can teach them undefinable things like writing styles. It seems like a good ingredient.

Personally I don't think you can create anything that's humanlike without being embodied in the world, which is mostly there to keep you honest and prevent you from mixing up your models (whatever they're made of) with reality. So that really limits how much "better" you can be.

> That's not "unknown unknowns", that's just the problem of induction.

This is the exact argument the page I linked discusses. (Or at least the whole book is.)

> However, it's not like there's any alternative system of knowledge that can do better.

So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it. (A religion meaning a principle you orient your life around that gives it unrealistically excessive meaning, aka the opposite of nihilism.)

> I mean, I do think that, yes. Got any argument against it other than "lol sci-fi"?

That's a great argument. The book I linked calls it "reasonableness". It's not a rational one though, so it's hard to use.

Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.

Main "logical" issue with it though is that it seems to ignore that things cost money, like where the evil AI is going to get the compute credits/GPUs/power bills to run itself.

But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.

replies(1): >>Feepin+fR
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
114. Feepin+fR[view] [source] [discussion] 2023-11-20 10:51:38
>>astran+bK
Iunno, quantized networks are pretty discrete. It seems a lot of the continuity only really has value during training. (If that!)

> So's this. It's true; no system of rationalism can be correct because the real world isn't discrete, and none are better than this one, but also this one isn't correct. So you should not start a religion based on it.

I mean, nobody's actually done this. Honestly I hear more about Bayes' Theorem from rationality critics than rationalists. Do some people take it too far? Sure.

But also

> the real world isn't discrete

That's a strange objection. Our data channels are certainly discrete: a photon either hits your retina or it doesn't. Neurons firing or not is pretty discrete, physics is maybe discrete... I'd say reality being continuous is as much speculation as it being discrete is. At any rate, the problem of induction arises just as much in a discrete system as in a continuous one.

> Example: if someone comes to you and tries to make you believe in Russell's teapot, you should ignore them even though they might be right.

Sure, but you should do that because you have no evidence for Russell's Teapot. The history of human evolution and current AI revolution are at least evidence for the possibility of superhuman intelligence.

"A teapot in orbit around Jupiter? Don't be ridiculous!" is maybe the worst possible argument against Russell's Teapot. There are strong reasons why there cannot be a teapot there, and this argument touches upon none of them.

If somebody comes to you with an argument that the British have started a secret space mission to Jupiter, and being British they'd probably taken a teapot along, then you will need to employ different arguments than if somebody asserted that the teapot just arose in orbit spontaneously. The catch-all argument about ridiculousness no longer works the same way. And hey, maybe you discover that the British did have a secret space program and a Jupiter cult in government. Proposing a logical argument creates points at which interacting with reality may change your mind. Scoffing and referring to science fiction gives you no such avenue.

> But a reasonable real world analog would be industrial equipment, which definitely can kill you but we more or less have under control. Or cars, which we don't really have under control and just ignore it when they kill people because we like them so much, but they don't self-replicate and do run out of gas. Or human babies, which are self-replicating intelligences that can't be aligned but so far don't end the world.

The thing is that reality really has no obligation to limit itself to what you consider reasonable threats. Was the asteroid that killed the dinosaurs a reasonable threat? It would have had zero precedents in their experience. Our notion of reasonableness is a heuristic built from experience, it's not a law. There's a famous term, "black swan", about failures of heuristics. But black swans are not "unknown unknowns"! No biologist would ever have said that black swans were impossible, even if they'd never seen nor heard of one. The problem of induction is not an excuse to give up on making predictions. If you know how animals work, the idea of a black swan is hardly out of context, and finding a black swan in the wild does not pose a problem for the field of biology. It is only common sense that is embarrassed by exceptions.

◧◩◪◨
115. morale+A31[view] [source] [discussion] 2023-11-20 12:15:25
>>clover+c6
Surprise surprise!

https://x.com/satyanadella/status/1726509045803336122?s=46

◧◩◪
116. wyager+Qq1[view] [source] [discussion] 2023-11-20 14:08:51
>>behrin+ic
Given that the board coup was orchestrated by AI safetyists, it likely has a pretty direct bearing on life as a researcher. What are you allowed to work on? What procedures and red tape are in place? Etc.
◧◩◪◨⬒
117. 154573+oF3[view] [source] [discussion] 2023-11-21 00:13:08
>>garden+aG
Yup. 'Tweeter' is a personality type.
◧◩◪◨⬒⬓⬔
118. Closi+qn9[view] [source] [discussion] 2023-11-22 13:34:22
>>_facto+wk
Newsflash. I didn't claim he was.