zlacker

Inside The Chaos at OpenAI

submitted by maxuti+(OP) on 2023-11-20 02:23:28 | 281 points 137 comments
[view article] [source]

1. m15i+t1[view] [source] 2023-11-20 02:32:52
>>maxuti+(OP)
https://archive.md/PvbqA
16. PieUse+84[view] [source] 2023-11-20 02:53:31
>>tangju+l6
This works better: https://12ft.io/https://www.theatlantic.com/technology/archi...
31. cmrdpo+h5[view] [source] [discussion] 2023-11-20 03:00:33
>>jrm4+93
Yeah: https://www.youtube.com/watch?v=j_JW9NwjLF8

I'd say yes, Sutskever is... naive? though very smart. Or just utopian. Seems he couldn't get the scale he needed/wanted out of a university (or Google) research lab. But the former at least would have bounded things better in the way he would have preferred, from an ethics POV.

Jumping into bed with Musk and Altman and hoping for ethical non-profit "betterment of humanity" behaviour is laughable. Getting access to capital was obviously tempting, but ...

As for Altman. No, he's not naive. Amoral, and likely proud of it. JFC ... Worldcoin... I can't even...

I don't want either of these people in charge of the future, frankly.

It does point to the general lack of funding for R&D of this type of thing. Or it's still too early to be doing this kind of thing at scale. I dunno.

Bleak.

42. cmrdpo+56[view] [source] 2023-11-20 03:06:39
>>maxuti+(OP)
Can't find the article right now, but there was one circulating that heavily implied various SV execs began their rounds of layoffs last fall at least partly inspired by the demos they'd seen of OpenAI's tech.

Microsoft in particular laid off 10,000 and then immediately turned around and invested billions more in OpenAI: https://www.sdxcentral.com/articles/news/microsoft-bets-bill... -- last fall, just as the timeline laid out in the Atlantic article was firing up.

In that context this timeline is even more nauseating. Not only did OpenAI push ChatGPT at the expense of their own mission and their employees' well-being, they likely caused massive harm to our employment sector and the well-being of tens of thousands of software engineers in the industry at large.

Maybe those layoffs would have happened anyway, but the way this has all rolled out, and the way it's played out in the press and in the boardrooms of the big tech corporations... OpenAI is literally accomplishing the opposite of its supposed mission. And now it's about to get worse.

44. tangju+l6[view] [source] 2023-11-20 03:08:51
>>maxuti+(OP)
https://archive.is/Vqjpr
63. bobthe+Q7[view] [source] [discussion] 2023-11-20 03:23:12
>>space_+A6
If anything it’ll be a subtle bug that wipes us out.

The 2003 Northeast blackout that affected 50 million people was partially caused by a race condition. https://www.theregister.com/2004/04/08/blackout_bug_report/
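
For anyone unfamiliar with the term, here's a minimal sketch of a race condition in Python. It's purely illustrative: the counter/worker names are made up and have nothing to do with the blackout's actual alarm software.

    import threading
    import time

    counter = 0  # shared state, updated by several threads with no lock

    def worker(n):
        global counter
        for _ in range(n):
            current = counter       # read the shared value
            time.sleep(0)           # yield, widening the window for interleaving
            counter = current + 1   # write back -- may clobber another thread's update

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 40000; the actual total is usually far lower because updates are lost.
    print(counter)

Run it a few times: the final count is almost never 40,000, because the read-modify-write isn't atomic and the bug only shows up under particular timings, which is exactly what makes this class of bug so subtle.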

83. leobg+U11[view] [source] [discussion] 2023-11-20 09:27:59
>>rmorey+l3
Well, I guess OpenAI always had a special kind of humor.

Brockman had a robot as ring bearer at his wedding. And instead of asking how your colleagues were doing, they would ask “What is your life a function of?”. This was 2020.

https://www.theatlantic.com/technology/archive/2023/11/sam-a...

98. chubot+5f4[view] [source] [discussion] 2023-11-21 01:42:07
>>simonw+o7
This is also a good 2020 article on OpenAI, by the same author, Karen Hao:

The messy, secretive reality behind OpenAI’s bid to save the world

https://www.technologyreview.com/2020/02/17/844721/ai-openai...

The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.

Only 4 comments at the time: >>22351341

More comments on Reddit: https://old.reddit.com/r/MachineLearning/comments/f5immz/d_t...

104. dekhn+Ei4[view] [source] [discussion] 2023-11-21 02:02:32
>>tkgall+Vg
John Varley was inspired by Heinlein and ended up writing a whole collection of books about a post-Earth solar system where every planet had a planet-wide intelligence (among other Heinlein-inspired ideas).

The series (basically everything listed at https://en.wikipedia.org/wiki/Eight_Worlds) is pretty dated, but Varley definitely managed to include some ahead-of-his-time ideas. I really liked The Ophiuchi Hotline and Equinoctial.

117. breck+Qt4[view] [source] [discussion] 2023-11-21 03:17:06
>>leobg+B31
Interesting

https://www.youtube.com/watch?v=13CZPWmke6A&t=5206s

118. tgsovl+Lu4[view] [source] 2023-11-21 03:22:32
>>maxuti+(OP)
Looking at this article, the following theory would align with what I've seen so far:

* Ilya Sutskever is concerned about the company moving too fast (without taking safety into account) under Sam Altman.

* The others on the board that ended up supporting the firing are concerned about the same.

* Ilya supports the firing because he wants the company to move slower.

* The majority of the people working on AI don't want to slow down, either because they want to develop as fast as possible or because they're worried about missing out on profit.

* Sam rallies the "move fast" faction and says "this board will slow us down horribly, let's move fast under Microsoft."

* Ilya realizes that the practical outcome will be more speed and less safety, not the added safety he had hoped for, leading to the regret tweet (https://nitter.net/ilyasut/status/1726590052392956028)
