I'd say yes, Sutskever is... naive? though very smart. Or just utopian. Seems he couldn't get the scale he needed/wanted out of a university (or Google) research lab. But the former at least would have bounded things better in the way he would have preferred, from an ethics POV.
Jumping into bed with Musk and Altman and hoping for ethical non-profit "betterment of humanity" behaviour is laughable. Getting access to capital was obviously tempting, but ...
As for Altman. No, he's not naive. Amoral, and likely proud of it. JFC ... Worldcoin... I can't even...
I don't want either of these people in charge of the future, frankly.
It does point to the general lack of funding for R&D of this type. Or it's still too early to be doing this kind of thing at scale. I dunno.
Bleak.
Microsoft in particular laid off 10,000 and then immediately turned around and invested billions more in OpenAI: https://www.sdxcentral.com/articles/news/microsoft-bets-bill... -- last fall, just as the timeline laid out in the Atlantic article was firing up.
In that context this timeline is even more nauseating. Not only did OpenAI push ChatGPT at the expense of their own mission and their employees' well-being, they likely caused massive harm to our employment sector and the well-being of tens of thousands of software engineers in the industry at large.
Maybe those layoffs would have happened anyway, but the way this all has rolled out, and the way it's played out in the press and in the boardrooms of the BigTech corporations... OpenAI is literally accomplishing the opposite of its supposed mission. And now it's about to get worse.
The 2003 Northeast blackout that affected 50 million people was partially caused by a race condition. https://www.theregister.com/2004/04/08/blackout_bug_report/
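For anyone unfamiliar with the term: the blackout bug was in GE's XA/21 alarm system, and the details aren't public beyond that article, but the classic lost-update race it describes is easy to sketch. This is a toy Python illustration (not the actual XA/21 code), with a deliberately widened race window so the bad interleaving happens reliably:

```python
import threading
import time

counter = 0
safe_counter = 0
lock = threading.Lock()

def unsafe_increment():
    """Read-modify-write with no synchronization: a lost-update race."""
    global counter
    value = counter          # both threads read the same old value...
    time.sleep(0.01)         # widen the race window so the overlap is reliable
    counter = value + 1      # ...and both write back, losing one update

def safe_increment():
    """Same logic, but the lock makes the read-modify-write atomic."""
    global safe_counter
    with lock:
        value = safe_counter
        time.sleep(0.01)
        safe_counter = value + 1

for target in (unsafe_increment, safe_increment):
    threads = [threading.Thread(target=target) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

print(counter)       # 1 -- two increments ran, but one update was lost
print(safe_counter)  # 2 -- the lock serializes the critical section
```

Same shape of bug, wildly different blast radius: here it's an off-by-one counter, in the XA/21 it silently stalled the alarm queue so operators never saw the cascading line failures.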
Brockman had a robot as ring bearer at his wedding. And instead of asking how your colleagues are doing, they would have asked "What is your life a function of?". This was 2020.
https://www.theatlantic.com/technology/archive/2023/11/sam-a...
The messy, secretive reality behind OpenAI’s bid to save the world
https://www.technologyreview.com/2020/02/17/844721/ai-openai...
The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.
Only 4 comments at the time: >>22351341
More comments on Reddit: https://old.reddit.com/r/MachineLearning/comments/f5immz/d_t...
The series (basically everything in https://en.wikipedia.org/wiki/Eight_Worlds) is pretty dated, but Varley definitely managed to include some ahead-of-his-time ideas. I really liked The Ophiuchi Hotline and Equinoctial.
* Ilya Sutskever is concerned about the company moving too fast (without taking safety into account) under Sam Altman.
* The others on the board that ended up supporting the firing are concerned about the same.
* Ilya supports the firing because he wants the company to move slower.
* The majority of the people working on AI don't want to slow down, either because they want to develop as fast as possible or because they're worried about missing out on profit.
* Sam rallies the "move fast" faction and says, in effect, "this board will slow us down horribly, let's move fast under Microsoft."
* Ilya realizes that the practical outcome will be more speed/less safety, not more safety, as he hoped, leading to the regret tweet (https://nitter.net/ilyasut/status/1726590052392956028)