Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:
‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, “Holy shit.”
The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’
Scoop: theinformation.com
And often like an individual contributor: "the feeling when you finally localize a bug to a small section of code, and know it's only a matter of time till you've squashed it"
https://twitter.com/gdb/status/1725373059740082475
"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."
https://time.com/collection/time100-ai/6309033/greg-brockman...
https://www.youtube.com/watch?v=Ft0gTO2K85A
No clear clues about today’s drama, at least as far as I could tell, but still an interesting listen.
(pleb who would invest [1], no other association)
[1] >>35306929
Ilya is a co-founder of OpenAI, the Chief Scientist, and one of the best-known AI researchers in the field. He has also been touring with Sam Altman at public events and has been getting highlights such as this one recently:
> I feel compelled as someone close to the situation to share additional context about Sam and company.
> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
---
The entire Reddit thread is full of interesting posts from this apparently legitimate pseudonymous OpenAI insider talking candidly.
https://arxiv.org/abs/2308.03762
If it were really AGI, there wouldn't even be ambiguity or room for comments like mine.
https://chat.openai.com/share/986f55d2-8a46-4b16-974f-840cb0...
> "I'm not at liberty to say, but I'm very close. I don't want to give too many details."
As you say, Altman has been on a world tour, but he's effectively paying lip service to the need for safety when the primary outcome of his tour has been to cozy up to powerful actors, and push not just product, but further investment and future profit.
I don't think Sutskever was primarily motivated by AI safety in this decision, as he says this "was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." [1]
To me this indicates that Sutskever felt that Sam's strategy was opposed to the original mission of the nonprofit, and likely to benefit powerful actors rather than all of humanity.
1. https://twitter.com/GaryMarcus/status/1725707548106580255
When did Microsoft’s stock price tank?
https://x.com/maggienyt/status/1578074773174771712?s=46&t=k_...
I know it's convenient to dunk on journalism these days, but this is Kara Fucking Swisher. Her entire reputation is on the line if she gets these little things wrong. And she has a hell of a reputation.
https://twitter.com/karaswisher/status/1725678074333635028?t...
Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...
Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...
Ilya felt that Sam was taking the company too far in the direction of profit-seeking, further than was necessary just to get the resources to build AGI. Every bit of selling out puts more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small, powerful group instead of everyone.

After OpenAI Dev Day, evidently the board agreed with him - I suspect Dev Day is the source of the board's accusation that Sam was not completely candid. Ilya may also care more about AGI safety specifically than Sam does - that's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the mission of the non-profit, safe AGI that benefits all of humanity, that the board was compelled to act to preserve that mission. Expelling him and re-affirming their commitment to the OpenAI charter is effectively an accusation that he sold out.
For context, you can read their charter here: https://openai.com/charter and mentally contrast that with the atmosphere of Sam Altman on Dev Day. Particularly this part of their charter: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
Full details: https://x.com/KordanOu/status/1725736058233749559?s=20
Bloomberg: "OpenAI CEO’s Ouster Followed Debates Between Altman, Board"
It wasn't Elop who drove Nokia to the state it was in by 2009. The "Burning Platform" memo is from 2011.
"GPT 4 is clearly AGI. All of the GPTs have shown general intelligence, but GPT 4 is human-level intelligence. "