Really wonder what this is all about.
Edit: My bad for not expanding. No one knows the identity of this "Jimmy Apples", but this is the latest in a series of correct OpenAI leaks he's made over the past several months. Suffice it to say he's in the know somehow.
FWIW, radio silence from Ilya on twitter https://twitter.com/ilyasut
https://twitter.com/karaswisher/status/1725682088639119857
Nothing to do with dishonesty. That's just the official reason.
———-
I haven’t heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted a month and a half ago
https://twitter.com/ilyasut/status/1707752576077176907
The press release about candid talk with the board… It’s probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.
Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. With Sam being the consummate businessperson, they probably got into some final disagreement, Sutskever reached his tipping point, and he decided to use said leverage.
I’ve been in tech too long and have seen this play out. Don’t piss off an irreplaceable engineer or they’ll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
Edit: Maybe this is a reasonable explanation: >>38312868. The only other thing not considered is that Microsoft really enjoys having its brand on things.
https://www.economist.com/business/2006/05/11/flat-pack-acco...
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
The announcement: https://openai.com/blog/openai-announces-leadership-transiti...
The discussion: >>38309611
I am genuinely flabbergasted as to how she ended up on the board. How does this happen?
I can't even find anything about fellow board member Tasha McCauley...
Many people in AI safety are young. She has more professional experience than many leaders in the field.
> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."
> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."
[source: https://twitter.com/karaswisher/status/1725702501435941294]
Sounds like you exactly predicted it.
It is:
> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
https://openai.com/blog/openai-announces-leadership-transiti...
Dang! He left @elonmusk on read. Now that's some ego at play.
This page provides confirmation that your request has been processed: https://privacy.openai.com/policies
https://time.com/collection/time100-ai/6309033/greg-brockman...
I thought this guy was supposed to know what he's talking about? There was a paper showing that LLMs cannot generalise[0]. Anybody who's used ChatGPT can see there are imperfections.
> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.
> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.
https://x.com/sama/status/1725748751367852439
Though any fund containing MSFT must be correlated.
She is well-connected with Pentagon leaders, who trust her input. She is also one of the hardest-working of the West's analysts in her efforts to understand and connect with the Chinese side: she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI safety community.
Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...
She's also co-authored several of the most famous "survey" papers which give an overview of AI safety methods: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22h...
She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.
(Copy-pasting this comment from another thread where I posted it in response to a similar question.)
> The claim that GPT-4 can’t make B-to-A generalizations is false, and not what the authors were claiming. They were talking about these kinds of generalizations from pre- and post-training.
> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at them, you’ve successfully trained a prompt-completion "A is B" model, but not one that will readily go from "B is A". LLMs trained on "A is B" fail to learn "B is A" when the training data is split into prompt and completion pairs.
Simple fix: put prompt and completion together and compute gradients not just on the completion but on the prompt as well. Or just make sure the model trains on data going in both directions by augmenting the pre-training data. A rough sketch of both ideas is below the link.
https://andrewmayne.com/2023/11/14/is-the-reversal-curse-rea...
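To make that concrete, here's a minimal sketch of both fixes in Python. All helper names and the toy data are hypothetical; the only real convention borrowed is the -100 label value that PyTorch's CrossEntropyLoss (and the Hugging Face trainers built on it) ignores by default:

    # Sketch of the two fixes described above. Helper names are hypothetical;
    # the -100 masking value follows the common PyTorch/Hugging Face
    # convention (CrossEntropyLoss ignore_index=-100).

    def build_example(prompt_ids, completion_ids, train_on_prompt=True):
        """Build (input_ids, labels) for causal-LM fine-tuning.

        With train_on_prompt=True, the loss covers the prompt tokens too,
        so the model learns to produce the whole "A is B" statement rather
        than only completing it.
        """
        input_ids = prompt_ids + completion_ids
        if train_on_prompt:
            labels = list(input_ids)                            # loss on every token
        else:
            labels = [-100] * len(prompt_ids) + completion_ids  # completion only
        return input_ids, labels

    def augment_with_reversals(facts):
        """Second fix: emit each fact in both directions before training.

        Each fact is (entity_a, forward_template, entity_b, reverse_template),
        so both the "A is B" and "B is A" phrasings appear in the corpus.
        """
        out = []
        for a, fwd, b, rev in facts:
            out.append(fwd.format(a=a, b=b))   # "A is B" direction
            out.append(rev.format(a=a, b=b))   # "B is A" direction
        return out

    # Toy usage:
    facts = [("Mary Lee Pfeiffer", "{a} is the mother of {b}.",
              "Tom Cruise", "{b}'s mother is {a}.")]
    print(augment_with_reversals(facts))
    # ['Mary Lee Pfeiffer is the mother of Tom Cruise.',
    #  "Tom Cruise's mother is Mary Lee Pfeiffer."]

The first function is the "gradients on the prompt too" fix; the second is the "train on both directions" fix. Either way, the point is the same: if the loss never flows through the reversed phrasing, the model has no training signal from which to learn it.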
https://www.theverge.com/2017/10/19/16503076/oracle-vs-googl...