- The board is mostly independent, and the independent members don't hold equity
- They say he was "not consistently candid" - this is legalese for “lying”
The only thing that could warrant something like this is Sam going behind the board's back to make a decision (or make progress on one) that is misaligned with the Charter. That's the only fireable offense that fits this language.
My bet: Sam initiated some commercial agreement (like a sale) that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.
Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from a PR disaster, probably something that would make Sam persona non grata in any business context.
It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture" [1], and we know where that leads: a net [long-term] loss for society.
[1] >>35960125
Thank you. I don't see this expressed enough.
A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.
There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.
The basic idea is that AI is the opposite of software in this respect: if you publish a model with scary capabilities, you can't undo that action. With FOSS, more eyes mean more bugs found, and then everyone upgrades to a more secure version; released weights can't be patched or recalled.
If OpenAI publishes GPT-5 weights, and it later turns out that a certain prompt structure unlocks capability gains toward misaligned AGI, you can't put that genie back in the bottle.
And indeed, if you listen to Sam talk (e.g. on Lex's podcast), this is the reasoning he uses.
Sure, there are plenty of reasons this could be a smokescreen, but I wanted to push back on the idea that the position itself is somehow incompatible with idealism.
How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?
Your assumption about AGI is that it wants to kill us, and - given how dependent it would be on all that human-maintained infrastructure - itself: its misalignment is a murder-suicide pact.
The danger exists with regular software too, but the logical, deterministic nature of traditional software makes its behavior provable.
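To make "provable" concrete, here's a minimal Lean 4 sketch (my own toy example, not anything from the thread): a deterministic function whose key property is machine-checked for every possible input, rather than sampled by testing.

```lean
-- A tiny deterministic function: output fully determined by input.
def double (n : Nat) : Nat := n + n

-- A machine-checked proof that the property holds for *every* input,
-- not just the ones we happened to test. `omega` is Lean's built-in
-- decision procedure for linear arithmetic over Nat/Int.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

Nothing like this exists for a blob of learned weights: there's no spec to unfold, so the only evidence of safety is empirical.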
Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.