It's pretty basic that when you fire someone abruptly they _do not_ get to come back into the damn office.
The communication was bad (a sudden Friday message about not being candid), but he never says the reason itself was bad.
"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."
He knows the reason, it's not safety, but he's not allowed to say what it is.
Given that, I think that the reason may not be petty, though it's still unclear what it is. It's interesting that he thinks it will take more than a month to figure things out, needing an investigator and interviews with many people. It sounds like perhaps there is a core dysfunction in the company that is part of the reason for the ouster.
I must say though, going by his tweets, Andrej Karpathy isn't all that impressed with the board. So, that's there too.
But he just got the job and I'm sure many people are on PTO/leave for holidays. Give the guy some time. (And this is coming from someone who is pretty bearish on OpenAI going forward, just think it's fair to Shear)
Can you explain what is meant by the word safety?
Many are mentioning this term, but it's not clear what its specific definition is in this context. And what would someone get fired over relating to it?
What could the reason be that would justify this kind of wait?
I'll point out that Sam also doesn't seem to want to state the reason (possibly he's legally forbidden?). And the people following him out of OpenAI don't know it either; they're simply trusting him enough to leave without knowing.
In this specific conversation, one of the proposed scenarios is that Ilya Sutskever wanted to focus OpenAI more on AI safety, possibly at the expense of fast progress toward intelligence and of commercialization, while Sam Altman wanted to prioritize those two over what he saw as excessive safety concerns. The new CEO is stating that this is not the core reason behind the board's decision.
"AI: I am sorry, I can't provide this information."
The best explanation I've seen is that Ilya is ok with commercializing the models themselves to fund AGI research but that the announcement of an app store for Laundry Buddy type "GPTs" at Dev Day was a bridge too far.
AI: I'm sorry, due to the ongoing conflict we currently don't provide information related to Russia. (You have been docked one social point for use of the following forbidden word: "White".)
Or maybe more dystopian…
AI: Our file on you suggests you may have recently become pregnant, so we cannot provide you with information on alcohol products. CPS has been notified of your query.
If you are a customer, arrange to use alternative services. (It's always a good idea not to count on a single flaky vendor with a habit of outages and firing CEOs.)
If you are just eating popcorn, me too, pass the bowl.
It's mainly about who is allowed to control what other people can do, i.e. power.