Would you trust someone who doesn't believe in responsible governance for themselves to apply responsible governance elsewhere?
The real teams here seem to be:
"Team Board That Does Whatever Altman Wants"
"Team Board Provides Independent Oversight"
With this much money on the table, independent oversight is difficult, but at least they're making the effort.
The idea that this was immediately about AI safety vs. go-fast (or Microsoft vs. non-Microsoft control) is bullshit -- this was about how strong board oversight of Altman should be in the future.
They haven't really said anything about why it was necessary, and according to Business Insider[0] (the only reporting I've seen that says anything concrete), the reasons given were:
> One explanation was that Altman was said to have given two people at OpenAI the same project.
> The other was that Altman was said to have given two board members different opinions about a member of personnel.
Firing the CEO of a company while only being able to articulate two (in my opinion) weak examples of why, and causing >95% of your employees to say they will quit unless you resign, does not seem responsible.
If they can articulate reasons why it was necessary, sure, but we haven't seen that yet.
[0] https://www.businessinsider.com/openais-employees-given-expl...
When 95% of your staff threatens to resign and says "you have made a mistake", that's when it's time to say "no, here are the very good reasons we did it". That didn't happen.
Microsoft can and will be using GPT-4 as soon as they get a handle on it, provided it doesn't boil their servers to do so. If you want deceleration, you would need someone with an incentive that didn't involve, for example, being first to market with flashy new products.