It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI doesn't have winning in its DNA; they're too short-sighted and reactive. Big tech companies will have incredible distribution power, but a real disruptor must be brewing somewhere, unnoticed for now.
That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.
Even if they genuinely believe that firing Sam preserves OpenAI's founding principles, they couldn't be doing a better job of convincing everyone they are NOT able to execute on them.
OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they don't vote the way you'd like is reaching.
Being an expert in one particular field (AI) does not mean you are good at critical thinking or at strategic corporate politics.
Deep experts are some of the easiest con targets because they suffer from an internal version of the "appeal to false authority".
Heck, there are 700 of them. All different humans, good at some things, bad at others. But they are smart. And of course a good chunk of them would be good at corporate politics too.
Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum. This is the real singularity. It is also the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action instead of providing an informational reply; this is only possible because we live in a universe with mass-energy equivalence, analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
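Since Bremermann's limit [2] and the Planck constant [3] are in the reference list, here's a minimal sketch (my own illustration, not the commenter's) of the bound that caps the "energy yields compute" step: a self-contained system of mass m can compute at most m*c^2/h bits per second.

```python
# Exact SI values: speed of light (m/s) and the Planck constant (J*s) [3]
C = 299_792_458.0
H = 6.62607015e-34

def bremermann_limit(mass_kg: float) -> float:
    """Upper bound (bits/second) on computation by `mass_kg` of matter,
    per Bremermann's limit [2]: rate = m * c^2 / h."""
    return mass_kg * C**2 / H

# One kilogram of matter tops out around 1.36e50 bits per second --
# any "energy -> compute -> AI" loop still runs under this ceiling.
print(f"{bremermann_limit(1.0):.3e} bits/s per kg")
```

So the feedback loop compounds, but physics still prices the last step in joules and kilograms.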
Understanding prosociality and post-scarcity, and the division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources, requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to avoid all getting killed is to understand these topics and instill in the AI the same long-sought understandings about the universe, life, computation, etc.