OpenAI is one of many AI companies. A board coup that sacrifices one company's value over a few individuals' perception of the common good is reckless and speaks to delusions of grandeur.
Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.
Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.
And that's assuming an AI threat to humanity is even actionable today. That's a heavy decision for elected representatives, not corporate boards.
Their entire alignment effort is focused on avoiding the following existential threats:
1. saying bad words
2. hurting feelings
3. giving legal or medical advice
And even there, all they're doing is censoring the interface layer, not the model itself.
Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.
I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.