I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that "the perfect is the enemy of the good."
That is, the board should have realistically known that there would be a huge arms race for AI dominance. Some would say that's capitalism - I say it's just human nature. So the OpenAI board was in a unique position to help guide AI advancement in as safe a manner as possible, because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there were a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the AI safety side within OpenAI (as was widely reported) will be even more diminished and sidelined. It really feels like this was a colossally sad own goal.
If the board were to have any influence, they had to be able to do this. Whether this was the right time and the right issue to play their trump card, I don't know - we still don't know exactly what happened - but I have a lot more respect for a group willing to take their shot than for one so worried about losing its influence that it can never use it.
Whether it was "smearing" or uncovering actual wrongdoing depends on the facts of the matter, which will hopefully emerge in due course. A board should absolutely be able and willing to fire the CEO, oust the chairman, and jeopardize supplier relationships if the circumstances warrant it. They're the board; that's what they're for!