All of the engineers, Sam, and Greg are probably entirely reasonable. If you really wanted to ensure safety the way it always has been, you could express your concerns and get basically what you wanted.
They would foot the bill: https://openai.com/blog/introducing-superalignment
If you disagreed about what would lead to AGI, LLMs alone versus more components, you could just let it play out. Just as the transformer turned out to be the light at the end of the tunnel that OpenAI pivoted to, the researchers will find whatever makes the AI more intelligent over time.
Only if you wanted to stop AI development entirely would a move like this make sense. But that is an unlikely goal if you are a researcher; you want to keep researching. More plausibly, you would only do this if you wanted to stop OpenAI's AI specifically.
At the end of the day, the board probably had a conflict of interest and no real safety concerns. Power grab 101.