I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.
I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.
EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662
Why? We'd have more diversity in this space if he leaves: we'd get another AI startup with huge funding and know-how from OpenAI, while OpenAI itself would become less Sam Altman-like.
I think him staying is bad for the field overall compared to OpenAI splitting in two.
What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?
I've seen nothing to suggest they aren't "being safe". If anything, ChatGPT has become known for censoring users "for their own good" [0].
The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.
And that's it.
Which is that the AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, nor commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.
And you may think of it as a weasel word. But I assure you that companies and governments, e.g. the EU, very much don't.
The bigger concern is something like the Paperclip Maximizer. Alignment is about ensuring that a superintelligence has the right goals.