Still though, this isn't something that will just go away with Sam back. OAI will undergo serious changes now that Sam has shown himself to be irreplaceable. Time will tell, but in the long term I doubt we'll see OAI become one of the megacorps like Facebook or Uber. They've lost people's trust.
This whole thing started with Altman pushing a safety-oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 GPT-3/ChatGPT for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was:
- Altman tries to push out another board member
- That board member escalates by pushing Altman out (and Brockman off the board)
- Altman's side escalates by saying they'll nuke the company
Altman's side won, but how can we say that his side didn't cause any of this instability?
See this article for all that context (>>38341399), because it sure didn't start with the paper you referred to either.
It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.
She's gone now, and Altman remains, largely because she didn't know how to pick up a phone and interact with another human being. Who knows, she might even have succeeded at her stated goal of protecting AI had she done even the most basic amount of problem-solving first. She should never have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.
She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.
"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.