To explain, it's the board of the non-profit that ousted @sama.
Microsoft is not a member of the non-profit.
Microsoft is "only" a shareholder of its for-profit subsidiary - even for 10B.
Basically, what happened is a change of control at the non-profit that is the majority shareholder of the company Microsoft invested in.
But not a change of control at the for-profit company itself.
To tell the truth, I am not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all - it is an internal matter, and doing so would be a conflict of interest.
Microsoft is not happy with that change of control, and they prefer the previous representative of their partner.
Basically, Microsoft wants its non-profit partner to prioritize Microsoft's interests over its own.
And to do that, they are trying to interfere with its governance, even threatening it with disruption, lawsuits and such.
This sounds highly unethical and potentially illegal to me.
How come no one is pointing that out?
Also, how come a $90 billion company hailed as the future of computing and a major transformative force for society would now be valued at zero dollars just because its non-technical founder is out?
What does it say about the seriousness of it all?
But of course, that's Silicon Valley baby.
They are likely valued a lot less than $80 billion now.
OpenAI had the largest multiple of any recent startup - more than 100x their revenue.
That multiple is a lot smaller now without SamA.
Honestly the market needs a correction.
SamA could try to start his own copy of OpenAI and, I have no doubt, raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time it catches up, OpenAI and its competitors will have already moved on to bigger and better things.
Enough with the hero worship for SamA and all the other salesmen.
The issue isn't SamA per se. It's that the old valuation assumed the company was trying to make money. The new valuation takes into account that it might instead be captured by a group with some sci-fi notion about saving the world from an existential threat.
Their entire alignment effort is focused on avoiding the following existential threats:
1. saying bad words
2. hurting feelings
3. giving legal or medical advice
And even there, all they're doing is censoring the interface layer, not the model itself.
Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.
I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.