To explain, it's the board of the non-profit that ousted @sama .
Microsoft is not a member of the non-profit.
Microsoft is "only" a shareholder of its for-profit subsidiary, even at $10B.
Basically, what happened is a change of control in the non-profit majority shareholder of a company Microsoft invested in.
But not a change of control in the for-profit company they invested in.
To tell the truth, I'm not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all; it's an internal matter, and doing so would be a conflict of interest.
Microsoft is not happy with that change of control; they favoured the previous representative of their partner.
Basically, Microsoft wants its non-profit co-shareholder partner to prioritize Microsoft's interests over its own.
And to do that, they are trying to interfere with its governance, even threatening it with disruption, lawsuits and such.
This sounds highly unethical and potentially illegal to me.
How come no one is pointing that out?
Also, how come a 90-billion-dollar company hailed as the future of computing and a major transformative force for society would now be valued at zero dollars just because its non-technical founder is out?
What does it say about the seriousness of it all?
But of course, that's Silicon Valley baby.
Please think about this. Sam Altman is the face of OpenAI and was doing a very good job leading it. If his relationships are what kept OpenAI on top, and the board removed that from the company, corporations may be more hesitant to do business with them in the future.
They are likely valued a lot less than 80 billion now.
OpenAI had the largest multiple of any recent startup: more than 100x revenue.
That multiple is a lot smaller now without SamA.
Honestly the market needs a correction.
It's not like every successful org needs a face. Back then Google was wildly successful as an org, but unlike Steve Jobs at the time, people barely knew Eric Schmidt. Even with Microsoft as it stands today, Satya is mostly a backseat driver.
Every org has its own style and character. If the board doesn't like what they are building, they can try to change it. A risky move nevertheless, but it's their call to make.
While OpenAI would have the IP, they would also need to retain the right people who understand the system.
The non-profit board acted entirely against the interest of OpenAI at large. Disclosing an intention to terminate the highest-profile member of their company to the company paying for their compute, Microsoft, is not only the ethical choice, it's the responsible one.
Members of the non-profit board acted recklessly and irresponsibly. They'll be paying for that choice for decades to come, as they should. They're lucky if they don't get hit with a defamation lawsuit on their way out.
Given how poorly Mozilla's non-profit board has steered Mozilla over the last decade, and now this childish tantrum by a man raised on the fanfiction of Yudkowsky together with board larpers, I wouldn't be surprised if this snafu sees the end of this type of governance structure in tech. These board members have absolutely no business being in business.
And if that corporate structure does not suit Satya Nadella, I would say he's the one to blame for having invested $10B in the first place.
Being angry at a decision he had no right to be consulted on does not entitle him to meddle in the governance of their co-shareholder.
Or then we can all accept together that corruption, greed and whateverthefuckism is the reality of ethics in the tech industry.
OpenAI might have wasted Microsoft's $10B. But whose fault is that? Microsoft's, for investing it in the first place.
SamA could try to start his own new copy of OpenAI, and I have no doubt he would raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time they reproduced OpenAI, OpenAI and its competitors would have already moved on to bigger and better things.
Enough with the hero worship for SamA and all the other salesmen.
This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]
If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.
The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.
I will say that using falsehoods as an attack doesn't put the rest of the commenter's points into particularly good light.
We don’t know what was said, or what was signed. Putting the blame on Microsoft is premature.
This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan. But if Chinese ships actually started heading to Taipei, the price would still drop like a rock. Before, the risk was only potential.
While I'm not privy to the contracts that were signed, what happens if Nadella sends a note to the OpenAI board that reads, roughly, "Bring back Altman or I'm gonna turn the lights off"?
Nadella is probably fairly pissed off to begin with. I can't imagine he appreciates being blindsided like this.
In other words, MS has the losing hand here, and its CEO is bluffing.
I don't see why. As I understand it, a significant percentage of Microsoft's investment went into the hardware they're providing. It's not like that hardware and associated infrastructure are going to disappear if they kick OpenAI off it. They can rent it to someone else. Heck, given the tight GPU supply situation, they might even be able to sell it at a profit.
OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.
Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.
Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.
That's assuming an AI threat to humanity is even actionable today. Either way, it's a heavy decision for elected representatives, not corporate boards.
Their entire alignment effort is focused on avoiding the following existential threats:
1. saying bad words
2. hurting feelings
3. giving legal or medical advice
And even there, all they're doing is censoring the interface layer, not the model itself.
Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.
I think the best we can hope for from OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and Asian cis men first, because equity is a core "safety" value of OpenAI.
[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...