> In their letter, the OpenAI staff threaten to join Altman at Microsoft. "Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.
The irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft has essentially killed the $13B investment it made in OpenAI earlier this year. Better than acquiring for $80B+, I suppose.
Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.
Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have made it really clear that they want to be the primary commercial beneficiary of OAI's work.
OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.
As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.
(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).
While Activision presumably makes much more money, acquiring a whole division of productive, _loyal_ staffers who work well together on something as important as AI is cheap at $13B.
Some background: https://sl.bing.net/dEMu3xBWZDE
From our outsider, uninformed perspective, yes. But if you know more, sometimes these things become completely plannable.
I'm not saying this is the actual explanation, because it probably isn't. But suppose OpenAI was facing bankruptcy, and they weren't telling anyone and nobody external knew. That would let the people in the know plan for various contingencies in more detail, because they could exclude a lot of possibilities from their planning, meaning the situation is simpler for them than meets the (external) eye.
Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.
(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)
Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.
If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.
For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.
More likely, the headline-grabbing $10B-to-$13B number is a total estimated figure representing a sum of future incremental investments (and Azure usage credits, etc.) tied to agreed performance milestones from OpenAI.
So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
How far along were they on GPT-5?
Didn't follow this closely, but isn't that implicitly what an ex-CEO could have been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.
Surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?
Pick a different target and move on.