Ilya losing access to the GPUs he needed for his research so that the company could serve a few more customers seemed like a fundamental betrayal to him, and a sign that Sam was ignoring safety in order to grow market share.
If Elon is able to promise him the resources he needs to do his research then I think it could work out.
Who on earth would ever trust an Elon promise at this point? The guy literally can’t open his mouth without making a promise he can’t keep.
Unless Ilya is getting something in a bulletproof contract and is willing to spend a decade fighting for it in court, he’s an idiot doing anything with Elon.
It's why he fell out with the others and left OpenAI, despite investing $100 million to start it.
I'd say he's well aligned with Ilya's position. Early on I wondered if he was an instigator of the entire board coup.
He's pretty bad at honoring contracts, too.
and also Bitcoin might be the exception that proves the rule - every other chain or token is managed by a few insiders taking get-rich-quick marks for a ride.
> And Musk proposed a possible solution: He would take control of OpenAI and run it himself.
In the case of Tesla, to accelerate the development of electric cars; in the case of Twitter, to reduce the probability of civil war; and in the case of SpaceX, to eventually have humanity (or our descendants) spread out enough that a single catastrophic event (like a meteor, gray goo, or similar) doesn't wipe us out all at once.
His detractors will obviously question both his motives and his methods, but if we imagine he's acting in good faith (whether or not he's wrong), his approach to AI fits the pattern, including his story about why he helped start OpenAI in the first place.
For someone with an x-risk approach to AI safety, the first concern is, to quote Ilya from the recent Alignment Workshop: "As a bare minimum, let's make it so that if the tech does 'bad things', it's because of its operators, rather than due to some unexpected behavior".
In other words, for someone concerned with existential risk, even intentional "bad uses", such as deploying AI-powered killer robots at large scale in war, or a dictator using AI to suppress a population, are secondary concerns.
And it appears to me that Elon and Ilya both have this outlook, while Sam may be more concerned with shorter-term social impacts.