Ilya losing access to the GPUs he needs to do his research so that the company can serve a few more customers seemed like a fundamental betrayal to him, and a sign that Sam was ignoring safety in order to grow market share.
If Elon is able to promise him the resources he needs to do his research then I think it could work out.
All of them would have left if Sam left; if anything, letting Sam go would hamstring OpenAI far more than letting Ilya go.
I can’t wait to read the autobiography of involved parties.
Who on earth would ever trust an Elon promise at this point? The guy literally can’t open his mouth without making a promise he can’t keep.
Unless Ilya is getting something in a bulletproof contract and is willing to spend a decade fighting for it in court, he'd be an idiot to do anything with Elon.
Ilya may still be someone who should be on the board... Especially given his role as head of alignment research. He deserves a say on key issues related to OpenAI.
People get excited. Stupid things happen. Especially in startups.
ChatGPT having become so successful doesn't change the fact that the company as a whole is still fairly immature.
They should seriously just laugh about it and move on.
Let's just say that Ilya had a bad couple of days, and probably needs a couple of weeks of vacation.
I can absolutely empathize with Ilya here, though. As far as I know, the tech making OpenAI function is largely his life's work. It would be extremely frustrating to have Sam be the face of it and be given the credit for it.
Sam is clearly a very accomplished businessman and networker. Those people are super important, I wish I had a person like him on my team.
I’ve had the experience of other people tacitly taking credit for my work. Giving talks about it, receiving praise for their vision. It’s incredibly demoralizing.
I’m not necessarily saying Sam did this, since I don’t know any of these people. Just speculating on how it might feel to be Ilya, watching Sam go on a world tour meeting heads of state to talk about what is largely Ilya’s work.
It's probably more of an intellectual / philosophical position, given that they just did not think through the real impact on the business (and thus the mission itself).
I'm inclined to assume that something stupid was done. It happens. They should resolve it, fix the rules for how the board can behave, and move on.
Despite the bungling, Ilya is probably still a good voice to have on the board. His key responsibility (super-alignment), is a key part of OpenAI's mission.
Plus, the other board members supported him, so there's decent blame to go around for this embarrassment.
It's why he fell out and left OpenAI despite investing $100 million to start it.
I'd say he's well aligned with Ilya's position. Early on I wondered if he was an instigator of the entire board coup.
I also wouldn’t want Ilya in there without checks and balances, to be clear. So the challenge is identifying the right adults.
I don’t think it’s realistic to expect that negotiation to complete successfully in the eyes of all parties by 5 PM today. It’s possible that Ilya will give up on having his requirements satisfied and leave.
Sam is backed by investors who are looking for returns, and are not sure if Ilya will get them the same juicy 100X.
So, if Sam comes back, then I’m pretty sure Ilya will go out on his own. Whether he will focus on GPT or AGI or something else is anyone’s guess, as is how many from OpenAI will follow him, since everyone loves money.
EDIT: Ilya should have no trouble finding benefactors of his own, whether they are one of the FAANGs or VCs is TBD.
He's pretty bad at honoring contracts, too.
Giving birth to an idea is a necessary condition and sets the boundaries for so much of what it can achieve. But if you're unable to raise it to become a world champion, it isn't worth anything.
I've been on the raising-ideas side far more often in my 20+ year career in tech. I know some people became bitter and scornful of me because I pushed their ideas to become something big and received a lot of credit for that. And I try to give credit where credit is due. But often enough, when I try to share the spotlight (in front of a customer or when presenting to the BoD, for example), the brilliant engineer withers under pressure or actively harms his own idea by pointing out its flaws excessively. It's a delicate balance.
"Just speculating on how it might feel to Ilya watching Sam go on a world tour meeting heads of state to talk about what is largely Ilya’s work."
The whole point of a CEO is to do this kind of stuff. If your best engineers are going on world tours, talking to politicians, and preparing for keynotes, that's a pretty terrible use of their time. Not to mention that most of them would hate doing it.
and also Bitcoin might be the exception that proves the rule - every other chain or token is managed by a few insiders taking get-rich-quick marks for a ride.
Plenty of precedent for tech founders to have total board control. It will take a little while for Sam to consolidate power, but he won't forget what happened this weekend and he'll play the long game accordingly.
As an example, a couple of years ago Crisis Text Line decided to sell data to a for-profit spin-off. Their justification was that the data was anonymized, which was BS since it's unstructured text data, and that it wasn't against the terms of service, which users had agreed to. Mind you, these users were people in crisis, maybe even on the brink of suicide. This was highly unethical and caused a backlash. Then one of the board members wrote a half-assed “reflection” post [1]. If some core employees of CTL had done a “coup” to stop this decision, because they believed it was unethical and dangerous, wouldn't it have been justified?
[1] http://www.zephoria.org/thoughts/archives/2022/01/31/crisis-...
Sama also went on Lex Fridman's podcast and got over 5M views. The title was: OpenAI CEO on ChatGPT, GPT-4, and the future of AI.
Done.
Any actual AI takeover will be boring and largely voluntary. For certain definitions of voluntary.
So yeah, Ilya is a very known entity. No, ordinary folks don't need to know him, but if you are in IT and especially if you have anything to do with AI, then not knowing about Ilya tells more about your informational bubble than about Ilya's alleged lack of recognition.
It is akin to claiming to be into crypto on development side and not knowing the name of Vitalik Buterin.
It's like imagining a guy who has a nice idea to cure cancer but plays the princess with it and refuses to industrialize it while people are dying left and right. Surely that becomes indefensible, and at some point someone brave will do the right thing and implement the idea. You have a right to reap the benefit of your ideas, but you have a duty not to deprive humanity of any benefit just because you thought of it first, I feel?
My favorite was Rainbow MosAIc, a Rashomon-style film taking place mainly from Friday to Monday. It played with all the different potential motivations and theories, and it used a half-decent metaphor, representing the different points of view via the different video-conferencing cameras.
Even the recent OpenAI profile in one of the prominent publications covered Mira, Ilya, and gdb in addition to Sam.
But the fundamental question is why a researcher would expect (if they do) that they will be as well known as the CEO who is the face of the organisation?
How is that guaranteed? If investors remove him from the board of directors, he may get pissed off and quit, no?
> In addition, no one is irreplaceable.
In theory, maybe. In practice, it is not always easy. Nearly a year after ChatGPT came out Google hasn't been able to catch up. If it was easy to replace Ilya after he left Google, they would have caught up by now.
If it weren't for the mentality you are rallying against we wouldn't have ChatGPT. Google, Meta, everyone had these LLMs sitting around. OpenAI was the only company with the balls to release it to the public.
The communication was certainly very poor, and we don't know if the reasons were good, but I don't understand the speed complaint.
Investment is only partially about trust. I agree Sam's a pretty investable guy. I expect Sam to pursue growth through fundraising, product commercialization, corporate partnerships, etc., in exactly the YC mode. He's also clearly OK with letting the momentum of that growth overwhelm the original stated aims of OpenAI, especially given what the original firing press release said about Sam not being entirely forthright. I suspect Microsoft made their investment knowing that something like this might happen. It's not trustworthy that he let for-profit momentum overwhelm the nonprofit's aims, but if you're an investor, do you care?
> And Musk proposed a possible solution: He would take control of OpenAI and run it himself.
For instance:
https://www.seattletimes.com/business/paul-allen-goes-after-...
In the case of Tesla, to accelerate the development of electric cars; in the case of Twitter, to reduce the probability of civil war; and in the case of SpaceX, to eventually have humanity (or our descendants) spread out enough that a single catastrophic event (like a meteor, gray goo, or similar) doesn't wipe us all out at once.
His detractors obviously will question both his motives and methods, but if we imagine he's acting out of good faith (whether or not he's wrong), his approach to AI fits the pattern, including his story about why he helped with the startup of OpenAI in the first place.
For someone with an x-risk approach to AI safety, the first concern is, to quote Ilya from the recent Alignment Workshop: "As a bare minimum, let's make it so that if the tech does 'bad things', it's because of its operators, rather than due to some unexpected behavior."
In other words, for someone concerned with existential risk, even intentional "bad use" such as using AI for killer robots at a large scale in war or for a dictator to use AI to suppress a population are secondary concerns.
And it appears to me that Elon and Ilya both have this outlook, while Sam may be more concerned with shorter term social impacts.