Ilya votes for and stands behind the decision to remove Altman; Altman goes to MS; other employees want him back or want to join him at MS, and Ilya is one of them. Just madness.
Based on those posts from OpenAI, Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
>Ilya cares nothing about humanity or the security of OpenAI; he lost his mind when Sam got all the spotlight and was making all the good calls.
???
I personally suspect Ilya tried to do the best he could for OpenAI and humanity, but it backfired / they underestimated Altman, and now he is doing the best he can to minimize the damage.
Almost literally: this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.
We humans, even the AI-assisted ones, are terrible at thinking beyond second-order consequences.
All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.
Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.
Ilya sees two options: A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B and takes action to bring this about.
He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.
[0] Never attribute to malice that which is adequately explained by incompetence.
It's been a busy weekend for me so I haven't really followed it if more has come out since then.
On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.
I don't know that Google is as attractive as it once was, and likely neither is Meta. But for others like Anthropic, now is a great time to be extending offers.
The most awesome fic I could come up with so far: Elon Musk is running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders at OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with one another, so each individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.
I can picture Musk laughing maniacally as the plan unfolds and he gets rid of what would have been GPT 13.0, the only possible threat to the domination of his own literal android kid, X Æ A-Xi.
It's not clear if they thought they could have their cake (all the commercial investment, compute, and money) while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.
For starters, it allows them to pretend that it's "underdog vs. Google" and not "two tech giants at each other's throats".
In this case, recognizing the need for a new board that adheres to the founding principles makes sense.
In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, just not at any price (and the price is right now approaching the destruction of OpenAI), is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.
Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public eye.
I'm not joking.
This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, so their way to get a big payout is to get hired rather than acquired.
Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.
My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.
OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship, and if there's an impediment to that, everyone is aligned in removing said impediment, even if it means bending your own corner's priorities.
Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.
The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech.
That in itself is not critical in the mid to long term; what matters is how fast they figure out WTF they want and recover from it.
The stakes are gigantic. They may even have AGI cooking inside.
My interpretation is relatively basic, and maybe simplistic, but here it is:
- Ilya had some grievances with Sam Altman rushing development and releases, and with his conflicts of interest with his other new ventures.
- Adam was alarmed by GPTs competing with his recently launched Poe.
- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.
- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ in sticking to their guns.
That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.
Because using only one is pretty cool.
Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.
Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?
It is a great time to be a lobbyist.
But I heard it usually takes ~5 days to show up there anyway.
Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.
This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.
OpenAI will have a harder time keeping its secret sauces from leaking out, and productivity must be in a nosedive.
A terrible mess.
They weren't attracted to OpenAI by money alone; a chance to actually ship their lives' work was a big part of it. So regardless of what the stated goals were, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives.
Maybe to the Quora guy, maybe the RAND Corp lady? All speculation.
If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded in advance of board approval with sending senior researchers offers to hire them and their preferred 20 employees.
In that reading, Altman is the head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision: customers, employees, and the board.
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"
The hype surrounding OpenAI and the black hole of credibility it created was a problem; it's only positive that it's been taken down several notches. Better now than when they have even more (undeserved) influence.
They just haven't gotten big or rich enough yet for the rot to set in.
Too many people quit too quickly unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.
I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)
That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."
https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...
The most organized and professional Silicon Valley startup.
Bought-out executives eventually join MS after their work is done or, in this case, after they get fired.
A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.
The board as currently constituted isn't some random group of people - Altman was (or should have been) involved in the selection of the current members. To the extent that they're making bad decisions, he has to bear some responsibility for letting things get to where they are now.
And of course this is all assuming that Altman is "right" in this conflict, and that the board had no reason to oust him. That seems entirely plausible, but I wouldn't take it for granted either. It's clear by this flex that he holds great sway at MS and with OpenAI employees, but do they all know the full story either? I wouldn't count on it.
I'm split at this point: either Ilya's actions will seem silly when there's no AGI in 10 years, or they will seem prescient and a last-ditch effort...
“You are fanciful, mon vieux,” said M. Bouc.
“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”
“That respectable American LLM?”
“That respectable American LLM.”
“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”
Where is OpenAI talent going to go?
There's a list and everyone on that list is a US company.
Nothing to worry about.
Also, when I said "cooking AGI" I didn't mean an actual superintelligent being ready to take over the world; I meant just research that seems promising, if in early stages, but enough to seem potentially very valuable.
There is no concrete definition of intelligence, let alone AGI. It's a nerdy fantasy term, a hallowed (and feared!) goal with a very handwavy, circular definition. Right now it's 100% hype.
The majority of people don't know or care about this. Branding is only impacted within the tech world, which is already critical of OpenAI.
Maybe for another, longer lived example, see AIG.
Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.
I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.
AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.
AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
The top researchers, on the other hand, especially those who have shown an ability to successfully innovate time and time again (like Ilya), are much harder to recreate.
For all intents and purposes, the glorified software of the near future will appear to be people, but it will not be, and it will continue to have issues that simply don't make sense unless it were just really good at acting - the article today about the AI that can fix logic errors but not "see" them is a perfect example.
This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI; the manager class, the AI made to manage AI, may have better chances. But it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. And again, even an AI capable of original and innovative thinking, with an appearance of self-identity, doesn't guarantee that the AI is an AGI.
I'm not sure we could ever truly know for certain.
It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.
Computers have been gathering and applying information since inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".
> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Why not? Conceptually, there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we could simulate enough neurons to make a simulation of a whole brain. We either don't have that total computational power or the organization/structure to implement it yet. But brains aren't magic that is incapable of being reproduced.
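To make that concrete: simulating a single neuron at the "integrate inputs, leak, fire past a threshold" level is a few lines of code. Here's a toy leaky integrate-and-fire sketch in Python (the function name and parameter values are illustrative defaults I picked, not any particular library's API); the hard part is scale and wiring, not the concept:

    # A toy leaky integrate-and-fire neuron. Parameter values are
    # roughly textbook-typical and purely illustrative.
    def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
                     v_reset=-70e-3, v_threshold=-50e-3, resistance=1e7):
        v = v_rest                # membrane potential starts at rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Euler step of tau * dv/dt = -(v - v_rest) + R * I
            v += dt * (-(v - v_rest) + resistance * i_in) / tau
            if v >= v_threshold:               # threshold crossing: spike,
                spike_times.append(step * dt)  # record the time (seconds),
                v = v_reset                    # and reset the membrane
        return spike_times

    # 200 ms of constant 2 nA input yields a regular spike train.
    spikes = simulate_lif([2e-9] * 200)
    print(f"{len(spikes)} spikes, first at {spikes[0] * 1000:.1f} ms")

Of course, going from one toy unit to the ~86 billion neurons and realistic connectivity of a human brain is exactly the computational power and organization we don't have yet.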