The board was Altman's boss - that's pretty much a board's only job. Altman knew this and most likely ignored any questions or concerns of theirs, thinking he was an unfireable superstar.
Imagine if your boss fired you - and your response was "I'll come back if you quit!" Yeah, no. People might confuse his status with that of actual CEO-shareholders like Zuck, Bezos, or Musk. But Altman is just another employee.
The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired. So mostly this just seems like an egomaniac employee who thinks he is the company (while appropriating the work of some really really smart data scientists)
The board removed the board's chairman and fired the CEO. That's why it was called a coup.
>The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired
nonprofits don't have shareholders (or shares).
Going off and starting his own thing would be great, but it would take at least a year to get a product out, even if he had all the same players building it. And that's just to catch up to current tech.
Interpretations will differ, but the YC rule of thumb that founder drama is too often a problem still holds, and this shouldn't be a surprise.
Sam and Greg were trying to stage a coup, the rest of the board got wind of it and successfully countered in time (got to them first).
What they didn't expect is that a bunch of their own technical staff would be so loyal to Sam (or at least so prone to the cult of personality). Now they're caught in a Catch-22.
Think you're missing the big picture here. Sam Altman isn't an "easily replaceable employee" especially given his fundraising skills.
Edit: nvm I missed the point was about firing the board.
Except he is not. He was a cofounder of the company and was on the board. Your metaphor doesn't make any sense -- this is like your boss firing you when you were also part of your boss, and your cofounder, who is on your side, was your boss's chair.
Not at all. Ilya and Greg are on the board. Ilya is the chief scientist; Greg resigned with Sam and supposedly works something like 80-100 hours a week.
Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
It's also not clear that this is a realistic scenario - Ilya is the real deal, and there's likely plenty of people that believe in him over Altman.
Of course, the company has also expanded massively under Altman in a more commercial environment, so there are probably quite a few people that believe in Altman over him.
I doubt either side ends up with the entire research organization. I think a very real possibility is both sides end up with less than half of what OpenAI had Friday morning.
• Employees
• Donors or whoever is paying the bills
In this case, the threat appears to be that employees will leave and the primary partners paying the bills will leave. If this means the non-profit can no longer achieve its mission, the board has failed.
I'm aware that Altman has made the same claim (close to zero equity) as you are making, and I don't see any reason why either of you would not be truthful, but it also has always just seemed very odd.
Especially considering OpenAI has boosted the value of the masses of data floating around the internet. Getting access to all that juicy data is going to come at a high cost for data hungry LLM manufacturers from here on out.
If that's the case, then the failing would be in letting it get to this point in the first place.
No wonder this is causing drama.
Pedantic, but: LLCs have "members", not "shareholders". They are similar, but not identical relations (just as LLC members are similar to, but different from, the partners in a partnership.)
Not everything is about money. He likely just likes the idea of making AI.
Sam has superior table stakes.
I think he staged his coup long ago, when he took control of OpenAI and made it "CloseAI" to make himself richer by effectively selling it to Microsoft. This is the people who believe in the original charter fighting back.
> The shareholders can fire the board, but that’s not what he’s asking for.
There are no shareholders in a non-profit, if I'm not mistaken. The board effectively answers to no one. It's a take-it-or-leave-it kind of deal. If you don't believe in OpenAI's mission as stated in their charter, don't engage with them.
That's functionally true, but more complicated. The for profit "OpenAI Global LLC" that you buy ChatGPT subscriptions and API access from and in which Microsoft has a large direct investment is majority-owned by a holding company. That holding company is itself majority owned by the nonprofit, but has some other equity owners. A different entity (OpenAI GP LLC) that is wholly owned by the nonprofit controls the holding company on behalf of the nonprofit and does the same thing for the for-profit LLC on behalf of the nonprofit (this LLC seems to me to be the oddest part of the arrangement, but I am assuming that there is some purpose in nonprofit or corporate liability law that having it in this role serves.)
https://openai.com/our-structure and particularly https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b6...
If I worked there, I would keep my job and see how things shake out. If I don’t like it, then I start looking. What I don’t do is risk my well being to take sides in a war between people way richer than me.
if they've been doing that for a while, no wonder the board wanted them gone. Eventually you cause more work than you put out.
Financial backing to make a competitor
Internal knowledge of roadmap
Media focus
Alignment with the 2nd most valuable company on the planet.
I could go on. I strongly dislike the guy, but you need to recognize table stakes even in your enemy. Or you'll be like Ilya: a naive fool who is going to get wrecked thinking that doing the "right" thing in his own mind automatically means you win.
Altman was on the board. He was not “just another employee.” Brockman was also on the board, and was removed. It was a 4 on 2 power play and the 2 removed were ambushed.
You also don’t seem to realize that this is happening in the nonprofit entity and there are no shareholders to fire the board. I thought OpenAI’s weird structure was famous (infamous?) in tech, how did you miss it?
A true believer is going to act along the axis of their beliefs even if it ultimately results in failure. That doesn't necessarily make them naive or fools - many times they will fully understand that their actions have little or no chance of success. They've just prioritized a different value than you have.
Ultimately this is good for competition and the gen-AI ecosystem, even if it's catastrophic for OpenAI.
It's a tricky situation (and this is just with a basic/possibly-incorrect understanding of what is going on). I'm sure it's much more complicated in reality.
Of course you can protest, “but in this country the constitution says that the generals can sack the president anytime they deem it necessary, so not a coup.” Yes, but it’s just a metaphor, so no one expects it to perfectly reflect reality (that’s what reality is for).
I feel we’ll know way more next week, but whatever the justifications of the board, it seems unlikely that OpenAI can succeed if the board “rules with an iron fist.” Leadership needs the support of employees and financial backers.
From my read, Ilya's goal is to not work with Sam anymore, and relatedly, to focus OpenAI on more pure AGI research without needing to answer to commercial pressures. There is every indication that he will succeed in that. It's also entirely possible that that may mean less investment from Microsoft etc, less commercial success, and a narrower reach and impact. But that's the point.
Sam's always been about having a big impact and huge commercial success, so he's probably going to form a new company that poaches some top OpenAI researchers, and aggressively go after things like commercial partnerships and AI stores. But that's also the point.
Both board members are smart enough that they will probably get what they want, they just want different things.
IMO, there are basically two justifiably rational moves here: (1) ignore the noise; accept that Sam and Greg have the soft power, but they don't have the votes so they can fuck off; (2) lean into the noise; accept that you made a mistake in firing Sam and Greg and bring them back in a show of magnanimity.
Anything in between these two options is hedging their bets and will lead to them getting eaten alive.
You’re probably right because people usually don’t have an appetite for risk, but OpenAI is still a startup, and one does not join a startup without an appetite for risk. At least before ChatGPT made the company famous, which was recent.
I’d follow Sam and Greg. But N=1 outsider isn’t too persuasive.
I do not believe it is possible for them to have thought this through. I believe they'll have read the governing documents, and even had some good lawyers read them, but no governance structure is totally unambiguous.
Something I'm immensely curious about is whether they even considered that their opposition might look for ways to make them _criminally_ liable.
Any decision that doesn't make the 'line go up' is considered a dumb decision. So to most people on this site, kicking Sam out of the company was a bad idea because it meant the company's future earning potential had cratered.
pushing to call it a coup is an attempt to control the narrative.
That being said, this is a case of biting the hand that feeds you. An equivalent would be a nonprofit humiliating its biggest donor. The donor can always walk away, taking her future donations with her, but whatever she's already donated stays at the nonprofit.
Once the avalanche has stopped moving that's a free decision, right now it could be costly.
Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).
And, incidentally, if there is a criminal angle, that's probably the only place you might possibly find it, and it would take the SEC to bring suit: they'd have to prove that one or more of the board members profited from this move privately, or that someone in their close circle profited from it. Hm. So maybe there is such an angle after all. Even threatening that might be enough to get them to fold: if any of them or their extended family sold any Microsoft stock prior to the announcement, they'd be fairly easy to intimidate.
Here is what I understand by table stakes: https://brandmarketingblog.com/articles/branding-definitions...
If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.
As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.
Please get real.
I'd guess, OpenAI without Sam Altman and YC/VC network is toothless. And Microsoft's/VC/media leverage over them is substantial.
Why would they care about that?
For specific things like new words and facts this does matter, but I think they're not in real trouble as long as Wikipedia stays up.
I'm not sure that's actually true anymore. Look at any story about "growth", and you'll see plenty of skeptical comments. I'd say the audience has skewed pretty far from all the VC stuff.
Or they'll do something hilarious like sell VCs on a world wide cryptocurrency that is uniquely joined to an individual by their biometrics and somehow involves AI. I'm sure they could wrangle a few hundred million out of the VC class with a braindead scheme like that.
"Table stakes" simply means having enough money to sit at the table and play, nothing more. "Having a big pile of GPUs is table stakes to contest in the AI market."
Specifically, cofounder strife is one of the major reasons startups don't get as far as they could.
If I recall, it was Jessica Livingston's observation.
Don't you think the board must have sought legal counsel before acting? It is more likely than not that they checked with a lawyer whether what they were doing was within their legal rights.
I don't think OpenAI board has any responsibility to care for Microsoft's stock price. Such arguments won't hold water in a court of law. And I don't think the power of Microsoft's legal department would matter when there's no legal basis.
I’m just not sure it would be totally starting from scratch since there is more of a playbook and know how.
What evidence were you expecting to find? The board said that Sam wasn't candid in his communication. I've yet to see any evidence that he wasn't. Unless the communication has been recorded, and somehow leaks, there won't be any evidence that we can see.
He talks on his blog about how learning ML made him feel like a beginner again (which was a way for him to attract talent willing to learn ML to OpenAI): https://blog.gregbrockman.com/its-time-to-become-an-ml-engin...
And the evidence we've seen so far doesn't refute the idea that the board isn't seriously considering taking him back. The statements we've seen are entirely consistent with "there was a petition to bring him back sent to the board, and nothing happened after that."
The playbook, a source told Forbes, would be straightforward: make OpenAI's new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors. https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
My curiosity stems from whether the board was involved in signing the contract for Microsoft's investment in the for-profit entity, and where the state might set the bar for fraud or similar crimes. How was the vote organized? Did any of them put anything in writing suggesting they did not intend to honor all of the terms of the agreement? Did the manner in which they conducted this business rise to the level of criminal negligence in their fiduciary duty?
I feel like there are a lot of exciting possibilities for criminality here that have little to do with the vote itself.
... and also +1 to your whole last paragraph.
> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.
They control an entity that accepted $10B from Microsoft. Someone signed that term sheet.
100 hours a week is two and a half full-time jobs. People who believe that number should consider how they would live going to a second full-time job after their day job ends, and working the remaining half-time job on weekends as well.
Under ideal conditions, someone might be doing it. But people shouldn't be throwing around numbers like these without any time-tracking evidence.
Not necessarily. An unpopular leader can be even easier to overthrow, because the faction planning the coup has a higher chance of gaining popular support afterward. Or at least they can expect less resistance.
Of course, in reality, political and/or military leaders are often woefully bad at estimating how many people actually support them.
> Someone signed that term sheet.
Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?
One can imagine Microsoft, for example, swooping in and acquiring a larger share of the for-profit entity (and an actual seat on the board, dammit) for more billions, eliminating the need for any fundraising for the foreseeable future.
If a lot of top engineers follow sama out, now that's a real problem.
> Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?
There’s nothing binding on a term sheet.
Another way to think about these is that companies are basically small countries.
If Altman takes all of the good engineers and researchers with him, OpenAI is no more.
So the board can be the boss of nothing, sure, without the ability to do anything - lead the organisation, raise funds, and so on.
Perhaps they could hire someone to replace Sam Altman, but that would require a much larger company whose employees are indifferent to the leadership, like EA or something.
OpenAI is much smaller and more close-knit.
They probably should have, but they may have not.
> It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.
It is. But having the legal rights to do something and having it stand unopposed are two different things, and when one of the affected parties is the proverbial 900-pound gorilla, you tread more than carefully. If you do not, you can expect some backlash. Possibly a lot of backlash.
> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.
Not formally, no. But that isn't what matters.
> Such arguments won't hold water in a court of law.
I'll withhold comment on that until I've seen the ruling. But what does and does not hold water in a court of law isn't something to bet on unless a case is extremely clear cut. Plenty of court cases have been won because someone managed to convince a judge of something that you and I may think should not have happened.
> And I don't think the power of Microsoft's legal department would matter when there's no legal basis.
The idea here is that Microsoft's - immense - legal department has the resources to test your case to destruction if it isn't iron-clad. And it may well not be. Regardless, suing the board members individually is probably threat enough to get them to back down instantly.
We had the whole thing - including the JV - reversed in court in spite of them having the legal right to do all this. The reason: the judge was sympathetic to the argument that the JV was apparently a sham created just to gain access to our code. The counterparty was admonished, a notary public who had failed in their duty to act as an independent party got the most thorough ear-washing I've ever seen in a court, and we were awarded damages plus legal fees.
What is legal, what you can do, and what will stand up are not always the same thing. Intent matters. And what also really matters is what OpenAI's bylaws really say, and to what extent the non-profit's board members exercised their duty to protect the interests of the parties who weren't consulted and who did not get to vote. This so-called duty of care - the term here in NL, not sure what the American equivalent is - can weigh quite heavily.
The confidentiality part and the "no shop" part of a term sheet are definitely binding, and if you break those terms you'll be liable for damages.