This is really, really clearly incestuous tech media stuff as part of a pressure campaign. Sam is the darling of tech media, and he's clearly instigated this reporting: they're reporting his thoughts, not the Board's, in an article that purports to know what the Board is thinking. The investors who aren't happy (the point of a non-profit is that it's allowed to make investors unhappy in pursuit of the greater mission!) have an obvious incentive to join him in this pressure campaign, and then all he needs for "journalism" is one senior employee who's willing to leave for Sam to tell the Verge that the Board is reconsidering. Boom: massive pressure campaign and the perception of the Board flip-flopping without them doing any such thing. If they had done any such thing and there was proof of it, the Verge could have quoted the thoughts of anyone on the Board, stated it had reviewed communications and verified they were genuine, etc.
1. Met with every major head of state except Xi and Putin. He is the face of AI, not just for OpenAI but for the entire world. The entire AI industry would hate for this to happen. 2. Led a company from a $2 billion valuation to nearly $80 billion in a year.
There is no precedent in startup history to get rid of a CEO at this stage.
It was a mandate: two-thirds of the board voted in favor of relieving Sam Altman of his duties. The question now is why, and how that plays out. It is clearly what the board wanted.
But the board seems to have a weak hand. It can decide to disappoint the for profit investors. But it doesn’t own Sam, or the vast majority of the workers, and maybe not much of the know how. And they can walk if the board disappoints them.
The board’s altruism might be great, but it lacks the legal tools to do what it wants, against organized labor backed by unlimited capital.
Stop making up nonsense please.
He did none of the research that fuels OpenAI's ambitions and future prospects; that's mostly done by people like Sutskever, Radford, and many more brilliant scientists.
The same media that promoted the schizoid idea that AGI is around the corner and blew AI out of proportion.
The same media that would not hesitate to do character assassinations of people opposing Altman.
The media is corrupt and incompetent. To be replaced soon by the monster they created.
Though I think it’s best to refrain from calling something a “dumb take”.
There’s no evidence of that, only your assumptions. Lots of knowledgeable folks outside the media, who couldn’t care less about a “pressure campaign” even if it did exist, think the board was clueless and got duped into making a huge mistake with the coup.
Sam Altman was fired. 4 other key people quit and it seems more will follow and join Sam's new venture. This outcome would be a disaster for Microsoft, for other OpenAI investors and for OpenAI. So the board is, per multiple sources, talking with Sam Altman to return. The board declined to comment and is free to clarify any inaccuracies.
There's no need for a spin, the board has miscalculated and got itself in a bad spot.
Are you part of OpenAI governance, or any company's governance structure? If not, does it really matter whether someone is exchangeable or not for you?
A non-profit isn’t supposed to have investors. This structure should never have been allowed in the first place (nor should IKEA’s).
Good job if you can get it.
What remains to be seen is just how closely the board holds the charter to their hearts and whether the governance structure that was built is strong enough to withstand this.
> There’s no evidence of that
The leaks themselves, whether or not based in fact, are evidence of that. The only reason for someone credible enough to be taken seriously to take the information contained in this Verge article, or in the similarly sourced but slightly different Bloomberg narrative, to the media, whether or not it is true, is to use public pressure to attempt to shape the direction of events.
EDIT: To be clear, it's evidence of the "pressure campaign" part; to the extent that the "incestuous tech media" part has any substantive meaning, I'm not sure it's evidence of that.
The non-profit doesn't have investors. OpenAI Global, LLC isn't the non-profit; it's a for-profit over which the non-profit has complete governance control.
Dependable leaders really do have that much value to their organizations. This is similar to why, in critical areas like medicine, old-and-dependable things are valued over new and shiny. The older things have lower risk and a strong track record. That added dependability is more important than being the newer “better” but riskier option. Back to this topic: how many CEOs with track records managing $80 billion AI organizations are ready to replace Altman? Because OpenAI is well ahead in the field, they don’t need big risky changes; they need to reliably stay the course.
At a minimum, taking your largest supplier and customer for a ride is probably a bad idea.
They don't want to run a developer/enterprise ChatGPT platform.
Google cares about Search, Apple about Siri, Meta about VR/ads. But those three are investing heavily in their own LLMs, which at some point may best OpenAI's.
But non-profits aren't a regular business and their ultimate obligation is to their charter. Depending on just what the level of misalignment was here, it's possible that the company becoming nonviable due to terminating Altman is serving the charter more closely than keeping him on board.
No one posting here has enough detail to really understand what is going on, but we do know the structure of OpenAI, and the operating agreement for the for-profit LLC makes it a mistake to view the company through the same lens as we would a regular for-profit company.
Never been a fan of the “you can’t complain about any bad outcome you agreed could happen” argument.
I am not American and have no idea what you are talking about.
Sam Altman channeled what was great research into a dominant $100b business in record time.
That is not trivial and not every CEO can do that.
If they do nothing, then public perception harms their ability to raise further capital, and employees leave for Altman's new company. If they cave to the pressure (despite that being objectively the financially right decision), they lose their board seats and Sam comes back, proving they overplayed their hand when they fired him. They're basically in a lose/lose situation, even if this article is sourced from entirely biased and fabricated information. And that's exactly what reveals them as incompetent.
Their mistake was making a move while considering only the technicalities of their voting power, and ignoring the credibility they had to make it. Machiavelli is rolling in his grave...
The supply bottlenecks have been around commercializing the ChatGPT product at scale.
But pretraining the underlying model I don't think was on the same order of magnitude, right?
Of course it’s legal, the comment was that it shouldn’t be.
But if you sign an agreement saying you understand you should treat your investments more like donations, and that everything is secondary to the goals of the non-profit, and then are upset that your goals were not prioritized over the charter of the non-profit, I'm going to reserve the right to think you're a hypocrite.
You lose other actors who only joined to work with Brad for one. You lose part of your audience and you lose distribution and press opportunities.
If it weren't for Sam pushing for the version that became GPT-3.5, the popularity that followed, and most recently the GPT-4 push, we would still be waiting on the brilliant people. Google was way ahead in this space but failed to release anything.
As a developer I understand the impulse to belittle the business side as providing little value, but as someone who has tried to get the masses to adopt my software, my respect for their ability to solve non-technical problems has grown.
Through any lens, if Microsoft pulls their GPUs and funding, OpenAI is through.
No, pissing Microsoft off in this situation is not a good idea, because Microsoft can shut the whole organization down.
But if Altman has a new venture that takes first-mover advantage on a whole different playing field, MS could easily get left in the dust.
Neither Microsoft nor anyone else said they deeply believed in and prioritized OpenAI’s charter over their own interests. They may have agreed to it, and they must abide by agreements, but this is not a case of claiming one set of principles while acting contrary to them.
Moreover, there is an impartiality issue here in the tech press. A lot of the tech press disagree with the OpenAI Charter and think that Sam's vision of OpenAI as basically Google but providing consumer AI products is superior to the Charter, which they view in incredibly derogatory terms ("people who think Terminator is real"). That's fine, people can disagree on these important issues!
But I think as a journalist it's not engaging fairly with the topic to be on Sam's political side here and not even attempt to fairly describe the cause of the dispute, which is the non-profit Board accusing Sam Altman of violating the OpenAI charter which they are legally obligated to uphold. This is particularly important because if you actually read the OpenAI Charter, it's really clear to see why they've made that decision! The Charter clearly bans prioritising commercialisation and profit seeking, and demands the central focus be building an AGI, and I don't think a reasonable observer can look at OpenAI Dev Day and say it's not reasonable to view that as proof that OpenAI is no longer following its charter.
Basically, if you disagree with the idea of the non-profit and its Charter, think the whole thing is science-fiction bunk and the people who believe in it are idiots, I think you should argue that instead of framing all of this as "It's a coup" without even disclosing that you don't support the non-profit Charter in the first place.
This is wishful thinking. If an employee is inclined to follow the innovation, it's clear where they'll go.
But otherwise, the point you raise is a good one: this is about the charter of the board. Many of us are presuming a financial incentive, but the structure of the company means they might actually be incentivized to stop the continued development of the technology if they think it poses a risk to humanity. Now, I personally find this to be hogwash, but it is a legitimate argument for why the board might actually be right in acting apparently irrationally.
The Board has the power to determine whether Sam is fulfilling his fiduciary duty and whether his conflicts of interest (WorldCoin, Humane AI, etc) compromise broad benefit.
"OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period."
"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."
Given the complex org structure, I wouldn’t be surprised if the non-profit (or at least its board) wasn’t fully aware of the contract terms/implications.
I think you might have better luck grasping the situation if you put a little more effort into understanding it rather than jumping to put words in the mouths of others. Nobody said whether they support the non-profit charter or not in the first place, and as far as what's happening right now is concerned, the non-profit charter has nothing to do with it.
550 of 700 OpenAI employees have just told the board to resign. Altman is going to MSFT and taking his org with him. Regardless of what the board says, who do you think really has the power here -- the person who has and already had the full support of the org he built around him, or a frankly amateurish board that is completely unequipped for executing on a highly public, high stakes governance task presented in front of it?
Unfortunately, not only can you not charter public opinion, but those who try often see it backfire by making clear their air of moral superiority rather than leaning on the earned mandate to govern the rank and file they are supposed to represent. The board, and it seems you, will simply be learning that lesson the hard way.