Nov 6 - OpenAI DevDay, with new features including build-your-own GPTs and more
Nov 9 - Microsoft cuts employees off from ChatGPT, citing "security concerns" [0]
Nov 9 - OpenAI experiences severe downtime that the company attributes to a "DDoS" (not the correct term for excess usage) [3]
Nov 15 - OpenAI announces no new ChatGPT Plus upgrades [1] but still allows regular signups (and still does)
Nov 17 - OpenAI fires Altman
Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.
This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]
Extending the theory further: prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.
Further: CTO Mira Murati and others were internally concerned about the launch but were overruled by the CEO. She kicked the issue up to the board, hence their trust in her taking over as interim CEO.
edit: added note on DDoS (thanks kristjansson below) - and despite the downtime, it was only upgrades to ChatGPT Plus with the new features that were disabled. Also added a note on why the CTO would take over.
[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...
[1] https://twitter.com/sama/status/1724626002595471740
[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...
[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...
At least one of them must jointly make this decision with the three outside board members. I’d say it’s more likely to be business related. (In addition, the CTO is appointed as the interim CEO.) (Edit: But obviously we currently don’t really know. I think the whistleblower theory below is possible too.)
The announcement: https://openai.com/blog/openai-announces-leadership-transiti...
“OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner. …..
As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.“
Previous members: https://openai.com/our-structure
“Our board OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.”
There is no way that sama is the only person in this set of people to have unique information on critical privacy incidents or financials or costs of server operations, because these issues don't originate with him.
If some version of this turned out to be true, I would be seriously confused about ground-truth transparency in the company and how the fuck they set the whole thing up such that this was even an option. But again, this is why I'd say: implausible.
Hence, they trust her to take on the interim role.
Again, all speculative.
"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."
So, if I were to speculate, it was because they were at odds over profit/non-profit nature of the future of OpenAI.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
"review process by the board, which concluded that he was not consistently candid in his communications with the board"
OK, so they tell us he was lying, which is precisely what "not consistently candid in his communications" means.
Possible topics for lying:
* copyright issues to do with ingestion of training data
* some sort of technical failure of the OpenAI systems
* financial impropriety
* some sort of human resources issue - an affair with an employee
* other - some sort of political power play? Word from Satya Nadella - "get rid of him"?
Possibly the reason is something that the board members felt exposed them personally to some sort of legal liability, thus if they did not act then they would have to pay a legal price later.
It has to be pretty serious to not make it public.
However big his transgressions may be, their actual impact is finite, while the speculation can be infinite.
AI doesn't “learn”; it depends on data. The more, the better. This guy wanted to get as much as possible, at all costs, to make their chatbot appear more intelligent.
I have the strong suspicion we will see a bunch of revelations soon, some covering what I stated above.
EDIT:
The episode is here: https://www.youtube.com/watch?v=4spNsmlxWVQ
"Somebody has to own the residual value of the company; Sam controls the nonprofit, and so the nonprofit, after all equity gets paid out at lower valuations, owns the whole company. Sam Altman controls all of OpenAI if it's a trillion-dollar valuation. Which, if true, would be a huge scandal."
You misunderstand how these corporate situations work. He will fall upward to a better job someplace else if he chooses.
Adam Neumann, who started then destroyed WeWork, already raised $350 million from Andreessen Horowitz for another real estate company called Flow.
Regardless of the reason, the longer OpenAI waits to explain, the more it could damage corporate and developer trust in using its AI.
Well, he did get a few billion dollars of lesson on how to not run such a company, making him quite uniquely qualified for this position.
All these other conspiracies are ridiculous and do not at all reflect much simpler, economics-driven realities that the board's backers - investors - are interested in.
It's likely that Altman and Brockman wanted to take an economically positive offer now, say a complete buyout from Microsoft, while the rest of the board wanted to do an additional fundraising round that would bring far less cash but a far higher valuation. Now that the private fundraising is probably signed, those guys are out.
In a statement to CNBC, Microsoft said the ChatGPT temporary blockage was a mistake resulting from a test of systems for large language models.
“We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” a spokesperson said. “We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”
Pretty much nothing changed positively or significantly after the Snowden revelations, the Panama Papers, etc.
Maybe Sam lied about his personal life to the board, and now it's impacting business?
Didn't we just have a topic here on HN about how not disclosing a breach within 4 days is securities fraud? More than 4 days have passed since Nov 9, so either there was no (material) breach, or Microsoft committed securities fraud and somehow expects to get away with it.
I'll argue that in this day and age, any founder/C-level person who has "created" billions in value, no matter how much of a paper tiger it is, will almost always get another shot. If SBF or Elizabeth Holmes weren't physically in prison, I bet they'd be able to get investment for whatever their next idea is.
It says he lied, explicitly, just with slightly nicer words. Whether he did or not, that is the definitive reason the board is giving.
https://finance.yahoo.com/news/softbank-takes-14b-hit-wework...
Adam is good at making people rich, but those people are not his investors.
Neumann and Holmes and SBF lost their benefactors money.
The claim is that investors are interested in executives who they perceive to have created billions in value, and that's analogous to how NFL teams are interested in people who run fast.
I'd say the opposite; given the way CEOs usually part with firms even after misconduct investigations, it needs to be very serious for the “not consistently candid with the board” to be made public. (It needs to be only mildly serious for it not to be hidden under a veil of “resigned to spend more time with his family/pursue other interests/pet his llama,” rather than openly being a dismissal where the board “no longer has confidence in his ability to continue leading.”)
Yes: suggesting he was not as candid as necessary is business libel unless true.
And since Brockman was also booted, he may have been involved.
It's not clear what the board was trying to do that he interfered with. There is no clear legal standard on what a CEO must divulge, and CEOs often get to wait to tell board members bad news until the whole board meets and the issue has been investigated.
OpenAI and ChatGPT are great and get a lot of mind-share. But they are far from the only game in town, and at this still very early stage of the tech cycle, the outwardly visible leader can easily change in months.
I would think it is some kind of asset transfer, maybe the model, maybe the data, to a party that was not disclosed to the board.
Other reasons, like those you listed above, warrant an investigation, and the board might have an incentive to bury it.
NFL teams are interested in players who can actually run fast, not players who claim they can but turn out to be lying, causing the team to lose because they cannot run fast.
> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.
So given the latest statement from the board emphasizing their mission, it could be that Brockman and Sutskever were not able to participate in the board decision to fire Altman, making it a 3-to-2 or 4-to-1 vote against Altman.
I doubt anything can damage the almost religious belief in chatgpt today. The inertia is huge.
(1) Unless there is public litigation involved, OpenAI will not disclose the reason in substantial detail.
(2) It will not, more than momentarily, disrupt the whole AI market if they do not.
(If it is something that will disrupt the whole AI market, there is likely to be litigation and public information about the basis of the firing.)
Investors are interested in people they can use to make money. The latter are easier to use, but the former will suffice. It just depends on when you sell.
Others are "think" and "conscious".
I take "fall upward" to be a typo of "fail upward".
The next sentence explicitly compares the situation to WeWork.
My interpretation is correct, it's a bizarre post, I'm done with this thread, have a good day.
The Auth/DDoS event adds a bit of weight to OP's original theory. It's not a justification on its own.
I think the business of running a scam or a fraudulent company is quite different to an actual business.
It's not that OpenAI will fall upward. Sam Altman is not OpenAI, especially after this latest announcement.
The next sentence compares him to the WeWork CEO.
It's not that OpenAI is like WeWork; it's that the disgraced CEO of OpenAI is like the disgraced CEO of WeWork.
Now? Yes for Kenneth Lay (assuming he was still alive and/or not hiding on a desert island under a new identity if I put on my tin foil hat)... Madoff, probably not.
Edit: Also, yes, it's hard to sweep things under the rug. We don't know the timeline of events, and we're seeing an instance where Altman failed to hide something.
The details are anyone's guess. But if we're engaging in wild speculation, how about this weird coincidence: one day after Xi Jinping and Sam Altman are in the same place, Sam Altman is abruptly fired.
- It's a server issue, meaning someone fucked up their JavaScript and cached a session key or something. It's a minor thing; it could get the specific dev fired in the worst case, and it is embarrassing, but it is solvable.
- It's inherent to how the AI works, and thus it is impossible to share a ChatGPT server with someone else without sooner or later leaking knowledge. That would mean the company cannot scale at all, because they'd need to provide each client their own separate server instance.
If this was something Sam knew and kept from the board, that'd be fireable. And it'd be catastrophic, because it'd mean no usable product until a solution is found.
I somehow doubt it is something like this, but if we keep seeing security issues and private chats leaking, it is a possibility.
> I feel compelled as someone close to the situation to share additional context about Sam and company.
> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
[0] https://www.reddit.com/r/OpenAI/comments/17xoact/sam_altman_...
[1] take it with a grain of salt
AND ... post the WeWork debacle, Neumann has once again succeeded in raising a massive investment.
If Sam made a deal with MSFT that required board approval they would be mad, but not this mad. The board feels betrayed, and Sam being the secret owner of OpenAI through the foundation checks all the boxes.
I have no doubt that Altman is deeply embedded in the techbro good old boys network to get another job, but that doesn't change the fact his (now previous) employer released a blog post saying he LIED TO THE BOARD about something severe enough that they had to insta-sack him.
The fact they timed the announcement actually implies some planning, which means the reason couldn't be so damaging that they had to fire him immediately after discovering it. (Of course, it's possible that only by coincidence, an unplanned revelation happened at a convenient time to fire him.)
Why would a 2% drop bother him?
No it doesn't. "Not being candid" does not explicitly mean lying. It's like the old tea towel joke where the people at the bottom say "it's shit" and the manager one rung up says "it's manure" and the next one says "it's fertilizer" and by the time it's reached the CEO they're saying "it promotes growth".
No clear transition plan. In what situation would a board fire the CEO of the world's greatest tech sensation since who knows when, in a matter of hours?
But that's not what the board said.
Okay...
> (to the positive)
what?
> of responses.
what?
This comment doesn't make any sense. Can you clarify? Please reword it rather than defending the original phrasing - there are so many ambiguities.
[1] ChatGPT "lying is defined as intentionally making a false statement. If you are omitting details but not actually stating anything false, this may not strictly meet the definition of a lie."
There are bound to be a few people who have a soft spot for him and will give him money again.
Altman and Brockman have yet to share their side of the story.