Imagine you were the CTO of a company, massively underestimated your AWS bill, and had to present your board with an enormous surprise. Maybe something like that happened?
Or, to speculate on the extremely negative side: what if the training and operating costs ballooned to such a degree that the deal with Microsoft was an attempt to plug the cash hole without having to go to the board requesting an enormous loan? Because the fact that Copilot (edit: previously known as Bing Chat and Bing Image Creator) is free while ChatGPT (edit: and DALL-E 3) is not should be a red flag...
Of sorts.
ChatGPT is actually a farm of underpaid humans, located somewhere in southeast Asia.
and "no longer has confidence" points to something more serious than underestimating costs.
Given the sudden shift in billing terms, that is quite possible.
I'd assume that running a model that only needs to deal with a single programming language (the Copilot plugin knows what kind of code base it is working on) is _a lot_ cheaper than running the "full" GPT-4.
"not consistently candid in projections for profitability"
"not consistently candid in calculating operation cost increases"
"not consistently candid in how much subscribers are actually using ChatGPT"
etc.
Here's what doesn't add up: Microsoft is literally adding GPT-4, for free, to the Windows 11 taskbar. Can you imagine how much that costs when you look at GPT-4 API rates, or ChatGPT's subscription price? Either Microsoft is burning money, or OpenAI agreed to burn money with them. But why would they do that, when it would compromise $20/mo. subscription sales?
Something doesn't financially add up there.
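To make that "doesn't add up" concrete, here is a hedged back-of-envelope sketch using GPT-4's publicly listed API rates from late 2023 ($0.03 per 1K prompt tokens, $0.06 per 1K completion tokens); the per-user usage figures are purely assumed for illustration:

```python
# Back-of-envelope: what a moderately active chat user might cost
# at GPT-4's late-2023 API rates. All usage numbers are assumptions.
PROMPT_RATE = 0.03 / 1000      # $ per prompt token (public API rate)
COMPLETION_RATE = 0.06 / 1000  # $ per completion token (public API rate)

chats_per_day = 20             # assumed
prompt_tokens = 500            # assumed tokens sent per chat (incl. context)
completion_tokens = 400        # assumed tokens generated per chat

daily = chats_per_day * (prompt_tokens * PROMPT_RATE
                         + completion_tokens * COMPLETION_RATE)
monthly = daily * 30
print(f"~${monthly:.2f}/month at API rates")  # ~$23.40/month
```

Under these assumptions a single active user already exceeds the $20/mo. subscription price, before accounting for free-tier users or the taskbar integration, which is the gap the comment above is pointing at.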
Unless there was evidence that you hadn't merely underestimated but were, e.g., getting a kickback on the cloud costs that you deliberately lowballed in your estimates, they might fire you, but they almost certainly wouldn't put out a press release saying it was for your failure to be candid.
That language indicates that the board strongly believes there was a major lie to the board, an ongoing pattern of systematic misrepresentation, or both.
I'm trying to read the tea leaves and there seem to be quite a few reminders that OpenAI is a non-profit, it's supposed to further the goals of all humanity (despite its great financial success), it's controlled by a board that largely doesn't have a financial interest in the company, etc etc.
Maybe Altman has been straying a bit far from those supposed ideals, and has been trying to use OpenAI to enrich himself personally in a way that would look bad should it be revealed (hence this messaging to get in front of it).
I found a free trial and $10/month ($100/year) after that. I've asked them to consider a free tier for hobbyists who cannot justify the expense, but I'm not holding my breath.
If there is a free tier I did not find, please point me to it!
I'd tend to agree, but "deliberative process" doesn't really fit with this. Sounds like it might have been building for ~weeks or more?
Maybe Sam had been trying to broker a sale of the company without consulting the board first? All speculation until more details are revealed but he must've done something of similar magnitude.
Whether they ultimately wanted to profit from it or not, there is $trillions of value in AI that can only be unlocked if you trust your AI provider to secure the data you transmit to it. Every conversation I’ve had about OpenAI has revolved around this question of fundamental trust.
Of course we have no clue if that's what actually happened. Any conclusions drawn at this point are complete speculation, and we can't conclude anything more specific than "this is probably bad news."
I think the problem there is that the original CTO is now the interim CEO and is on the board. So while that kind of scenario could make sense, it's a little hard to picture how the CTO would not know about something like that, and if they did, you'd presumably not make them CEO afterward.
Though I can't say that the training data wasn't obtained by nefarious means...
To be clear, this is only one possible explanation for Altman's firing. And for my money, I don't even think it's the most likely explanation. But right now, those who rely on OpenAI products should prepare for the worst, and this is one of the most existentially threatening possibilities.
Thinking you can keep it "locked up" would be beyond naive.
Based on future potential. Investors don't know how high OpenAI will go, but they know it's going to go high.
Microsoft is investing billions into OpenAI, and much of it is in the form of cloud services. I doubt there was a surprise bill for that sort of thing. But if there was, and Altman is the one who ordered it, I could see the board reacting in a similar way.
They have proof he for sure lied, but not that he molested his sister growing up.
If it were a different situation where he lied but they had no proof, then you're correct: no statement.
Explains a lot.
If you've got a computer that is equally as competent as a human, it can easily beat the human because it has a huge speed advantage. In this imaginary scenario, even if the model only escaped to your MacBook Pro and was severely limited by compute power, it would still have a chance.
If I were locked inside your MacBook Pro, I can think of a couple of devious tricks I could try. And I'm just a dumb regular human: way above median in my fields of expertise, and at or way below median in most other fields. An "AGI" would be smarter and more capable still.