They achieved AGI internally, but didn't want OpenAI to have it. All the important people will move to another company, following Sam, and OpenAI is left with nothing more than a rotting GPT.
They planned all this from the start, which is why Sam didn't care about equity or long-term finances. They spent all the money in this one-shot gamble to achieve AGI, which can be reimplemented at another company. Legally it's not IP theft, because it's just code which can be memorized and rewritten.
Sam got himself fired intentionally, which gives him and his followers a plausible cover story for moving to another company and continuing the work there. I'm expecting that all researchers from OpenAI will follow Sam.
Seriously, I’m asking. Like… if you were an engineer who worked on UNIX System V at AT&T/Bell Labs and contributed code to the BSDs from memory alone, would you really be liable?
I am not dismissing the possibility, far from it. It sounds very plausible. But are there any credible reports to back it up?
So unless any of the necessary bits are patented, I highly doubt an argument against them starting a new company will hold in the courts.
Sometimes contracts include a cool-down period before a person can seek employment in the same industry/niche, but I don’t think that will apply in Sam’s case, as he was a founder.
Also, the argument that he got himself fired intentionally doesn’t have any substance. What would he gain from that? If anything, leaving on his own terms would have been a much stronger position. I don’t buy the getting-fired-and-having-no-choice-but-to-start-an-AGI-company argument.
An interesting twist would be if he joins Elon in his pursuit. Pure speculation, sharing it just for amusement. I don’t think they’ll ever work together. Can’t have two people calling the shots at the top. Leaves employees confused and rarely ever works. Probably not very good for their own mental health either.
It's just a fun theory, which I think is plausible. It's based on my personal view of how Sam Altman operates, i.e. very smart, very calculating, makes big gambles for the "greater purpose".
It's very difficult to enforce anything like this in California. They can pay him to not work, but can't just require it.
That is not how IP law works. Even writing new code based on the IP developed at OpenAI would be IP theft.
None of this really makes sense when you consider that Ilya Sutskever, arguably the single most important person at OpenAI, appears to have been a part of removing Sam.
Someone else can probably say it better than I can, but that's how I understand it at this moment.
https://www.theverge.com/2017/10/19/16503076/oracle-vs-googl...
The source of the speculation can raise or lower the probability of it being true. For instance, a journalist who covers OpenAI vs. a random tweeter (now X’er?) with no direct connection. It’s a loose application of Bayesian reasoning: knowing how likely a given source is to assert the claim when it's true vs. when it's false can significantly shift the probability you should assign to the speculation itself.
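That update can be sketched with Bayes' rule directly. The numbers below are entirely made up for illustration; the point is only that the same claim from a well-placed journalist should move your belief far more than from a random account:

```python
# Toy Bayes-rule sketch: how much a source's credibility should
# shift belief in a claim. All probabilities here are assumptions.

def posterior(prior, p_assert_given_true, p_assert_given_false):
    """P(claim is true | this source asserts it), via Bayes' rule."""
    numerator = p_assert_given_true * prior
    denominator = numerator + p_assert_given_false * (1 - prior)
    return numerator / denominator

prior = 0.05  # baseline belief in the theory before hearing anyone

# A journalist on the OpenAI beat: asserts true claims far more
# often than false ones (hypothetical 0.6 vs 0.1).
print(posterior(prior, 0.6, 0.1))    # -> ~0.24

# A random tweeter: asserts the claim almost regardless of truth
# (hypothetical 0.3 vs 0.25), so the update is small.
print(posterior(prior, 0.3, 0.25))   # -> ~0.06
```

Same prior, same claim; only the likelihood ratio of the source changes, and the posterior moves from roughly 6% to roughly 24%.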