I have no inside information. I don't know anyone at OpenAI. This is all purely speculation.
Now that that's out of the way, here is my guess: money.
These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.
Once again, this is all pure speculation. I have not spoken to anyone at OpenAI, or anyone at Microsoft, or anyone at all, really.
EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...
That's what I didn't understand about the world of the really wealthy until I started interacting with them on a regular basis: they are still aiming to get even wealthier, even the ones who could fund their families for the next five generations. With a few very notable exceptions.
Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?
It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.
It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.
I'd imagine there's some internal political drama going on or something we're missing out on.
Not if you think the utterly incompetent board proved itself totally untrustworthy on safe development, while Microsoft, as a relatively conservative, staid corporation, is seen as ultimately far more trustworthy.
Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.
It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.
And relative to startups, Microsoft is far more predictable and less risky in how it manages things.
Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.
Getting Cochrane vibes from Star Trek there.
> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.
I wonder how history will view Sam Altman.
I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."
Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working at OpenAI fulfill that for most employees?
I probably fall into that "highest earners on the planet" bucket, statistically speaking. I certainly don't feel like it: I still live in a one-bedroom apartment and I'm having to save up to put a down payment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.
Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.
Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.
In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.
How those numbers line up to the rest of the world does not matter.
It is wrong to assume Microsoft cannot build safe AI, especially within a separate OpenAI-2, better than a for-profit entity operating inside a non-profit structure can.
I think it only seems that way because the open-source world has worked much harder to break into that garden. Apple put a .mp4 gate around your music library. Microsoft put a .doc gate around your business correspondence. And that's before we get to the Mono debacle or the EEE paradigm.
Microsoft is a better corporate citizen now only because untold legions of keyboard warriors have stayed up nights reverse-engineering and monkeypatching (and sometimes litigating) to break out of its walls, more so than against anyone else. That history isn't so easily forgotten.
First, there are strong diminishing returns to well-being from wealth, meaning that moving oneself from the top 0.5% to the top 0.1% of global income earners is a relatively modest benefit. This relationship is well studied by social scientists and psychologists. Compared to the potential stakes of OpenAI's mission, the balance of importance should be clear.
Second, employees don't have to stay at OpenAI forever. They could support OpenAI's existing not-for-profit charter now, and use their earning power later in life to boost their wealth. Being super-rich and supporting OpenAI at this critical juncture are not mutually exclusive.
Third, I will simply say that I find placing excessive weight on one's self-enrichment to be morally questionable. It's a claim on human production and labour which could instead be given to people without the basic means of life.
Let's say you've got $100 million. You want to do whatever you want to do. It turns out what you want is to buy a certain beachfront property. Or perhaps curry the favor with a certain politician around a certain bill. Well, so do some folks with $200 million, and they can outbid you. So even though you have tons of money in absolute terms, when you are using your power in venues that happen to also be populated by other rich folks, you can still be relatively power-poor.
And all of those other rich folks know this is how the game works too, so they are all always scrambling to get to the top of the pile.
Maybe the people who are actually working on it, and who are also the world's best researchers, have a better understanding of the safety concerns?
OpenAI employees are as aware as anyone that tech salaries are not guaranteed to stay this high as the technology develops. Assuming you can make that money back later is far from a sure bet.
Millions now and being able to live off investments is.