The board may have been incompetent and shortsighted. Perhaps they should even try to bring Altman back and reform themselves out of existence. But why would the vast majority of the workforce back an open letter that fails to signal where they stand on the crucial issue - the purpose of OpenAI and their collective work? Given the stakes the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.
I have no inside information. I don't know anyone at OpenAI. This is all purely speculation.
Now that that's out of the way, here is my guess: money.
These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.
Once again, this is all pure speculation. I have not spoken to anyone at OpenAI, anyone at Microsoft, or anyone at all, really.
At the end of the day, the people working there are not rich like the founders, and money talks when you have to pay rent, eat, and send your kids to a private college.
Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].
If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.
[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...
EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...
That's what I didn't understand about the world of really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even wealthier, even the ones who could fund their families for the next five generations. With a few very notable exceptions.
Since OpenAI's commercial prospects are now doomed, and it is uncertain whether it can continue operations if Microsoft withholds resources and consumers switch to alternative LLM/embeddings services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.
I wonder if this is the end of the non-profit/hybrid model?
He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI are big money or unemployment.
It seems plausible to me that if the non-profit's concern was commercialisation, there was really nothing the commercial side could do to appease that concern besides die. The board wants to be rid of all the employees and to kill off any potential business; they have the power and the right to do that, and it looks like they are.
Lots of reasons, or possible reasons:
1. They think Altman is a skilled and competent leader.
2. They think the board is unskilled and incompetent.
3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.
4. They disagree with, or are ambivalent about, the non-profit's mission. (Charters are not immutable.)
Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?
It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.
It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.
Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.
Firing Sam means all these OpenAI people who joined for $1 million comp packages, looking for an eventual huge exit, now don't get that.
They all want the same thing as the vast majority of people: lots of money.
You go to bat for your mates, and this is what they’re doing for him.
The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it’s a core ingredient to success. Since OAI is very successful it’s fair to say the sense of togetherness is very strong. Hence the numbers of folks in the walk out.
I'd imagine there's some internal political drama going on or something we're missing out on.
What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.
Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.
Not if you think the utterly incompetent board proved itself totally untrustworthy on safe development, while Microsoft, as a relatively conservative, staid corporation, is seen as ultimately far more trustworthy.
Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.
It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.
And relative to startups, Microsoft is far more predictable and less risky in how it manages things.
Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.
Getting Cochrane vibes from Star Trek there.
> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.
I wonder how history will view Sam Altman
It started off as a small trend to sign that letter. Past critical mass, if you are not signing that letter, you are an enemy.
Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.
It's easy to be a true believer in the mission _before_ all the money is on the table...
In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.
Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start, before any details were available.
I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."
Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?
I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.
This, unequivocally .... knowing how not to waste a very expensive training run is a great lesson.
Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.
Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.
Did anyone else find Altman conspicuously cooperative with the government during his testimony before Congress? Usually people are a bit more combative. He came off as almost pre-emptively servile. I hope that's not the case, but I haven't seen any real position from him on human rights.
In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.
How those numbers line up to the rest of the world does not matter.
It is wrong to assume Microsoft cannot build safe AI, especially within a separate OpenAI-2, better than a for-profit operating under a non-profit structure could.
I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.
Maybe because the alternative is being led by lunatics who think like this:
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
to which the only possible reaction is
What
The
Fuck?
That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.
If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.
> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.
They want to develop powerful shit at an accelerated pace and make money in the process, not be hamstrung by busybodies.
The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.
Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)
That said, my (mildly educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with it when viewed through a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.
These sort of Silicon Valley top dogs don't think in terms of sustainability.
Lastly, having just looked at the board[2], I'm left wondering how all these young folks (I'm roughly their age) who lack sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) can be in such roles.
[1] >>38312294
I think it only seems that way because the open-source world has worked much harder to break into that garden. Apple put a .mp4 gate around your music library. Microsoft put a .doc gate around your business correspondence. And that's before we get to the Mono debacle or the EEE paradigm.
Microsoft is a better corporate citizen now because untold legions of keyboard warriors stayed up nights reverse-engineering and monkeypatching (and sometimes litigating) their way out of its walls, more so than against any other company. But that history isn't so easily forgotten.
First, there are strong diminishing returns to well-being from wealth, meaning that moving oneself from the top 0.5% to the top 0.1% of global income earners is a relatively modest benefit. This relationship is well studied by social scientists and psychologists. Compared to the potential stakes of OpenAI's mission, the balance of importance should be clear.
Second, employees don't have to stay at OpenAI forever. They could support OpenAI's existing not-for-profit charter, and use their earning power later on in life to boost their wealth. Being super-rich and supporting OpenAI at this critical juncture are not mutually exclusive.
Third, I will simply say that I find placing excessive weight on one's self-enrichment to be morally questionable. It's a claim on human production and labour which could be given to people without the basic means of life.
Let's say you've got $100 million. You want to do whatever you want to do. It turns out what you want is to buy a certain beachfront property. Or perhaps curry the favor with a certain politician around a certain bill. Well, so do some folks with $200 million, and they can outbid you. So even though you have tons of money in absolute terms, when you are using your power in venues that happen to also be populated by other rich folks, you can still be relatively power-poor.
And all of those other rich folks know this is how the game works too, so they are all always scrambling to get to the top of the pile.
Maybe people who are actually working on it, and who are also the world's best researchers, have a better understanding of the safety concerns?
OpenAI employees are as aware as anyone that tech salaries are not guaranteed to be this high in the future as technology develops. Assuming you can make things back then is far from a sure bet.
Millions now and being able to live off investments is.
All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.
This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.
Consider this, if 500 out of 770 employees signed the letter - do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and many of the employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?
This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical-thinking brains; they're using their social/emotional brains. They are putting community before rationality.
This is nothing but greed.