zlacker

[parent] [thread] 75 comments
1. Emma_G+(OP)[view] [source] 2023-11-20 16:53:23
I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible, on its face.

The board may have been incompetent and shortsighted. Perhaps they should even try and bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter failing to signal where they stand on the crucial issue - on the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.

[1] https://openai.com/charter

replies(27): >>mcny+J1 >>DrJaws+j2 >>FartyM+V3 >>wenyua+p5 >>supriy+T6 >>browni+87 >>gsuuon+a7 >>ninepo+A7 >>danger+Y7 >>pauldd+08 >>leetha+L8 >>dayjah+19 >>Sunhol+3a >>blames+va >>ssnist+za >>coreth+Na >>next_x+3b >>barbar+ib >>KRAKRI+Ub >>dfps+0c >>jkapla+cc >>dreamc+Sd >>nvm0n2+Ud >>PKop+Ee >>kashya+2f >>bart_s+Uf >>zoogen+kx1
2. mcny+J1[view] [source] 2023-11-20 16:58:30
>>Emma_G+(OP)
> I don't really understand why the workforce is swinging unambiguously behind Altman.

I have no inside information. I don't know anyone at OpenAI. This is all purely speculation.

Now that that's out of the way, here is my guess: money.

These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.

Once again, this is all completely speculation. I have not spoken to anyone at OpenAI or anyone at Microsoft or anyone at all, really.

replies(3): >>Emma_G+Z4 >>jonahr+79 >>ta1243+Ea
3. DrJaws+j2[view] [source] 2023-11-20 17:00:44
>>Emma_G+(OP)
Maybe the workforce is not really behind the non-profit foundation and wants its shares to skyrocket, so it can sell and be well off for life.

At the end of the day, the people working there are not rich like the founders, and money talks when you have to pay rent, eat and send your kids to a private college.

4. FartyM+V3[view] [source] 2023-11-20 17:06:54
>>Emma_G+(OP)
> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].

If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.

[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...

replies(7): >>appel+y7 >>leetha+i9 >>anon84+2b >>fizx+tb >>grumpl+oE >>averag+6c1 >>dclowd+Yk1
5. Emma_G+Z4[view] [source] [discussion] 2023-11-20 17:10:16
>>mcny+J1
Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.

EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...

replies(11): >>lol768+h6 >>jacque+I6 >>chango+R7 >>iLoveO+i8 >>crazyg+k9 >>jbomba+nb >>gdhkgd+Zb >>Araina+Ic >>atisha+vd >>chr1+Yh >>golerg+4q
6. wenyua+p5[view] [source] 2023-11-20 17:11:54
>>Emma_G+(OP)
I guess employees are compensated with PPUs. And at face value before the saga, they could be like 90% or even more of the total value of their packages. How many people are really willing to wipe 90% of their compensation out? On the other hand, M$ offers to match. The day employees were compensated with the stock of the for-profit arm, everything that has happened since Friday was set in stone.
7. lol768+h6[view] [source] [discussion] 2023-11-20 17:14:42
>>Emma_G+Z4
You only have to look at humanity's history to see that people will make this decision over and over again.
8. jacque+I6[view] [source] [discussion] 2023-11-20 17:16:02
>>Emma_G+Z4
Why be surprised? This is exactly how it has always been: the rich aim to get even richer and if that brings risks or negative effects for the rest that's A-ok with them.

That's what I didn't understand about the world of the really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even more wealthy, even the ones that could fund their families for the next five generations. With a few very notable exceptions.

replies(1): >>logicc+59
9. supriy+T6[view] [source] 2023-11-20 17:16:49
>>Emma_G+(OP)
Ultimately people care a lot more about their compensation, since that is what pays the bills and puts food on the table.

Since OpenAI's commercial aspects are doomed now and it is uncertain whether they can continue operations if Microsoft withholds resources and consumers switch away to alternative LLM/embeddings services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.

10. browni+87[view] [source] 2023-11-20 17:17:29
>>Emma_G+(OP)
Maybe they believe less in the board as it stands, and in Ilya's commitments, than in what Sam was pulling off.
11. gsuuon+a7[view] [source] 2023-11-20 17:17:35
>>Emma_G+(OP)
I also noticed they didn't speak much to the mission/charter. I wonder if the new entity under Sam and Greg contains any remnants of the OpenAI charter, like profit-capping? I can't imagine something like "Our primary fiduciary duty is to humanity" making its way into the language of any Microsoft (or any bigcorp) subsidiary.

I wonder if this is the end of the non-profit/hybrid model?

12. appel+y7[view] [source] [discussion] 2023-11-20 17:18:23
>>FartyM+V3
That sounds like a reasonable assessment, FartyMcFarter.
13. ninepo+A7[view] [source] 2023-11-20 17:18:29
>>Emma_G+(OP)
Imagine putting all your energy behind the person who thinks worldcoin is a good idea...
replies(1): >>barryr+xe
14. chango+R7[view] [source] [discussion] 2023-11-20 17:19:22
>>Emma_G+Z4
Status is a relative thing and openai will pay you much more than all your peers at other companies.
15. danger+Y7[view] [source] 2023-11-20 17:19:37
>>Emma_G+(OP)
> Given that Sam has since joined Microsoft, that seems plausible, on its face.

He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI were big money, or unemployment.

It seems plausible to me that if the non-profit's concern was commercialisation, then there was really nothing the commercial side could do to appease this concern besides die. The board wants to be rid of all employees and to kill off any potential business; they have the power and right to do that, and it looks like they are.

16. pauldd+08[view] [source] 2023-11-20 17:19:40
>>Emma_G+(OP)
> I don't really understand why the workforce is swinging unambiguously behind Altman.

Lots of reasons, or possible reasons:

1. They think Altman is a skilled and competent leader.

2. They think the board is unskilled and incompetent.

3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.

4. They disagree or are ambivalent towards the non-profit's mission. (Charters are not immutable.)

17. iLoveO+i8[view] [source] [discussion] 2023-11-20 17:20:40
>>Emma_G+Z4
> If they work at OpenAI they are already among the highest lifetime earners on the planet

Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?

It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.

It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.

18. leetha+L8[view] [source] 2023-11-20 17:22:02
>>Emma_G+(OP)
IMO it's pretty obvious.

Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.

Firing Sam means all these OpenAI people who joined for $1 million comp packages looking for an eventual huge exit now don't get that.

They all want the same thing as the vast majority of people: lots of money.

19. dayjah+19[view] [source] 2023-11-20 17:22:44
>>Emma_G+(OP)
Startups thrive, in part, by creating a sense of camaraderie. Sam isn’t just their boss, he’s their leader, he’s one of them, they believe in him.

You go to bat for your mates, and this is what they’re doing for him.

The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it’s a core ingredient of success. Since OAI is very successful, it’s fair to say the sense of togetherness is very strong. Hence the number of folks in the walkout.

replies(1): >>throwa+Bb
20. logicc+59[view] [source] [discussion] 2023-11-20 17:22:55
>>jacque+I6
It's a selection bias: the people who weren't so intrinsically motivated to get rich are less likely to end up as wealthy people.
replies(1): >>munifi+Wi
21. jonahr+79[view] [source] [discussion] 2023-11-20 17:22:57
>>mcny+J1
I'm not sure I fully buy this, only because how would anyone be absolutely certain that they'd make more with Sam Altman in charge? It feels like a weird thing to speculatively rally behind.

I'd imagine there's some internal political drama going on or something we're missing out on.

replies(2): >>DeIlli+P9 >>lisper+oa
22. leetha+i9[view] [source] [discussion] 2023-11-20 17:23:27
>>FartyM+V3
Yep.

What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.

Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.

replies(4): >>jdminh+R9 >>Finbar+ra >>baron8+kb >>return+nc
23. crazyg+k9[view] [source] [discussion] 2023-11-20 17:23:43
>>Emma_G+Z4
> over the safe development

Not if you think the utterly incompetent board proved itself totally untrustworthy of safe development, while Microsoft as a relatively conservative, staid corporation is seen as ultimately far more trustworthy.

Honestly, of all the big tech companies, Microsoft is probably the safest of all, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.

It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.

And relative to startups, Microsoft is far more predictable and less risky in how it manages things.

replies(2): >>ben_w+Ec >>scythe+lf
24. DeIlli+P9[view] [source] [discussion] 2023-11-20 17:25:20
>>jonahr+79
I fully buy it. Ethics and morals are a few rungs on the ladder beneath compensation for most software engineers. If the board wants to focus more on being a non-profit and safety, and Altman wants to focus more on commercialization and the economics of business, if my priority is money then where my loyalty goes is obvious.
25. jdminh+R9[view] [source] [discussion] 2023-11-20 17:25:26
>>leetha+i9
Microsoft has full access to code and weights as part of their deal.
replies(3): >>ben_w+5b >>htrp+Eb >>belter+Pb
26. Sunhol+3a[view] [source] 2023-11-20 17:25:53
>>Emma_G+(OP)
Why should they trust the board? As the letter says, "Despite many requests for specific facts for your allegations, you have never provided any written evidence." If Altman took any specific action that violated the charter, the board should be open about it. Simply trying to make money does not violate the charter and is in fact essential to their mission. The GPT Store, cited as the final straw in leaks, is actually far cleaner money than investments from megacorps. Commercializing the product and selling it directly to consumers reduces dependence on Microsoft.
27. lisper+oa[view] [source] [discussion] 2023-11-20 17:26:42
>>jonahr+79
> how would anyone be absolutely certain that they'd make more with Sam Altman in charge?

Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.

28. Finbar+ra[view] [source] [discussion] 2023-11-20 17:26:58
>>leetha+i9
Except MSFT does have access to the IP, and MSFT has access to an enormous trove of their own data across their office suite, Bing, etc. It could be a running start rather than a cold start. A fork of OpenAI inside an unapologetic for profit entity, without the shackles of the weird board structure.
29. blames+va[view] [source] 2023-11-20 17:27:14
>>Emma_G+(OP)
It's like the "Open" in OpenAI was always an open and obvious lie, and everybody except the nonprofit-oriented folks on the board knew that. Everybody but them is here to make money and only used the nonprofit as a temporary vehicle for credibility and investment that has just been shed like a cicada shell.
30. ssnist+za[view] [source] 2023-11-20 17:27:33
>>Emma_G+(OP)
Seems like the board just didn't explain any of this to the staff at all. So of course they are going to take the side that could signal business as usual, instead of siding with the people trying to destroy the hottest tech company on the planet (and their jobs/comp) for no apparent reason. If the board had said anything at all, the ratio of staff threatening to quit probably wouldn't be this lopsided.
31. ta1243+Ea[view] [source] [discussion] 2023-11-20 17:27:48
>>mcny+J1
> These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money

Getting Cochrane vibes from Star Trek there.

> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.

I wonder how history will view Sam Altman.

replies(1): >>imjons+5e
32. coreth+Na[view] [source] 2023-11-20 17:28:06
>>Emma_G+(OP)
The masses aren't logical; they follow trends until the trends get big enough that it's unwise not to follow.

It started off as a small trend to sign that letter. Past critical mass, if you are not signing that letter, you are an enemy.

Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.

33. anon84+2b[view] [source] [discussion] 2023-11-20 17:28:33
>>FartyM+V3
Yeah, "OpenAI employees would actually prefer to make lots of money now" seems like a plausible answer by default.

It's easy to be a true believer in the mission _before_ all the money is on the table...

34. next_x+3b[view] [source] 2023-11-20 17:28:36
>>Emma_G+(OP)
It is probably best to assume that the employees have more and better information than outsiders do. Also, clearly, there is no consensus on safety/alignment, even within OpenAI.

In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.

35. ben_w+5b[view] [source] [discussion] 2023-11-20 17:28:44
>>jdminh+R9
Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.
36. barbar+ib[view] [source] 2023-11-20 17:29:19
>>Emma_G+(OP)
> The core of the narrative thus far

Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start, before any details were available.

37. baron8+kb[view] [source] [discussion] 2023-11-20 17:29:22
>>leetha+i9
Correct. This is all really bad for Microsoft and probably great for Google. Yet, judging by price changes right now, markets don’t seem to understand this.
38. jbomba+nb[view] [source] [discussion] 2023-11-20 17:29:33
>>Emma_G+Z4
I don't know how much OpenAI pays. But for this reply, I'm going to assume it's in line with what other big players in the industry pay.

I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."

Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?

I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.

39. fizx+tb[view] [source] [discussion] 2023-11-20 17:30:01
>>FartyM+V3
My estimate is that a typical staff engineer who'd been at OpenAI for 2+ years could have sold $8 million of stock next month. I'd be pissed too.
replies(1): >>ergoco+eQ
40. throwa+Bb[view] [source] [discussion] 2023-11-20 17:30:26
>>dayjah+19
Not just Sam: since Greg stuck with Sam and immediately quit, he set the precedent for the rest of the company. If you read this post[0] by Sam about Greg's character and work ethic, you'll understand why so many people would follow him. He was essentially the platoon sergeant of OpenAI and probably commands an immense amount of loyalty and respect. Where those two go, everyone will follow.

[0] https://blog.samaltman.com/greg

replies(1): >>dayjah+bc
41. htrp+Eb[view] [source] [discussion] 2023-11-20 17:30:34
>>jdminh+R9
> Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.

This, unequivocally... knowing how not to waste a very expensive training run is a great lesson.

42. belter+Pb[view] [source] [discussion] 2023-11-20 17:31:14
>>jdminh+R9
Source for your statement?
replies(1): >>jdminh+1e
43. KRAKRI+Ub[view] [source] 2023-11-20 17:31:24
>>Emma_G+(OP)
Most of the people building the actual ML systems don't care about existential ML threats outside of lip service and publishing papers. They joined OpenAI because OpenAI had tons of money and paid well. Now that both are at risk, it's only natural that they start preparing to jump ship.
44. gdhkgd+Zb[view] [source] [discussion] 2023-11-20 17:31:46
>>Emma_G+Z4
If you were offered a 100% raise and kept current work responsibilities to go work for, say, a tobacco company, would you take the offer? My guess is >90% of people would.

Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.

Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.

45. dfps+0c[view] [source] 2023-11-20 17:31:48
>>Emma_G+(OP)
Might there also be a consideration of OpenAI's peak value? If a bunch of competing similar AIs are entering the market, and if the use-case fantasy is currently being humbled, staff might be thinking of bubble valuation.

Did anyone else find Altman conspicuously cooperative with government during his appearance before Congress? Usually people are a bit more combative. Like he came off as almost pre-slavish? I hope that's not the case, but I haven't seen any real position on human rights.

46. dayjah+bc[view] [source] [discussion] 2023-11-20 17:32:30
>>throwa+Bb
Absolutely! Thanks for pointing out that I missed Greg in my answer.
47. jkapla+cc[view] [source] 2023-11-20 17:32:31
>>Emma_G+(OP)
Probably some combination of: 1. Pressure from Microsoft and their e-team 2. Not actually caring about those stakes 3. A culture of putting growth/money above all
48. return+nc[view] [source] [discussion] 2023-11-20 17:33:06
>>leetha+i9
This comment is factually incorrect. As part of the deal with OpenAI, Microsoft has access to all of the IP, model weights, etc.
49. ben_w+Ec[view] [source] [discussion] 2023-11-20 17:34:07
>>crazyg+k9
Apple's walled gardens are probably a good thing for safe AI, though they're a lot quieter about their research — I somehow missed that they even had any published papers until I went looking: https://machinelearning.apple.com/research/
50. Araina+Ic[view] [source] [discussion] 2023-11-20 17:34:21
>>Emma_G+Z4
Focusing on "global earnings" is disingenuous and dismissive.

In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.

How those numbers line up to the rest of the world does not matter.

replies(1): >>Emma_G+hg
51. atisha+vd[view] [source] [discussion] 2023-11-20 17:37:25
>>Emma_G+Z4
It just makes more sense to build it in an entity with better funding and commercialization. There will be 2-3 advanced AIs, and the most humane one doesn't necessarily win out; the winner is the one that has the most resources, is used and supported by the most people, and can do the most. At this point it doesn't seem OpenAI can get that. It seems to be a lose-lose to stay at OpenAI - you lose the money and the potential to create something impactful and safe.

It is wrong to assume Microsoft cannot build a safe AI, especially within a separate OpenAI-2, better than the for-profit within a non-profit structure could.

52. dreamc+Sd[view] [source] 2023-11-20 17:38:56
>>Emma_G+(OP)
> I don't really understand why the workforce is swinging unambiguously behind Altman.

I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.

53. nvm0n2+Ud[view] [source] 2023-11-20 17:39:05
>>Emma_G+(OP)
> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe because the alternative is being led by lunatics who think like this:

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

to which the only possible reaction is

What

The

Fuck?

That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.

If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.

54. jdminh+1e[view] [source] [discussion] 2023-11-20 17:39:33
>>belter+Pb
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.

55. imjons+5e[view] [source] [discussion] 2023-11-20 17:39:51
>>ta1243+Ea
There are non-negligible chances that history will be written by Sam Altman and his GPT minions, so he'll probably be viewed favorably.
56. barryr+xe[view] [source] [discussion] 2023-11-20 17:41:38
>>ninepo+A7
That's a pretty solid no-confidence vote in the board and their preferred direction.
57. PKop+Ee[view] [source] 2023-11-20 17:41:57
>>Emma_G+(OP)
The workforce prefers the commercialization/acceleration path, not the "muh safetyism" and over-emphasis on moralism of the non-profit contingent.

They want to develop powerful shit and do it at an accelerated pace, and make money in the process, not be hamstrung by busy-bodies.

The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.

58. kashya+2f[view] [source] 2023-11-20 17:43:17
>>Emma_G+(OP)
(I can't comment on the workforce question, but one thing below on bringing SamA back.)

Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT to the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)

That said, my (mildly educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with it when viewed through a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.

These sort of Silicon Valley top dogs don't think in terms of sustainability.

Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (I'm approximately their age) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term; it's hard to expand on) can be in such roles.

[1] >>38312294

[2] https://news.ycombinator.com/edit?id=38350890

59. scythe+lf[view] [source] [discussion] 2023-11-20 17:44:14
>>crazyg+k9
Microsoft? Not a walled garden?

I think it only seems that way because the open-source world has worked much harder to break into that garden. Apple put a .mp4 gate around your music library. Microsoft put a .doc gate around your business correspondence. And that's before we get to the Mono debacle or the EEE paradigm.

Microsoft is a better corporate citizen now because untold legions of keyboard warriors have stayed up nights reverse-engineering and monkeypatching (and sometimes litigating) to break out of its walls, more than against anyone else's. But that history isn't so easily forgotten.

replies(1): >>bongod+6l
60. bart_s+Uf[view] [source] 2023-11-20 17:46:12
>>Emma_G+(OP)
Perhaps because, for all of Silicon Valley's and the tech industry's platitudes about wanting to make the world a better place, 90% of them are solely interested in the fastest path to wealth.
61. Emma_G+hg[view] [source] [discussion] 2023-11-20 17:47:54
>>Araina+Ic
I disagree.

First, there are strong diminishing returns to well-being from wealth, meaning that moving oneself from the top 0.5% to the top 0.1% of global income earners is a relatively modest benefit. This relationship is well studied by social scientists and psychologists. Compared to the potential stakes of OpenAI's mission, the balance of importance should be clear.

Two, employees don't have to stay at OpenAI forever. They could support OpenAI's existing not-for-profit charter, and use their earning power later on in life to boost their wealth. Being super-rich and supporting OpenAI at this critical juncture are not mutually exclusive.

Three, I will simply say that I find placing excessive weight on one's self-enrichment to be morally questionable. It's a claim on human production and labour which could be given to people without the basic means of life.

replies(1): >>Araina+1D
62. chr1+Yh[view] [source] [discussion] 2023-11-20 17:53:22
>>Emma_G+Z4
Or maybe they have good reason to believe that all the talk about "safe development" doesn't contribute anything useful to safety, and simply slows down development?
63. munifi+Wi[view] [source] [discussion] 2023-11-20 17:55:56
>>logicc+59
It's a combination of that and the reality that wealth is power and power is relative.

Let's say you've got $100 million. You want to do whatever you want to do. It turns out what you want is to buy a certain beachfront property. Or perhaps to curry favor with a certain politician around a certain bill. Well, so do some folks with $200 million, and they can outbid you. So even though you have tons of money in absolute terms, when you are using your power in venues that happen to also be populated by other rich folks, you can still be relatively power-poor.

And all of those other rich folks know this is how the game works too, so they are all always scrambling to get to the top of the pile.

replies(1): >>jacque+ko
64. bongod+6l[view] [source] [discussion] 2023-11-20 18:03:42
>>scythe+lf
I can install whatever I'd like on Windows. I can run Linux in a VM. Calling a document format a wall is really reaching. If you don't have a document with a bunch of crazy formatting, the open office products and Google Docs can use it just fine. If you are writing a book or some kind of technical document that needs special markup, yeah, Word isn't going to cut it; it never has, and it was never supposed to.
65. jacque+ko[view] [source] [discussion] 2023-11-20 18:15:03
>>munifi+Wi
Politicians are cheap, nobody is outbidding anybody because they most likely want the exact same thing.
66. golerg+4q[view] [source] [discussion] 2023-11-20 18:20:46
>>Emma_G+Z4
> over the safe development of a potentially humanity-changing technology

Maybe the people who are actually working on it, and who are also the world's best researchers, have a better understanding of the safety concerns?

67. Araina+1D[view] [source] [discussion] 2023-11-20 19:07:46
>>Emma_G+hg
Again, no one in California cares that they are "making more than" someone in Vietnam when food and land in CA are orders of magnitude more expensive.

OpenAI employees are as aware as anyone that tech salaries are not guaranteed to stay this high as technology develops. Assuming you can make it back later is far from a sure bet.

Millions now, and being able to live off investments, is.

68. grumpl+oE[view] [source] [discussion] 2023-11-20 19:12:15
>>FartyM+V3
But doesn't Altman joining Microsoft, and them quitting and following, put them back at square one? MS isn't going to give them millions of dollars each to join.
replies(1): >>FartyM+LG1
69. ergoco+eQ[view] [source] [discussion] 2023-11-20 19:57:34
>>fizx+tb
No way it is this much.
70. averag+6c1[view] [source] [discussion] 2023-11-20 21:21:18
>>FartyM+V3
Surely they're already extremely rich? I'd imagine working for a 700 person company leading the world in AI pays very well.
replies(1): >>maxlam+lo1
71. dclowd+Yk1[view] [source] [discussion] 2023-11-20 22:02:30
>>FartyM+V3
Ugh, I’ve never been more disenchanted with a group of people in my life. Not only are they comfortable with writing millions of jobs out of existence, they’re also taking a fat paycheck to do it. At least with the “non-profit” mission keystone, we had some plausible deniability that greed rules all, but of fucking course it does.

All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.

72. maxlam+lo1[view] [source] [discussion] 2023-11-20 22:23:11
>>averag+6c1
Only rich in stocks. Salaries are high for sure but probably not enough to be rich by Bay Area standards
replies(1): >>averag+Jj5
73. zoogen+kx1[view] [source] 2023-11-20 23:12:29
>>Emma_G+(OP)
I believe it is hard to understand these kinds of movements because there isn't one reason. As has been mentioned, it may be money for some. For others it may be anger over what they feel was the board mishandling the situation and precipitating this mess. For others it may be loyalty. For others, peer pressure. Etc.

This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.

Consider this: if 500 out of 770 employees signed the letter, do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and the many employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?

This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical-thinking brains; they are using their social/emotional brains. They are putting community before rationality.

74. FartyM+LG1[view] [source] [discussion] 2023-11-21 00:08:55
>>grumpl+oE
That's why they'd rather Altman rejoin OpenAI, as mentioned.
replies(1): >>kyle_g+dZ5
75. averag+Jj5[view] [source] [discussion] 2023-11-21 21:53:29
>>maxlam+lo1
Sure, but by pretty much any other standard? Over $170k USD puts you in the top 10% of income earners globally. If you work at this wage point for 3-5 years and then move somewhere (almost anywhere globally or in the US), you can afford a comfortable life and probably work 2-3 days a week for decades if you choose.

This is nothing but greed.

76. kyle_g+dZ5[view] [source] [discussion] 2023-11-22 01:32:31
>>FartyM+LG1
The behavior of various actors in this saga does seem to indicate that those actors prefer 'Altman and OpenAI employees back at OpenAI' over 'Altman and OpenAI employees join Microsoft en masse'.