zlacker

[parent] [thread] 214 comments
1. polite+(OP)[view] [source] 2023-11-22 08:19:38
> there's clearly little critical thinking amongst OpenAI's employees either.

That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.

replies(13): >>dimask+x >>hutzli+02 >>highwa+72 >>lwhi+f4 >>kissgy+v5 >>Satam+A5 >>murbar+hc >>kitsun+ve >>yodsan+el >>achron+Wl >>__loam+Ir >>JCM9+yw >>kiba+3A
2. dimask+x[view] [source] 2023-11-22 08:24:11
>>polite+(OP)
It is not about a different set of information, but different stakes/interests. They are acting first and foremost as investors rather than as employees on this.
replies(3): >>siva7+G1 >>karmas+32 >>Wytwww+sg
◧◩
3. siva7+G1[view] [source] [discussion] 2023-11-22 08:33:19
>>dimask+x
A board member, Helen Toner, made a borderline narcissistic remark that it would be consistent with the company mission to destroy the company when the leadership confronted the board that their decisions put the future of the company in danger. Almost all employees resigned in protest. It's insulting to call the employees investors under these circumstances.
replies(4): >>outsom+O3 >>stingr+c4 >>ah765+y8 >>Ludwig+5n
4. hutzli+02[view] [source] 2023-11-22 08:35:32
>>polite+(OP)
"They have a different set of information than you do,"

Their bank accounts' current and potential future numbers?

replies(1): >>tucnak+i3
◧◩
5. karmas+32[view] [source] [discussion] 2023-11-22 08:35:56
>>dimask+x
Tell me how the board's actions could convince the employees they are making the right move?

Even if they genuinely believe firing Sam keeps OpenAI's founding principles, they couldn't have done a better job of convincing everyone they are NOT able to execute on them.

OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they didn't vote the way you agree with is reaching.

replies(2): >>kortil+t8 >>cyanyd+op
6. highwa+72[view] [source] 2023-11-22 08:36:07
>>polite+(OP)
There’s evidence to suggest that a central group have pressured the broader base of employees into going along with this, as posted elsewhere in the thread.
◧◩
7. tucnak+i3[view] [source] [discussion] 2023-11-22 08:44:04
>>hutzli+02
How is employees protecting themselves suddenly a bad thing? There are no idiots at OpenAI.
replies(2): >>g-b-r+K6 >>pooya1+ot
◧◩◪
8. outsom+O3[view] [source] [discussion] 2023-11-22 08:49:09
>>siva7+G1
> Almost all employees resigned in protest.

That never happened, right?

replies(1): >>ldjb+J7
◧◩◪
9. stingr+c4[view] [source] [discussion] 2023-11-22 08:52:07
>>siva7+G1
Don’t forget she’s heavily invested in a company that is directly competing with OpenAI. So obviously it’s also in her best interest to see OpenAI destroyed.
replies(4): >>lodovi+5a >>muraka+Zh >>doktri+mk >>Philpa+Bl
10. lwhi+f4[view] [source] 2023-11-22 08:52:17
>>polite+(OP)
I think it's fair to call this reactionary; Sam Altman has played the part of 'ping-pong ball' exceptionally well these past few days.
11. kissgy+v5[view] [source] 2023-11-22 09:02:17
>>polite+(OP)
The available public information is enough to reach this conclusion.
12. Satam+A5[view] [source] 2023-11-22 09:02:58
>>polite+(OP)
I'm sure most of them are extremely intelligent but the situation showed they are easily persuaded, even if principled. They will have to overcome many first-of-a-kind challenges on their quest to AGI but look at how quickly everyone got pulled into a feel-good kumbaya sing-along.

Think of that what you wish. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.

replies(6): >>wiz21c+16 >>abm53+86 >>ah765+97 >>giggle+W7 >>ssnist+p9 >>gexla+ym
◧◩
13. wiz21c+16[view] [source] [discussion] 2023-11-22 09:08:57
>>Satam+A5
> feel-good kumbaya sing-along

Learning English on HN is so fun!

◧◩
14. abm53+86[view] [source] [discussion] 2023-11-22 09:09:43
>>Satam+A5
I think another factor is that they had very limited time. It was clear they needed to pick a side and build momentum quickly.

They couldn’t sit back and dwell on it for a few days because then the decision (i.e. the status quo) would have been made for them.

replies(1): >>Satam+X7
◧◩◪
15. g-b-r+K6[view] [source] [discussion] 2023-11-22 09:15:14
>>tucnak+i3
They were supposed to have higher values than money
replies(4): >>lovely+j8 >>plasma+F9 >>logicc+kj >>Zpalmt+091
◧◩
16. ah765+97[view] [source] [discussion] 2023-11-22 09:19:44
>>Satam+A5
I thought so originally too, but when I thought about their perspective, I realized I would probably sign too. Imagine that your CEO and leadership has led your company to the top of the world, and you're about to get a big payday. Suddenly, without any real explanation, the board kicks out the CEO. The leadership almost all supports the CEO and signs the pledge, including your manager. What would you do at that point? Personally, I'd sign just so I didn't stand out, and stay on good terms with leadership.

The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.

replies(1): >>Satam+Pa
◧◩◪◨
17. ldjb+J7[view] [source] [discussion] 2023-11-22 09:24:01
>>outsom+O3
Almost all employees did not resign in protest, but they did _threaten_ to resign.

https://www.theverge.com/2023/11/20/23968988/openai-employee...

◧◩
18. giggle+W7[view] [source] [discussion] 2023-11-22 09:26:05
>>Satam+A5
> situation showed they are “easily persuaded”

How do you know?

> look at how “quickly” everyone got pulled into

Again, how do you know?

◧◩◪
19. Satam+X7[view] [source] [discussion] 2023-11-22 09:26:12
>>abm53+86
Great point. Either way, when this all started it might have all been too late.

The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.

◧◩◪◨
20. lovely+j8[view] [source] [discussion] 2023-11-22 09:28:54
>>g-b-r+K6
>They were supposed to have higher values than money

which are? …

replies(3): >>kortil+w8 >>jampek+Cc >>brazzy+Ld
◧◩◪
21. kortil+t8[view] [source] [discussion] 2023-11-22 09:29:31
>>karmas+32
> OpenAI has some of the smartest human beings on this planet

Being an expert in one particular field (AI) does not mean you are good at critical thinking or thinking about strategic corporate politics.

Deep experts are some of the easier con targets because they suffer from an internal version of “appealing to false authority”.

replies(4): >>alsodu+1a >>Wytwww+8g >>mrangl+7z >>rewmie+xB
◧◩◪◨⬒
22. kortil+w8[view] [source] [discussion] 2023-11-22 09:30:15
>>lovely+j8
Ethics presumably
◧◩◪
23. ah765+y8[view] [source] [discussion] 2023-11-22 09:30:24
>>siva7+G1
It is a correct statement, not really "borderline narcissistic". The board's mission is to help humanity develop safe beneficial AGI. If the board thinks that the company is hindering this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.

Of course, the employees want the company to continue, and weren't told much at this point so it is understandable that they didn't like the statement.

replies(2): >>siva7+Wc >>qwytw+Sg
◧◩
24. ssnist+p9[view] [source] [discussion] 2023-11-22 09:36:56
>>Satam+A5
Persuaded by whom? This whole saga has been opaque to pretty much everyone outside the handful of individuals directly negotiating with each other. This never was about a battle for OpenAI's mission or else the share of employees siding with Sam wouldn't have been that high.
replies(1): >>Ludwig+zn
◧◩◪◨
25. plasma+F9[view] [source] [discussion] 2023-11-22 09:39:15
>>g-b-r+K6
I don't understand how, with the dearth of information we currently have, anyone can see this as "higher values" vs "money".

No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.

replies(1): >>Ajedi3+d41
◧◩◪◨
26. alsodu+1a[view] [source] [discussion] 2023-11-22 09:42:11
>>kortil+t8
I hate these comments that portray every expert/scientist as if they're just good at one thing and aren't particularly great at critical thinking/corporate politics.

Heck, there are 700 of them. All different humans, good at some things, bad at others. But they are smart. And of course a good chunk of them would be good at corporate politics too.

replies(3): >>_djo_+fb >>TheOth+wc >>mrangl+vz
◧◩◪◨
27. lodovi+5a[view] [source] [discussion] 2023-11-22 09:42:32
>>stingr+c4
She probably wants both companies to be successful. Board members are not super villains.
replies(1): >>siva7+ni
◧◩◪
28. Satam+Pa[view] [source] [discussion] 2023-11-22 09:48:13
>>ah765+97
The point of no return for the company might have been crossed way before the employees were forced to choose sides. Choose Sam's side and the company lives but only as a bittersweet reminder of its founding principles. Choose the board's side and you might be dooming the company to die an even faster death.

But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.

replies(1): >>vinay_+2j
◧◩◪◨⬒
29. _djo_+fb[view] [source] [discussion] 2023-11-22 09:53:32
>>alsodu+1a
I don't think the argument was that none of them are good at that, just that it's a mistake to assume that because they're all very smart in this particular field, they're great in another.
replies(1): >>karmas+Ab
◧◩◪◨⬒⬓
30. karmas+Ab[view] [source] [discussion] 2023-11-22 09:57:54
>>_djo_+fb
I don't think critical thinking can be defined as joining the minority party.
replies(3): >>Frustr+uo >>_djo_+vv >>kortil+qga
31. murbar+hc[view] [source] 2023-11-22 10:02:49
>>polite+(OP)
If 95% of people voted in favour of apple pie, I'd become a bit suspicious of apple pie.
replies(3): >>achron+rm >>eddtri+ip >>iowemo+qv
◧◩◪◨⬒
32. TheOth+wc[view] [source] [discussion] 2023-11-22 10:04:48
>>alsodu+1a
Smart is not a one-dimensional variable. And critical thinking != corporate politics.

Stupidity is defined by self-harming actions and beliefs, not by low IQ.

You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.

replies(3): >>op00to+we >>brigan+Og >>ameist+ew
◧◩◪◨⬒
33. jampek+Cc[view] [source] [discussion] 2023-11-22 10:05:26
>>lovely+j8
Perhaps something like "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."
◧◩◪◨
34. siva7+Wc[view] [source] [discussion] 2023-11-22 10:08:37
>>ah765+y8
I can't interpret from the charter that the board has the authorisation to destroy the company under the current circumstances:

> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project

That wasn't the case. So it may not be so far-fetched to call her actions borderline, as it is also very easy to hide personal motives behind altruistic ones.

replies(1): >>ah765+Ae
◧◩◪◨⬒
35. brazzy+Ld[view] [source] [discussion] 2023-11-22 10:13:05
>>lovely+j8
https://openai.com/charter
36. kitsun+ve[view] [source] 2023-11-22 10:19:53
>>polite+(OP)
OpenAI Inc.'s mission in their filings:

"OpenAI's goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. We're trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."

replies(7): >>grafta+Yf >>vaxman+fo >>coldte+ho >>bottle+Bp >>rvba+Fp >>mrangl+yy >>blitza+sA
◧◩◪◨⬒⬓
37. op00to+we[view] [source] [discussion] 2023-11-22 10:20:03
>>TheOth+wc
Stupidity is defined as “having or showing a great lack of intelligence or common sense”. You can be extremely smart and still make up your own definitions for words.
◧◩◪◨⬒
38. ah765+Ae[view] [source] [discussion] 2023-11-22 10:20:34
>>siva7+Wc
The more relevant part is probably "OpenAI’s mission is to ensure that AGI ... benefits all of humanity".

The statement "it would be consistent with the company mission to destroy the company" is correct. The word "would be" rather than "is" implies some condition, it doesn't have to apply to the current circumstances.

A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.

◧◩
39. grafta+Yf[view] [source] [discussion] 2023-11-22 10:35:58
>>kitsun+ve
People got burned on “don’t be evil” once and so far OpenAI’s vision looks like a bunch of marketing superlatives when compared to their track record.
replies(3): >>phero_+Cn >>nmfish+eq >>Cheeze+fl1
◧◩◪◨
40. Wytwww+8g[view] [source] [discussion] 2023-11-22 10:38:05
>>kortil+t8
> not mean you are good at critical thinking or thinking about strategic corporate politics

Perhaps. Yet this time they somehow managed to make the seemingly right decisions (from their perspective) despite that.

Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics", yet they somehow managed to make some horrible decisions.

◧◩
41. Wytwww+sg[view] [source] [discussion] 2023-11-22 10:40:38
>>dimask+x
> They act firstmost as investors rather than as employees on this.

That's not at all obvious; the opposite seems to be the case. They chose to risk having to move to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly, it wouldn't be worth much in the end with no one left to do the actual work).

◧◩◪◨⬒⬓
42. brigan+Og[view] [source] [discussion] 2023-11-22 10:43:53
>>TheOth+wc
I agree. It's better to separate intellect from intelligence instead of conflating them as they usually are. The latter is about making good decisions, which intellect can help with but isn't the only factor. We know this because there are plenty of examples of people who aren't considered shining intellects who can make good choices (certainly in particular contexts) and plenty of high IQ people who make questionable choices.
replies(1): >>august+ir
◧◩◪◨
43. qwytw+Sg[view] [source] [discussion] 2023-11-22 10:44:04
>>ah765+y8
> this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.

So instead of having to compromise to some extent but still having a say in what happens next, you burn the company, at best delaying the whole thing by 6-12 months until someone else does it? Well, at least your hands are clean, but that's about it...

◧◩◪◨
44. muraka+Zh[view] [source] [discussion] 2023-11-22 10:53:49
>>stingr+c4
Wait what? She invested in a competitor? Do you have a source?
replies(1): >>ottero+Bn
◧◩◪◨⬒
45. siva7+ni[view] [source] [discussion] 2023-11-22 10:56:09
>>lodovi+5a
I agree that we should usually assume good faith. Still, if a member knows she will lose her board seat soon and makes such an implicit statement to the leadership team, there is reason to believe that she doesn't want both companies to be successful, or at least not one of them.
◧◩◪◨
46. vinay_+2j[view] [source] [discussion] 2023-11-22 11:03:21
>>Satam+Pa
What happened over the weekend is a death and rebirth of the board and the leadership structure, which will definitely ripple throughout the company in the coming days. It just doesn't align perfectly with how you want it to happen.
◧◩◪◨
47. logicc+kj[view] [source] [discussion] 2023-11-22 11:05:34
>>g-b-r+K6
"higher values" like trying to stop computers from saying the n-word?
replies(1): >>hutzli+2k
◧◩◪◨⬒
48. hutzli+2k[view] [source] [discussion] 2023-11-22 11:13:15
>>logicc+kj
For some that is important, but more people consider the prevention of an AI monopoly to be more important here. See the original charter and the status quo with Microsoft taking it all.
◧◩◪◨
49. doktri+mk[view] [source] [discussion] 2023-11-22 11:15:54
>>stingr+c4
> obviously it’s also in her best interest to see OpenAI destroyed

Do you feel the same way about Reed Hastings serving on Facebook's BoD, or Eric Schmidt on Apple's? How about Larry Ellison at Tesla?

These are just the lowest-hanging fruit, i.e. literal chief executives and founders. If we extend the criteria for ethical compromise to include every board member's investment portfolio, I imagine quite a few more "obvious" conflicts will emerge.

replies(1): >>svnt+Um
50. yodsan+el[view] [source] 2023-11-22 11:24:27
>>polite+(OP)
> different set of information

and different incentives.

◧◩◪◨
51. Philpa+Bl[view] [source] [discussion] 2023-11-22 11:26:58
>>stingr+c4
Uhhh, are you sure about that? She wrote a paper that praised Anthropic’s approach to safety, but as far as I’m aware she’s not invested in them.

Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?

52. achron+Wl[view] [source] 2023-11-22 11:29:14
>>polite+(OP)
No, if they had vastly different information, and if it was on the right side of their own stated purpose & values, they would have behaved very differently. This kind of equivocation hinders the way more important questions such as: just what the heck is Larry Summers doing on that board?
replies(9): >>vasco+6n >>dontup+Io >>cyanyd+6p >>hobofa+dp >>shmatt+6q >>383210+Br >>T-A+8w >>mrangl+hw >>Burnin+jE
◧◩
53. achron+rm[view] [source] [discussion] 2023-11-22 11:33:39
>>murbar+hc
Or you'd want to thoroughly investigate this so-called voting.

Or that said apple pie was essential to their survival.

◧◩
54. gexla+ym[view] [source] [discussion] 2023-11-22 11:34:37
>>Satam+A5
My understanding is that the non-profit created the for-profit so that they could offer compensation which would be typical for SV start-ups. Then the board essentially broke the for-profit by removing the SV CEO and putting the "payday" which would have valued the company at 80 billion in jeopardy. The two sides weren't aligned, and they need to decide which company they want to be. Maybe they should have removed Sam before MS came in with their big investment. Or maybe they want to have their cake and eat it too.
◧◩◪◨⬒
55. svnt+Um[view] [source] [discussion] 2023-11-22 11:38:03
>>doktri+mk
How does Netflix compete with Facebook?

This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.

https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...

Oracle is going to get into EVs?

You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.

replies(1): >>doktri+hp
◧◩◪
56. Ludwig+5n[view] [source] [discussion] 2023-11-22 11:39:53
>>siva7+G1
The only OpenAI employees who resigned in protest are the employees that were against Sam Altman. That’s how Anthropic appeared.
replies(1): >>sander+so
◧◩
57. vasco+6n[view] [source] [discussion] 2023-11-22 11:39:53
>>achron+Wl
I think this is a good question. One should look at what actually happened in practice. What was the previous board, and what is the current board? For the leadership team, what are the changes? Additionally, was information revealed about who calls the shots that can inform who will drive future decisions? Anything else about the in-betweens is, to me, smoke and mirrors.
◧◩◪
58. Ludwig+zn[view] [source] [discussion] 2023-11-22 11:44:50
>>ssnist+p9
Why not? Maybe the board was just too late to the party. Maybe the employees that wouldn’t side with Sam have already left[1], and the board was just too late to realise that. And maybe all the employees who are still at OpenAI mostly care about their equity-like instruments.

[1] https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e

◧◩◪◨⬒
59. ottero+Bn[view] [source] [discussion] 2023-11-22 11:45:14
>>muraka+Zh
One source might be DuckDuckGo. It's a privacy-focused alternative to Google, which is great when researching "unusual" topics.
replies(3): >>muraka+wo >>dontup+np >>free65+Ws
◧◩◪
60. phero_+Cn[view] [source] [discussion] 2023-11-22 11:45:19
>>grafta+Yf
At this point I tend to believe that big company slogans mean the opposite of what the words say.

Like I would become immediately suspicious if food packaging had “real food” written on it.

replies(1): >>timacl+4F
◧◩
61. vaxman+fo[view] [source] [discussion] 2023-11-22 11:49:59
>>kitsun+ve
It could be hard to do that while paying a penalty to FTB and IRS for what they’re suspected to have done (in allowing a for-profit subsidiary to influence an NPO parent) or dealing with SEC and the state courts over any fiduciary breach allegations related to the published stories. [ Nadella is an OG genius because his company is now shielded from all of that drama as it plays out, no matter the outcome. He can take the time to plan for a soft landing at MS for any OpenAI workers (if/when they need it) and/or to begin duplicating their efforts “just in case.” Heard coming from the HQ parking lot in Redmond https://youtu.be/GGXzlRoNtHU ]

Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/

replies(1): >>erosen+GT
◧◩
62. coldte+ho[view] [source] [discussion] 2023-11-22 11:50:11
>>kitsun+ve
Those mission statements are a dime a dozen. A junkie's promise has more value.
replies(1): >>im3w1l+j51
◧◩◪◨
63. sander+so[view] [source] [discussion] 2023-11-22 11:51:38
>>Ludwig+5n
And it seems like they were right that the for-profit part of the company had become out of control, in the literal sense that we've seen through this episode that it could not be controlled.
replies(1): >>cyanyd+Ep
◧◩◪◨⬒⬓⬔
64. Frustr+uo[view] [source] [discussion] 2023-11-22 11:51:51
>>karmas+Ab
Can't critical thinking also include : "I'm about to get a 10mil pay day, hmmm, this is crazy situation, let me think critically how to ride this out and still get the 10mil so my kids can go to college and I don't have to work until I'm 75".
replies(2): >>golden+yq >>belter+Hs
◧◩◪◨⬒⬓
65. muraka+wo[view] [source] [discussion] 2023-11-22 11:51:59
>>ottero+Bn
I couldn't find any source on her investing in any AI companies. If it's true (and not hidden), I'm really surprised that major news publications aren't covering it.
◧◩
66. dontup+Io[view] [source] [discussion] 2023-11-22 11:53:51
>>achron+Wl
>just what the heck is Larry Summers doing on that board?

1. Did you really think the feds wouldn't be involved?

AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.

2. Considering how much impact it will have on the entire economy by being able to put many white collar workers out of work, a seasoned economist makes sense.

The East Coast runs the joint. The West Coast just does the public-facing tech stuff and takes the heat from the public.

replies(2): >>chucke+it >>jddj+qt
◧◩
67. cyanyd+6p[view] [source] [discussion] 2023-11-22 11:56:35
>>achron+Wl
I assume Larry Summers is there to ensure the proper bi-partisan choices are made by what's clearly now a _business_ product and not a product for humanity.

Which is utterly scary.

◧◩
68. hobofa+dp[view] [source] [discussion] 2023-11-22 11:57:05
>>achron+Wl
> of their own stated purpose & values

You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and that I think nobody has taken seriously for years.

From everything I can tell, the people working at OpenAI have always cared more about advancing the space and building great products than "openness" and "safe AGI". The official values of OpenAI were never "their own".

replies(2): >>Wander+Fr >>bnralt+ex
◧◩◪◨⬒⬓
69. doktri+hp[view] [source] [discussion] 2023-11-22 11:57:40
>>svnt+Um
> How does Netflix compete with Facebook?

By definition the attention economy dictates that time spent one place can’t be spent in another. Do you also feel as though Twitch doesn’t compete with Facebook simply because they’re not identical businesses? That’s not how it works.

But you don’t have to just take my word for it :

> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”

https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...

> This is what happened with Eric Schmidt on Apple’s board

Yes, after 3 years. A tenure longer than the OAI board members in question, so frankly the point stands.

replies(2): >>Jumpin+Er >>svnt+3t1
◧◩
70. eddtri+ip[view] [source] [discussion] 2023-11-22 11:57:52
>>murbar+hc
I think it makes sense

Sign the letter and support Sam so you have a place at Microsoft if OpenAI tanks and have a place at OpenAI if it continues under Sam, or don’t sign and potentially lose your role at OpenAI if Sam stays and lose a bunch of money if Sam leaves and OpenAI fails.

There are no perks to not signing.

replies(1): >>_heimd+kt
◧◩◪◨⬒⬓
71. dontup+np[view] [source] [discussion] 2023-11-22 11:58:17
>>ottero+Bn
>which is great when researching "unusual" topics.

Yandex is for Porn. What is DDG for?

◧◩◪
72. cyanyd+op[view] [source] [discussion] 2023-11-22 11:58:18
>>karmas+32
oh gosh, no, no no no.

Doing AI for ChatGPT just means you know a single model really well.

Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.

It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.

◧◩
73. bottle+Bp[view] [source] [discussion] 2023-11-22 11:59:28
>>kitsun+ve
If that were true they’d be a not-for-profit
◧◩◪◨⬒
74. cyanyd+Ep[view] [source] [discussion] 2023-11-22 11:59:33
>>sander+so
And the evidence now is that OpenAI is a business-to-business product, not an attempt to keep AI doing anything but satiating whatever Microsoft wants.
◧◩
75. rvba+Fp[view] [source] [discussion] 2023-11-22 11:59:40
>>kitsun+ve
Most employees of any organization don't give a fuck about the vision or mission (often they don't even know it) and are there just for the money.
replies(2): >>j_maff+5w >>Doughn+Vz
◧◩
76. shmatt+6q[view] [source] [discussion] 2023-11-22 12:02:57
>>achron+Wl
He’s a white male replacing a female board member. Which is probably what they wanted all along
replies(1): >>dbspin+zq
◧◩◪
77. nmfish+eq[view] [source] [discussion] 2023-11-22 12:03:37
>>grafta+Yf
At least Google lasted a good 10 years or so before succumbing to the vagaries of the public stock market. OpenAI lasted, what, 3 years?

Not to mention Google never paraded itself around as a non-profit acting in the best interests of humanity.

replies(3): >>roland+Dv >>bad_us+jy >>deckar+Az1
◧◩◪◨⬒⬓⬔⧯
78. golden+yq[view] [source] [discussion] 2023-11-22 12:05:45
>>Frustr+uo
Anyone with enough critical thought who understands the true answer to the hard problem of consciousness (consciousness is the universe evaluating if statements) and where the universe is heading physically (nested complexity) should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.
replies(3): >>WJW+yr >>suodua+ED >>Zpalmt+z81
◧◩◪
79. dbspin+zq[view] [source] [discussion] 2023-11-22 12:05:52
>>shmatt+6q
Yes, the patriarchy collectively breathed a sigh of relief as one of our agents was inserted to prevent any threat from the other side.
◧◩◪◨⬒⬓⬔
80. august+ir[view] [source] [discussion] 2023-11-22 12:11:54
>>brigan+Og
https://liamchingliu.wordpress.com/2012/06/25/intellectuals-...
◧◩◪◨⬒⬓⬔⧯▣
81. WJW+yr[view] [source] [discussion] 2023-11-22 12:14:18
>>golden+yq
I would expect that actual AI researchers understand that you cannot break the laws of physics just by thinking better. Especially not with ever better LLMs, which are fundamentally in the business of regurgitating things we already know in different combinations rather than inventing new things.

You seem to be equating AI with magic, which it is very much not.

replies(1): >>golden+dX
◧◩
82. 383210+Br[view] [source] [discussion] 2023-11-22 12:14:31
>>achron+Wl
> just what the heck is Larry Summers doing on that board?

Probably precisely what Condoleezza Rice was doing on Dropbox's board. Or that board filled with national security state heavyweights at that "visionary" and her blood-testing thingie.

https://www.wired.com/2014/04/dropbox-rice-controversy/

https://en.wikipedia.org/wiki/Theranos#Management

In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m

“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)

https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...

◧◩◪◨⬒⬓⬔
83. Jumpin+Er[view] [source] [discussion] 2023-11-22 12:14:43
>>doktri+hp
> > By definition the attention economy dictates that time spent one place can’t be spent in another

Using that definition, even the local go-kart rental place or the local jet-ski rental place competes with Facebook.

If you want to use that definition, you might want to also add a criterion for the minimum size of the company.

replies(1): >>doktri+su
◧◩◪
84. Wander+Fr[view] [source] [discussion] 2023-11-22 12:14:53
>>hobofa+dp
“never” is a strong word. I believe in the RL era of OpenAI they were quite aligned with the mission/values
85. __loam+Ir[view] [source] 2023-11-22 12:15:15
>>polite+(OP)
They have a different set of incentives. If I were them I would have done the same thing, Altman is going to make them all fucking rich. Not sure if that will benefit humanity though.
◧◩◪◨⬒⬓⬔⧯
86. belter+Hs[view] [source] [discussion] 2023-11-22 12:21:30
>>Frustr+uo
That is 3D Chess. 5D Chess says those mil will be worthless when the AGI takes over...
replies(1): >>kaibee+cJ
◧◩◪◨⬒⬓
87. free65+Ws[view] [source] [discussion] 2023-11-22 12:23:08
>>ottero+Bn
DDG sells your information to Microsoft, there is no such thing as privacy when $$$ are involved
◧◩◪
88. chucke+it[view] [source] [discussion] 2023-11-22 12:26:03
>>dontup+Io
Yeah, I think Larry is there because ChatGPT has become too important for the USA.
◧◩◪
89. _heimd+kt[view] [source] [discussion] 2023-11-22 12:26:14
>>eddtri+ip
There are perks to not signing for anyone that actually worked at OpenAI for the mission rather than the money.
replies(1): >>Wesley+0h1
◧◩◪
90. pooya1+ot[view] [source] [discussion] 2023-11-22 12:26:32
>>tucnak+i3
> There's no idiots at OpenAI.

Most certainly there are idiots at OpenAI.

replies(1): >>infamo+q41
◧◩◪
91. jddj+qt[view] [source] [discussion] 2023-11-22 12:26:44
>>dontup+Io
The timing of the semiconductor export controls being another datapoint here in support of #1.

Not that it's really in need of additional evidence.

◧◩◪◨⬒⬓⬔⧯
92. doktri+su[view] [source] [discussion] 2023-11-22 12:35:11
>>Jumpin+Er
> Using that definition even the local gokart renting place or the local jetski renting place competes with Facebook

Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jetskis or gokarts.

> If you want to use that definition you might want to also add a criteria for minimum size of the company.

Your feedback is noted.

Do we disagree on whether or not the two FAANG companies in question are in competition with each other?

replies(2): >>Jumpin+VC >>dpkirc+EX
◧◩
93. iowemo+qv[view] [source] [discussion] 2023-11-22 12:42:25
>>murbar+hc
Perhaps a better example would be: 95% of people voted in favour of reinstating apple pie to the menu after not receiving a coherent explanation for removing apple pie from the menu.
◧◩◪◨⬒⬓⬔
94. _djo_+vv[view] [source] [discussion] 2023-11-22 12:43:17
>>karmas+Ab
Sure, I agree. I was referencing only the idea that being smart in one domain automatically means being a good critical thinker in all domains.

I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.

◧◩◪◨
95. roland+Dv[view] [source] [discussion] 2023-11-22 12:44:48
>>nmfish+eq
I would classify their mission "to organize the world's information and make it universally accessible and useful" as at least lightly parading itself as acting in the best interests of humanity.
◧◩◪
96. j_maff+5w[view] [source] [discussion] 2023-11-22 12:48:12
>>rvba+Fp
Doesn't mean we shouldn't hold an organization accountable for their publicized mission statement. Especially its board and directors.
◧◩
97. T-A+8w[view] [source] [discussion] 2023-11-22 12:48:35
>>achron+Wl
> what the heck is Larry Summers doing on that board?

The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.

And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.

◧◩◪◨⬒⬓
98. ameist+ew[view] [source] [discussion] 2023-11-22 12:48:56
>>TheOth+wc
Stupidity is not defined by self-harming actions and beliefs - not sure where you're getting that from.

Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.

replies(1): >>suodua+UD
◧◩
99. mrangl+hw[view] [source] [discussion] 2023-11-22 12:49:25
>>achron+Wl
Said purpose and values are nothing more than an attempted control lever for dark actors, very obviously. People / factions that gain handholds, which otherwise wouldn't exist, and exert control through social pressure nonsense that they don't believe in themselves. As can be extracted from modern street-brawl politics, which utilizes the same terminology to the same effect. And as can be inferred would be the case given OAI's novel and convoluted corporate structure as referenced to the importance of its tech.

We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.

100. JCM9+yw[view] [source] 2023-11-22 12:52:36
>>polite+(OP)
When a politician wins with 98% of the vote do you A) think that person must be an incredible leader, or B) think something else is going on?

Only time will tell if this was a good or bad outcome, but for now the damage is done and OpenAI has a lot of trust rebuilding to do to shake off the reputation that it now has after this circus.

replies(6): >>bad_us+cx >>driver+Rx >>roflc0+jD >>heyjam+4E >>shzhdb+KG >>JVIDEL+YG
◧◩
101. bad_us+cx[view] [source] [discussion] 2023-11-22 12:56:51
>>JCM9+yw
The environment in a small to medium company is much more homogenous than the general population.

When you see 95%+ consensus from 800 employees, that doesn't suggest tanks and police dogs intimidating people at the voting booth.

replies(4): >>mstade+Lz >>kcplat+RA >>plorg+TC >>from-n+vL
◧◩◪
102. bnralt+ex[view] [source] [discussion] 2023-11-22 12:57:06
>>hobofa+dp
> From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI".

Board member Helen Toner strongly criticized OpenAI for publicly releasing its GPT when it did and not keeping it closed for longer. That would seem to be working against openness for many people, but others would see it as working towards safe AI.

The thing is, people have radically different ideas about what openness and safety mean. There's a lot of talk about whether or not OpenAI stuck with its stated purpose, but there's no consensus on what that purpose actually means in practice.

◧◩
103. driver+Rx[view] [source] [discussion] 2023-11-22 13:00:17
>>JCM9+yw
Originally, 65% had signed (505 of 770).
◧◩◪◨
104. bad_us+jy[view] [source] [discussion] 2023-11-22 13:03:28
>>nmfish+eq
> Google never paraded itself around as a non-profit acting in the best interests of humanity.

Just throwing this out there, but maybe … non-profits shouldn't be considered holier-than-thou, just because they are “non-profits”.

replies(1): >>Turing+kH
◧◩
105. mrangl+yy[view] [source] [discussion] 2023-11-22 13:05:22
>>kitsun+ve
What is socially defined as beneficial-to-humanity is functionally mandated by the MSM and therefore capricious, at the least. With that in mind, a translation:

"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".

◧◩◪◨
106. mrangl+7z[view] [source] [discussion] 2023-11-22 13:10:02
>>kortil+t8
Disagreeing with employee actions doesn't mean that you are correct and they failed to think well. Weighing their collective probable profiles, including as insiders, against yours, it would be irrational to conclude that they were in the wrong.
replies(1): >>rewmie+YB
◧◩◪◨⬒
107. mrangl+vz[view] [source] [discussion] 2023-11-22 13:12:15
>>alsodu+1a
But pronouncing that 700 people are bad at critical thinking is convenient when you disagree with them on the desired outcome and yet can't hope to argue the points.
◧◩◪
108. mstade+Lz[view] [source] [discussion] 2023-11-22 13:13:25
>>bad_us+cx
Not that I have any insight into any of the events at OpenAI, but would just like to point out there are several other reasons why so many people would sign, including but not limited to:

- peer pressure

- group think

- financial motives

- fear of the unknown (Sam being a known quantity)

- etc.

So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.

If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.

[1]: https://www.imdb.com/title/tt2575988/

replies(5): >>FabHK+gH >>phpist+2J >>framap+kK >>bad_us+BP >>ghaff+vp1
◧◩◪
109. Doughn+Vz[view] [source] [discussion] 2023-11-22 13:14:39
>>rvba+Fp
Not so true when working for an organisation that is ostensibly a non-profit. People working for a non-profit are generally taking a significant hit to their earnings compared to doing similar work in a for-profit, outside of the top management of huge global charities.

The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.

Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.

replies(2): >>erosen+cN >>rvba+4S
110. kiba+3A[view] [source] 2023-11-22 13:15:11
>>polite+(OP)
They could just reach a different conclusion based on their values. OpenAI doesn't seem to be remotely serious about preventing the misuse of AI.
◧◩
111. blitza+sA[view] [source] [discussion] 2023-11-22 13:17:54
>>kitsun+ve
> most likely to benefit humanity as a whole

Giving me a billion $ would be a net benefit to humanity as a whole

replies(1): >>jraph+MG
◧◩◪
112. kcplat+RA[view] [source] [discussion] 2023-11-22 13:20:45
>>bad_us+cx
Personally I have never seen that level of singular agreement in any group of people that large. Especially to the level of sacrifice they were willing to take for the cause. You maybe see that level of devotion to a leader in churches or cults, but in any other group? You can barely get 3 people to agree on a restaurant for lunch.

I am not saying something nefarious forced it, but it’s certainly unusual in my experience and this causes me to be skeptical of why.

replies(5): >>psycho+uC >>panrag+eD >>lxgr+pD >>dahart+i31 >>cellar+Av1
◧◩◪◨
113. rewmie+xB[view] [source] [discussion] 2023-11-22 13:25:00
>>kortil+t8
> Being an expert in one particular field (AI) not mean you are good at critical thinking or thinking about strategic corporate politics.

That's not the bar you are arguing against.

You are arguing against how you have better information, better insight, better judgement, and are able to make better decisions than the experts in the field who are hired by the leading organization to work directly on the subject matter, and who have direct, first-person account on the inner workings of the organization.

We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.

replies(1): >>kortil+Yga
◧◩◪◨⬒
114. rewmie+YB[view] [source] [discussion] 2023-11-22 13:27:34
>>mrangl+7z
> Disagreeing with employee actions doesn't mean that you are correct and they failed to think well.

You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.

◧◩◪◨
115. psycho+uC[view] [source] [discussion] 2023-11-22 13:31:21
>>kcplat+RA
>You can barely get 3 people to agree on a restaurant for lunch.

I was about to state that a single human is enough to see disagreements arise, but this doesn't reach full consensus in my mind.

replies(1): >>kcplat+bG
◧◩◪
116. plorg+TC[view] [source] [discussion] 2023-11-22 13:34:19
>>bad_us+cx
That sounds like a cult more than a business. I work at a small company (~100 people), and while we are more or less aligned on what we're doing, you are not going to get close to that consensus on anything. Same for our sister company, which is about the same size as OpenAI.
replies(2): >>chiefa+mF >>docmar+BJ
◧◩◪◨⬒⬓⬔⧯▣
117. Jumpin+VC[view] [source] [discussion] 2023-11-22 13:34:26
>>doktri+su
> > Do we disagree

I think yes, because you pay for Netflix out of pocket, whereas Facebook is a free service.

I believe Facebook vs Hulu or regular TV is more of a competition in the attention economy, because when the commercial break comes up you start scrolling your social media on your phone, and every 10 posts or whatever you stumble into the ads placed on there. So Facebook ads are seen and convert, whereas regular TV and Hulu ads aren't seen and don't convert.

replies(1): >>doktri+Nh1
◧◩◪◨
118. panrag+eD[view] [source] [discussion] 2023-11-22 13:35:27
>>kcplat+RA
>Especially to the level of sacrifice they were willing to take for the cause.

We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.

◧◩
119. roflc0+jD[view] [source] [discussion] 2023-11-22 13:35:41
>>JCM9+yw
The simple answer here is that the board's actions stood to incinerate millions of dollars of wealth for most of these employees, and they were up in arms.

They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.

replies(2): >>whywhy+3F >>citygu+kV
◧◩◪◨
120. lxgr+pD[view] [source] [discussion] 2023-11-22 13:36:47
>>kcplat+RA
Approval rates of >90% are quite common within political parties, to the point where anything less can be seen as an embarrassment to the incumbent head of party.
replies(1): >>kcplat+OF
◧◩◪◨⬒⬓⬔⧯▣
121. suodua+ED[view] [source] [discussion] 2023-11-22 13:38:21
>>golden+yq
OTOH, there's a very good argument to be made that if you recognize that fact, your short-term priority should be to amass a lot of secular power so you can align society to that reality. So the best action to take might be no different.
replies(1): >>golden+gV
◧◩◪◨⬒⬓⬔
122. suodua+UD[view] [source] [discussion] 2023-11-22 13:39:28
>>ameist+ew
Probably from law 3: https://principia-scientific.com/the-5-basic-laws-of-human-s...

But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.

It's a very influential essay.

replies(1): >>ameist+KU
◧◩
123. heyjam+4E[view] [source] [discussion] 2023-11-22 13:40:05
>>JCM9+yw
That argument only works with a “population”, since almost nobody gets to choose which set of politicians they vote for.

In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.

◧◩
124. Burnin+jE[view] [source] [discussion] 2023-11-22 13:42:02
>>achron+Wl
Larry Summers is everywhere and does everything.
replies(1): >>Turing+OG
◧◩◪
125. whywhy+3F[view] [source] [discussion] 2023-11-22 13:45:17
>>roflc0+jD
Wild that the employees will go back under a new board and the same structure; the first priority should be removing the structure that allowed a small group of people to destroy things over what may have been very petty reasons.
replies(1): >>CydeWe+3I
◧◩◪◨
126. timacl+4F[view] [source] [discussion] 2023-11-22 13:45:19
>>phero_+Cn
Unless a “mission statement” is somehow legally binding, it will never mean anything that matters.

It's always written by PR people with marketing in mind.

◧◩◪◨
127. chiefa+mF[view] [source] [discussion] 2023-11-22 13:47:12
>>plorg+TC
It also sounds like a very narrow hiring profile. That is, favoring the like-minded and assimilation over free thinking and philosophical diversity. They might give off the appearance of "diversity" on the outside - which is great for PR - but under the hood it's more monocultural. Maybe?
replies(2): >>phpist+EJ >>docmar+8N
◧◩◪◨⬒
128. kcplat+OF[view] [source] [discussion] 2023-11-22 13:49:24
>>lxgr+pD
There is a big difference between “I agree with this…” when a telephone poll caller reaches you and “I am willing to leave my livelihood because my company CEO got fired”
replies(3): >>from-n+MJ >>lxgr+0K >>zerbin+XP
◧◩◪◨⬒
129. kcplat+bG[view] [source] [discussion] 2023-11-22 13:51:08
>>psycho+uC
I was conflicted about originally posting that sentence. I waffled back and forth between, 2, 3, 5…

Three was the compromise I made with myself.

◧◩
130. shzhdb+KG[view] [source] [discussion] 2023-11-22 13:53:46
>>JCM9+yw
> for now the damage is done and OpenAI has a lot of trust rebuilding to do

Nobody cares, except shareholders.

◧◩◪
131. jraph+MG[view] [source] [discussion] 2023-11-22 13:53:48
>>blitza+sA
Depends on what you do (and stop doing) with it :-)
◧◩◪
132. Turing+OG[view] [source] [discussion] 2023-11-22 13:53:54
>>Burnin+jE
At the same time?
replies(1): >>marcos+v61
◧◩
133. JVIDEL+YG[view] [source] [discussion] 2023-11-22 13:54:23
>>JCM9+yw
Odds are that if he left, their compensation situation might have changed for the worse, if not led to downsizing, and that on the edge of a recession with plenty of competition out there.
◧◩◪◨
134. FabHK+gH[view] [source] [discussion] 2023-11-22 13:55:47
>>mstade+Lz
I'd love another season of Silicon Valley, with some Game Stonk and Bored Apes and ChatGPT and FTX and Elon madness.
replies(1): >>jakder+J11
◧◩◪◨⬒
135. Turing+kH[view] [source] [discussion] 2023-11-22 13:56:11
>>bad_us+jy
Maybe, but their actions should definitely not be oriented toward maximizing their profit.
replies(1): >>bad_us+vM
◧◩◪◨
136. CydeWe+3I[view] [source] [discussion] 2023-11-22 13:59:08
>>whywhy+3F
Well it's a different group of people and that group will now know the consequences of attempting to remove Sam Altman. I don't see this happening again.
replies(1): >>youcan+kQ
◧◩◪◨
137. phpist+2J[view] [source] [discussion] 2023-11-22 14:04:50
>>mstade+Lz
If the opposing letter that was published from "former" employees is correct, there was already huge turnover, and the people who remain liked the environment they were in, and I would assume liked the current leadership, or they would have left.

So clearly the current leadership built a loyal group, which I think is something that should be explored, because groupthink is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of ideas.

If OpenAI is a huge monoculture of thinking then they have bigger problems, most likely.

replies(1): >>bad_us+AQ
◧◩◪◨⬒⬓⬔⧯▣
138. kaibee+cJ[view] [source] [discussion] 2023-11-22 14:05:39
>>belter+Hs
6D Chess is apparently realizing that AGI is not 100% certain and that having 10mm on the run up to AGI is better than not having 10mm on the run up to AGI.
◧◩◪◨
139. docmar+BJ[view] [source] [discussion] 2023-11-22 14:07:16
>>plorg+TC
I think it could be a number of factors:

1. The company has built a culture around not being under control by one single company, Microsoft in this case. Employees may overwhelmingly agree.

2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board hadn't been replaced.

3. Younger folks probably don't look highly at boards in general, because they never get to interact with them. They also sometimes dictate product outcomes that could go against the creative freedoms and autonomy employees are looking for. Boards are also focused on profits, which is a net-good for the company, but threatens the culture of "for the good of humanity" that hooks people.

4. The high success of OpenAI has probably inspired loyalty in its employees, so long as it remains stable, and their perception of what stability is means that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There's no guarantees for the bulk of workers here.

I'm reading into the variables and using intuition to make these guesses, but all to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.

replies(1): >>denton+Se1
◧◩◪◨⬒
140. phpist+EJ[view] [source] [discussion] 2023-11-22 14:07:27
>>chiefa+mF
Superficial "diversity" is all the "diversity" a company needs in the modern era.

Companies do not desire or seek philosophical diversity; they only want superficial, biologically based "diversity" to prove they have the "correct" philosophy about the world.

replies(2): >>docmar+dQ >>chiefa+2S
◧◩◪◨⬒⬓
141. from-n+MJ[view] [source] [discussion] 2023-11-22 14:08:01
>>kcplat+OF
But if 100 employees were like "I'm gonna leave" then your livelihood is in jeopardy. So you join in. It's really easy to see 90% of people jumping overboard when they are all on a sinking ship.
◧◩◪◨⬒⬓
142. lxgr+0K[view] [source] [discussion] 2023-11-22 14:08:44
>>kcplat+OF
I don't mean voter approval, I mean party member approval. That's arguably not that far off from a CEO situation in a way in that it's the opinion of and support for the group's leadership by group members.

Voter approval is actually usually much less unanimous, as far as I can tell.

◧◩◪◨
143. framap+kK[view] [source] [discussion] 2023-11-22 14:09:59
>>mstade+Lz
Exactly; there are multitudes of reasons and very little information so why pick any one of them?
◧◩◪
144. from-n+vL[view] [source] [discussion] 2023-11-22 14:14:35
>>bad_us+cx
Right. They aren't actually voting for Sam Altman. If I'm working at a company and I see as little as 10% of the company jump ship, I think "I'd better get the frik outta here". Especially if I respect the other people who are leaving. This isn't a blind vote. This is a rolling snowball.

I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.

95% doesn't show a large amount of loyalty to Sam; it shows a low amount of loyalty to OpenAI.

So it looks like a VERY normal company.

◧◩◪◨⬒⬓
145. bad_us+vM[view] [source] [discussion] 2023-11-22 14:18:41
>>Turing+kH
What's wrong with profit and wanting to maximize it?

Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.

The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?

replies(2): >>caddem+jW >>saalwe+JW
◧◩◪◨⬒
146. docmar+8N[view] [source] [discussion] 2023-11-22 14:21:05
>>chiefa+mF
I think that most pushes for diversity that we see today are intended to result in monocultures.

DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.

On a similar note, the company has already established certain missions and values that new hires may strongly align with like: "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever and it would be monumentally exciting to play a part in that.

Thus, it's safe to think that most employees who are lucky to have earned a chance at participating would want to preserve that, if they're aligned.

This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.

Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.

◧◩◪◨
147. erosen+cN[view] [source] [discussion] 2023-11-22 14:21:38
>>Doughn+Vz
I don't really think this is true in non-charity work. Half of American hospitals are nonprofit and many of the insurance conglomerates are too, like Kaiser. The executives make plenty of money. Kaiser is a massive nonprofit shell for profitmaking entities owned by physicians or whatever, not all that dissimilar to the OpenAI shell idea. Healthcare worked out this way because it was seen as a good model to have doctors either reporting to a nonprofit or owning their own operations, not reporting to shareholders. That's just tradition though. At this point plenty of healthcare operations are just normal corporations controlled by shareholders.
◧◩◪◨
148. bad_us+BP[view] [source] [discussion] 2023-11-22 14:31:16
>>mstade+Lz
You could say that, except that people in this industry are the most privileged, and their earnings and equity would probably be matched elsewhere.

You say “group think” like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.

And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.

replies(1): >>axus+cV
◧◩◪◨⬒⬓
149. zerbin+XP[view] [source] [discussion] 2023-11-22 14:32:51
>>kcplat+OF
But it’s not changing their livelihood. Msft just gives them the same deal. In a lot of ways, it’s similar to the telepoll - people can just say whatever they want, there won’t be big material consequences
◧◩◪◨⬒⬓
150. docmar+dQ[view] [source] [discussion] 2023-11-22 14:33:59
>>phpist+EJ
Agree. This is the monoculture being adopted in actuality -- a racist crusade against "whiteness", and a coercive mechanism to ensure companies don't overstep their usage of resources (carbon footprint), so as not to threaten the existing titans who may have already abused what was available to them before these intracorporate policies existed.

It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.

Ironically, it snuffs out diversity among companies at a 40k foot level.

replies(1): >>jakder+i51
◧◩◪◨⬒
151. youcan+kQ[view] [source] [discussion] 2023-11-22 14:34:19
>>CydeWe+3I
Most likely, but it is cute how confident you are that humanity will learn its lesson.
replies(1): >>tstrim+Nt1
◧◩◪◨⬒
152. bad_us+AQ[view] [source] [discussion] 2023-11-22 14:35:26
>>phpist+2J
What opposing letter, how many people are we talking about, and what was their role in the company?

All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.

replies(1): >>phpist+CT
◧◩◪◨⬒⬓
153. chiefa+2S[view] [source] [discussion] 2023-11-22 14:41:44
>>phpist+EJ
But it's not only the companies; it's the marginalized, so desperate to get a "seat at the table" that they don't recognize the table isn't getting bigger and rounder. Instead, it's still the same rectangular table, just getting longer and longer.

Participating in that is assimilation.

◧◩◪◨
154. rvba+4S[view] [source] [discussion] 2023-11-22 14:41:48
>>Doughn+Vz
Lots of non profits that collect money for "cause X" spend 95% of money for administration and 5% for cause X.
◧◩◪◨⬒⬓
155. phpist+CT[view] [source] [discussion] 2023-11-22 14:48:23
>>bad_us+AQ
>>What opposing letter, how many people are we talking about, and what was their role in the company?

Not-validated, unsigned letter [1]

>>All companies are monocultures

Yes and no. There has to be diversity of thought to ever get anything done. If everyone is just a sycophant agreeing with the boss, then you end up with very bad product choices, and even worse company direction.

Yes, there has to be some commonality, some semblance of shared vision or values, but I don't think that makes a "monoculture".

[1] https://wccftech.com/former-openai-employees-allege-deceit-a...

◧◩◪
156. erosen+GT[view] [source] [discussion] 2023-11-22 14:48:40
>>vaxman+fo
For profit subsidiaries can totally influence the nonprofit shell without penalty. Happens all the time. The nonprofit board must act in the interest of the exempt mission rather than just investor value or some other primary purpose. Otherwise it's cool.
replies(1): >>vaxman+tg6
◧◩◪◨⬒⬓⬔⧯
157. ameist+KU[view] [source] [discussion] 2023-11-22 14:53:15
>>suodua+UD
I see. I've never read his work before, thank you.

So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."

◧◩◪◨⬒
158. axus+cV[view] [source] [discussion] 2023-11-22 14:54:41
>>bad_us+BP
I'm imagining they see themselves in the position of Microsoft employees about to release Windows 95, or Apple employees about to release the iPhone... and someone wants to get rid of Bill Gates or Steve Jobs.
replies(1): >>rvnx+351
◧◩◪◨⬒⬓⬔⧯▣▦
159. golden+gV[view] [source] [discussion] 2023-11-22 14:54:49
>>suodua+ED
Very true. However, we live in a supercomputer dictated by E=mc^2=hf [2,3]. (10^50 Hz/Kg or 10^34 Hz/J)

Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research, ad infinitum; this is the real singularity. This is actually the best defense against other actors. Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how, in answering a question, the AI performs an action instead of providing an informational reply, which is only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.

[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html

[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit

[3] https://en.wikipedia.org/wiki/Planck_constant

Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)

I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
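As a sanity check on the bracketed figures above, the frequency equivalents follow directly from f = E/h and E = mc^2 (a quick sketch using the exact SI-defined constants; note the per-joule figure comes out nearer 10^33 than 10^34):

```python
# f = E/h, and E = m c^2, so frequency per kilogram is c^2 / h.
c = 299_792_458.0        # speed of light, m/s (exact)
h = 6.62607015e-34       # Planck constant, J*s (exact)

hz_per_joule = 1.0 / h   # f = E / h
hz_per_kg = c**2 / h     # f = m c^2 / h (Bremermann-style figure)

print(f"{hz_per_joule:.2e} Hz per joule")     # 1.51e+33
print(f"{hz_per_kg:.2e} Hz per kilogram")     # 1.36e+50
```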

◧◩◪
160. citygu+kV[view] [source] [discussion] 2023-11-22 14:55:08
>>roflc0+jD
I don’t understand how the fact they went from a nonprofit into a for-profit subsidiary of one of the most closed-off anticompetitive megacorps in tech is so readily glossed over. I get it, we all love money and Sam’s great at generating it, but anyone who works at OpenAI besides the board seems to be morally bankrupt.
replies(5): >>gdhkgd+P01 >>Zpalmt+N51 >>endtim+Tp1 >>rozap+oz1 >>cma+1N1
◧◩◪◨⬒⬓⬔
161. caddem+jW[view] [source] [discussion] 2023-11-22 14:58:40
>>bad_us+vM
The problem is more that they are trying to maximize profit after claiming to be a nonprofit. Profit can be a good driving force, but it is not perfect. We have nonprofits for a reason, and it is shameful to take advantage of that status if you are not functionally a nonprofit. There would be nothing wrong with OpenAI trying to maximize profits if they were a typical company.
◧◩◪◨⬒⬓⬔
162. saalwe+JW[view] [source] [discussion] 2023-11-22 15:00:07
>>bad_us+vM
Because non-profit?

There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.

◧◩◪◨⬒⬓⬔⧯▣▦
163. golden+dX[view] [source] [discussion] 2023-11-22 15:02:01
>>WJW+yr
LLMs are able to do complex logic within the world of words. It is a smaller matrix than our world, but fueled by the same chaotic symmetries of our universe. I would not underestimate logic, even when not given adequate data.
replies(1): >>WJW+le1
◧◩◪◨⬒⬓⬔⧯▣
164. dpkirc+EX[view] [source] [discussion] 2023-11-22 15:03:55
>>doktri+su
The two FAANG companies don't compete at a product level, however they do compete for talent, which is significant. Probably significant enough to cause conflicts of interest.
◧◩◪◨
165. gdhkgd+P01[view] [source] [discussion] 2023-11-22 15:17:08
>>citygu+kV
Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

Also, working for a subsidiary (which was likely going to be given much more self-governance than working directly at megacorp), doesn’t necessarily mean “evil”. That’s a very 1-dimensional way to think about things.

Self-disclosure: I work for a megacorp.

replies(5): >>yoyohe+I31 >>Beetle+s91 >>yterdy+jj1 >>slg+fy1 >>citygu+7T2
◧◩◪◨⬒
166. jakder+J11[view] [source] [discussion] 2023-11-22 15:21:23
>>FabHK+gH
The only major series with a brilliant, satisfying, and true to form ending and you want to resuscitate it back to life for some cheap curtain calls and modern social commentary, leaving Mike Judge to end it yet again and in such a way that manages to duplicate or exceed the effect of the first time but without doing the same thing? Screw it. Why not?
◧◩◪◨
167. dahart+i31[view] [source] [discussion] 2023-11-22 15:28:02
>>kcplat+RA
This seems extremely presumptuous. Have you ever been inside a company during a coup attempt? The employees’ future pay and livelihoods are at stake; why are you assuming they weren’t being asked to sacrifice themselves by not objecting to the coup? The level of agreement could be entirely due to the fact that the stakes are very large, completely unlike your choice of lunch locale. It could also be an outcome of nobody having asked their opinion before making a very big change. I’d expect to see almost everyone at a company agree with each other if the question was, “hey, should we close this profitable company and all go get other jobs, or should we keep working?”
replies(1): >>kcplat+wp1
◧◩◪◨⬒
168. yoyohe+I31[view] [source] [discussion] 2023-11-22 15:30:24
>>gdhkgd+P01
We can acknowledge that it's morally bankrupt, while also not blaming them. Hell, I'd probably do the same thing in their shoes. That doesn't make it right.
◧◩◪◨⬒
169. Ajedi3+d41[view] [source] [discussion] 2023-11-22 15:32:53
>>plasma+F9
The issue here is that the board of the non-profit that is supposedly in charge of OpenAI (and whose interests are presumably aligned with the mission statement of the company) seemingly just lost a power struggle with their for-profit subsidiary who is not supposed to be in charge of OpenAI (and whose interests, including the interests of their employees, are aligned with making as much money as possible). Regardless of whether the board's initial decision that started this power struggle was wise or not, don't you find the outcome a little worrisome?
◧◩◪◨
170. infamo+q41[view] [source] [discussion] 2023-11-22 15:33:38
>>pooya1+ot
The current board won't be at OpenAI much longer.
◧◩◪◨⬒⬓
171. rvnx+351[view] [source] [discussion] 2023-11-22 15:36:35
>>axus+cV
See, neither Bill Gates nor Steve Jobs are around these companies, and all is fine.

Apple and Microsoft even have the strongest financial results in their lifetime.

replies(2): >>roncha+Ea1 >>ghodit+wc1
◧◩◪◨⬒⬓⬔
172. jakder+i51[view] [source] [discussion] 2023-11-22 15:37:30
>>docmar+dQ
It's not a crusade against whiteness. Unless you're unhinged and believe a single phenotype that prevents skin cancer is somehow an obvious reflection of genetic inferiority and that those lacking it have a historical destiny to rule over the rest and are entitled to institutional privileges over them, it makes sense that companies with employees not representative of the overall population have hiring practices that are problematic, albeit not necessarily being as explicitly racist as you are.
replies(1): >>docmar+sh1
◧◩◪
173. im3w1l+j51[view] [source] [discussion] 2023-11-22 15:37:37
>>coldte+ho
IANAL, but given that OpenAI Inc is a 501(c)(3) public charity, wouldn't that mean that the mission statement has some actual legal power to it?
◧◩◪◨
174. Zpalmt+N51[view] [source] [discussion] 2023-11-22 15:39:06
>>citygu+kV
Why would they be morally bankrupt? Do the employees have to care if it's a non profit or a for profit?

And if they do prefer it as a for profit company, why would that make them morally bankrupt?

◧◩◪◨
175. marcos+v61[view] [source] [discussion] 2023-11-22 15:42:57
>>Turing+OG
All at once.
◧◩◪◨⬒⬓⬔⧯▣
176. Zpalmt+z81[view] [source] [discussion] 2023-11-22 15:52:36
>>golden+yq
What about security for your children?
replies(1): >>golden+Bf1
◧◩◪◨
177. Zpalmt+091[view] [source] [discussion] 2023-11-22 15:54:33
>>g-b-r+K6
Why? Did they have to sign a charter affirming their commitment to the mission when they were hired?
◧◩◪◨⬒
178. Beetle+s91[view] [source] [discussion] 2023-11-22 15:56:22
>>gdhkgd+P01
> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

And while also working for a for-profit company.

◧◩◪◨⬒⬓⬔
179. roncha+Ea1[view] [source] [discussion] 2023-11-22 16:01:53
>>rvnx+351
Gates and Jobs helped establish these companies as the powerhouses they are today with their leadership in the 90s and 00s.

It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?

◧◩◪◨⬒⬓⬔
180. ghodit+wc1[view] [source] [discussion] 2023-11-22 16:10:53
>>rvnx+351
Now go back in time and cut them before their companies took off.
◧◩◪◨⬒⬓⬔⧯▣▦▧
181. WJW+le1[view] [source] [discussion] 2023-11-22 16:18:01
>>golden+dX
You can make it sound as esoteric as you want, but in the end an AI will still be bound by the laws of physics. Being infinitely smart will not help with that.

I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.

replies(1): >>golden+we1
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
182. golden+we1[view] [source] [discussion] 2023-11-22 16:18:51
>>WJW+le1
Axioms are constraints as much as they might look like guidance. We live in a neuromorphic computer. Logic explores this, even with few axioms. With fewer axioms, it will be less constrained.
◧◩◪◨⬒
183. denton+Se1[view] [source] [discussion] 2023-11-22 16:20:26
>>docmar+BJ
> Younger folks probably don't look highly at boards in general, because they never get to interact with them.

Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.

replies(1): >>docmar+Hi1
◧◩◪◨⬒⬓⬔⧯▣▦
184. golden+Bf1[view] [source] [discussion] 2023-11-22 16:22:59
>>Zpalmt+z81
It is for the safety of everyone. The kids will die too if we don't get this right.
◧◩◪◨
185. Wesley+0h1[view] [source] [discussion] 2023-11-22 16:30:48
>>_heimd+kt
Maybe they're working for both, but when push comes to shove they felt like they had no choice? In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.

Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the mission at OpenAI?

replies(1): >>_heimd+jH1
◧◩◪◨⬒⬓⬔⧯
186. docmar+sh1[view] [source] [discussion] 2023-11-22 16:32:52
>>jakder+i51
Unfortunately you are wrong, and this kind of rhetoric has not only made calls for white genocide acceptable and unpunished, but has incited violence specifically against Caucasian people, as well as anyone who is perceived to adopt "white" thinking such as Asian students specifically, and even Black folks who see success in their life as a result of adopting longstanding European/Western principles in their lives.

Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.

DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.

Take the movement to redefine Math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since its origins are "too white". Oh the horror! This is false, as there are many prominent non-white mathematicians that existed prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.

Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.

◧◩◪◨⬒⬓⬔⧯▣▦
187. doktri+Nh1[view] [source] [discussion] 2023-11-22 16:34:09
>>Jumpin+VC
> I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service

Do you agree that the following company pairs are competitors?

    * FB : TikTok
    * TikTok : YT
    * YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix.

...

To be clear, this is an abuse of logic and hence somewhat tongue in cheek, but I also don't think any of the above comparisons are wholly unreasonable. At the end of the day, it's eyeballs all the way down, and everyone wants as many of them shabriri grapes as they can get.

◧◩◪◨⬒⬓
188. docmar+Hi1[view] [source] [discussion] 2023-11-22 16:38:03
>>denton+Se1
I don't think the age of the board members matters, but rather that younger generations have been taught to criticize boards of any & every company for their myriad decisions to sacrifice good things for profit, etc.

It's a common theme in the overall critique of late stage capitalism, is all I'm saying — and that it could be a factor in influencing OpenAI's employees' decisions to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.

◧◩◪◨⬒
189. yterdy+jj1[view] [source] [discussion] 2023-11-22 16:40:31
>>gdhkgd+P01
If some of the smartest people on the planet are willing to sell the rest of us out for Comfy Lifestyle Money (not even Influence State Politics Money), then we are well and truly Capital-F Fucked.
replies(1): >>deckar+aw1
◧◩◪
190. Cheeze+fl1[view] [source] [discussion] 2023-11-22 16:50:12
>>grafta+Yf
I wouldn't really give OpenAI credit for lasting 3 years. OpenAI lasted until the moment they had a successful commercial product. Principles are cheap when there are no actual consequences for sticking to them.
◧◩◪◨
191. ghaff+vp1[view] [source] [discussion] 2023-11-22 17:09:06
>>mstade+Lz
Signing petitions is also cheap. It doesn't mean that everyone signing has thought deeply and actually made a life-changing decision.
◧◩◪◨⬒
192. kcplat+wp1[view] [source] [discussion] 2023-11-22 17:09:09
>>dahart+i31
I have had a long career and have been through hostile mergers several times, and at no point have I ever seen large numbers of employees act outside of their self-interest for an executive. It just doesn’t happen. Even in my career, with executives who are my friends, I would not act outside my personal interests. When things are corporately uncertain and people worry about their working livelihoods, they just don’t tend to act that way. They tend to keep their heads down or jump ship independently.

The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.

replies(1): >>dahart+di2
◧◩◪◨
193. endtim+Tp1[view] [source] [discussion] 2023-11-22 17:11:34
>>citygu+kV
> anyone who works at OpenAI besides the board seems to be morally bankrupt.

People concerned about AI safety were probably not going to join in the first place...

◧◩◪◨⬒⬓⬔
194. svnt+3t1[view] [source] [discussion] 2023-11-22 17:26:08
>>doktri+hp
I’m not sure how the point stands. The iPhone was introduced during that tenure, then the App Store, then Jobs decided Google was also headed toward their own full mobile ecosystem, and released Schmidt. None of that was a conflict of interest at the beginning. Jobs initially didn’t even think Apple would have an app store.

Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.

You forgot to do Oracle and Tesla.

replies(1): >>doktri+fD1
◧◩◪◨⬒⬓
195. tstrim+Nt1[view] [source] [discussion] 2023-11-22 17:29:04
>>youcan+kQ
Humanity no. But it's not humanity on the OpenAI board. It's 9 individuals. Individuals have amazing capacity for learning and improvement.
◧◩◪◨
196. cellar+Av1[view] [source] [discussion] 2023-11-22 17:37:11
>>kcplat+RA
There are plenty of examples of workers unions voting with similar levels of agreement. Here are two from the last couple months:

> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.

https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...

> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.

https://variety.com/2023/biz/news/wga-ratify-contract-end-st...

◧◩◪◨⬒⬓
197. deckar+aw1[view] [source] [discussion] 2023-11-22 17:39:12
>>yterdy+jj1
We already know some of the smartest people are willing to sell us out. Because they work for FAANG ad tech, spending their days figuring out how to maximize the eyeballs they reach while sucking up all your privacy.

It's a post-"Don't be evil" world today.

replies(1): >>jacque+RT1
◧◩◪◨⬒
198. slg+fy1[view] [source] [discussion] 2023-11-22 17:47:21
>>gdhkgd+P01
> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

That is a part of the reason why organizations choose to set themselves up as a non-profit, to help codify those morals into the legal status of the organization to ensure that the ingrained selfishness that exists in all of us doesn’t overtake their mission. That is the heart of this whole controversy. If OpenAI was never a non-profit, there wouldn’t be any issue here because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for profit businesses and there would be no room for the board to fire or even really criticize Sam.

◧◩◪◨
199. rozap+oz1[view] [source] [discussion] 2023-11-22 17:52:13
>>citygu+kV
Easy to see how humans would join a non-profit for the vibes, and then, when they create one of the most compelling products of the last decade, worth billions of dollars, quickly change their thinking to "wait, I should get rewarded for this".
◧◩◪◨
200. deckar+Az1[view] [source] [discussion] 2023-11-22 17:53:52
>>nmfish+eq
> Google lasted a good 10 years

Not sure what event you're thinking of, but Google went public in well under 10 years, and they started their first ad program just barely more than a year after forming as a company in 1998.

replies(1): >>nmfish+B53
◧◩◪◨⬒⬓⬔⧯
201. doktri+fD1[view] [source] [discussion] 2023-11-22 18:07:36
>>svnt+3t1
> Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.

It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.

replies(1): >>svnt+PB2
◧◩◪◨⬒
202. _heimd+jH1[view] [source] [discussion] 2023-11-22 18:24:48
>>Wesley+0h1
> In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.

Its a gut check on morals/ethics for sure. I'm always pretty torn on the tipping point for empathising there in an industry like tech though, even more so for AI where all the money is today. Our industry is paid extremely well and anyone that wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would have easily been 800 jobs floating around for AI experts that chose to leave OpenAI because they preferred the for-profit approach.

At least how I see it, Sam coming back to OpenAI is OpenAI abandoning the original vision and leaning full into developing AGI for profit. Anyone that worked there for the original mission might as well leave now, they'll be throwing AI risk out the window almost entirely.

◧◩◪◨
203. cma+1N1[view] [source] [discussion] 2023-11-22 18:47:11
>>citygu+kV
Supposedly they had about 50% of employees leave in the year of the conversion to for-profit.
◧◩◪◨⬒⬓⬔
204. jacque+RT1[view] [source] [discussion] 2023-11-22 19:18:16
>>deckar+aw1
If half of the brainpower invested in advertising food went towards solving world hunger, we'd have too much food.
◧◩◪◨⬒⬓
205. dahart+di2[view] [source] [discussion] 2023-11-22 21:21:51
>>kcplat+wp1
> at no point have I ever seen large numbers of employees act outside of their self-interest for an executive.

This is still making the same assumption. Why are you assuming they are acting outside of self-interest?

replies(1): >>kcplat+qm2
◧◩◪◨⬒⬓⬔
206. kcplat+qm2[view] [source] [discussion] 2023-11-22 21:42:23
>>dahart+di2
If you are willing to leave a paycheck because of someone else getting slighted, to me, that is acting against your own self-interest. Assuming of course you are willing to actually leave. If it was a bluff, that still works against your self-interest by factioning against the new leadership and inviting retaliation for your bluff.
replies(1): >>dahart+Rw2
◧◩◪◨⬒⬓⬔⧯
207. dahart+Rw2[view] [source] [discussion] 2023-11-22 22:40:04
>>kcplat+qm2
Why do you assume they were willing to leave a paycheck because of someone else getting slighted? If that were the case, then it is unlikely everyone would be in agreement. Which indicates you might be making incorrect assumptions, no? And, again, why assume they were threatening to leave a paycheck at all? That’s a bad assumption; MS was offering a paycheck. We already know their salaries weren’t on the line, but all future stock earnings and bonuses very well might be. There could be other reasons too, I don’t see how you can conclude this was either a bluff or not self-interest without making potentially bad assumptions.
replies(1): >>kcplat+ZW2
◧◩◪◨⬒⬓⬔⧯▣
208. svnt+PB2[view] [source] [discussion] 2023-11-22 23:09:06
>>doktri+fD1
Your concrete example is Netflix’s CEO saying he doesn’t want to do advertising because he missed the boat and was on Facebook’s board and as a result didn’t believe he had the data to compete as an advertising platform.

Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.

He is explicitly saying they don’t compete. And they don’t.

◧◩◪◨⬒
209. citygu+7T2[view] [source] [discussion] 2023-11-23 00:43:26
>>gdhkgd+P01
I guess my qualm is that this is the cost of doing business, yet people are outraged at the board because they’re not going to make truckloads of money in equity grants. That’s the morally bankrupt part in my opinion.

If you throw your hands up and say, “well kudos to them, theyre actually fulfilling their goal of being a non profit. I’m going to find a new job”. That’s fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that’s on you.

◧◩◪◨⬒⬓⬔⧯▣
210. kcplat+ZW2[view] [source] [discussion] 2023-11-23 01:08:23
>>dahart+Rw2
They threatened to quit. You don’t actually believe that a company would be willing to still provide them a paycheck if they left the company do you?

At this point I suspect you are being deliberately obtuse. Have a good day.

replies(1): >>dahart+B03
◧◩◪◨⬒⬓⬔⧯▣▦
211. dahart+B03[view] [source] [discussion] 2023-11-23 01:33:44
>>kcplat+ZW2
They threatened to quit by moving to Microsoft, didn’t you read the letter? MS assured everyone jobs if they wanted to move. Isn’t making incorrect assumptions and sticking to them in the face of contrary evidence and not answering direct questions the very definition of obtuse?
◧◩◪◨⬒
212. nmfish+B53[view] [source] [discussion] 2023-11-23 02:12:28
>>deckar+Az1
I have no objection to companies[0] making money. It's discarding the philosophical foundations of the company to prioritize quarterly earnings that is offensive.

I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.

[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.

◧◩◪◨
213. vaxman+tg6[view] [source] [discussion] 2023-11-24 04:06:50
>>erosen+GT
yeah, all they have to do is pray for humanity to not let the magic AI out of the bottle and they’re free to have a $91b valuation and flaunt it in the media for days.. https://youtu.be/2HJxya0CWco
◧◩◪◨⬒⬓⬔
214. kortil+qga[view] [source] [discussion] 2023-11-25 19:32:44
>>karmas+Ab
Based on the behavior of lots of smart people I worked with at Google during Google’s good times, critical thinking is definitely in the minority. Brilliant people from Stanford, Berkeley, MIT, etc. would all be leading experts in this or that, but would lack critical thinking because they were never forced to develop that skill.

Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.

◧◩◪◨⬒
215. kortil+Yga[view] [source] [discussion] 2023-11-25 19:36:19
>>rewmie+xB
You’re projecting a lot. I made a comment about one false premise, nothing more, nothing less.