zlacker

[parent] [thread] 90 comments
1. jafitc+(OP)[view] [source] 2023-11-22 14:31:43
OpenAI's Future and Viability

- OpenAI has damaged their brand and lost trust, but may still become a hugely successful company if they build great products

- OpenAI looks stronger now with a more professional board, but has fundamentally transformed into a for-profit focused on commercializing LLMs

- OpenAI still retains impressive talent and technology assets and could pivot into a leading AI provider if managed well

---

Sam Altman's Leadership

- Sam emerged as an irreplaceable CEO with overwhelming employee loyalty, but may have to accept more oversight

- Sam has exceptional leadership abilities but can be manipulative; he will likely retain control but have to keep stakeholders aligned

---

Board Issues

- The board acted incompetently and destructively without clear reasons or communication

- The new board seems more reasonable but may struggle to govern given Sam's power

- There are still opposing factions on ideology and commercialization that will continue battling

---

Employee Motivations

- Employees followed the money trail and Sam to preserve their equity and careers

- Peer pressure and groupthink likely also swayed employees more than principles

- Mission-driven employees may still leave for opportunities at places like Anthropic

---

Safety vs Commercialization

- The safety faction lost this battle but still has influential leaders wanting to constrain the technology

- Rapid commercialization beat out calls for restraint but may hit snags with model issues

---

Microsoft Partnership

- Microsoft strengthened its power despite not appearing involved in the drama

- OpenAI is now clearly beholden to Microsoft's interests rather than an independent entity

replies(14): >>qualif+W1 >>nuruma+Y1 >>miohta+g2 >>seydor+w6 >>pauldd+m7 >>orsent+F7 >>sam0x1+i8 >>ensoco+z8 >>jxi+H8 >>amalco+L8 >>Ration+8f >>neonbj+uf >>window+Qh >>scooke+WT2
2. qualif+W1[view] [source] 2023-11-22 14:39:59
>>jafitc+(OP)
No structure or organization is stronger when its leader has emerged as "irreplaceable".
replies(5): >>rmbyrr+S2 >>osigur+m3 >>dimitr+25 >>rvnx+qc >>Aunche+9e
3. nuruma+Y1[view] [source] 2023-11-22 14:40:06
>>jafitc+(OP)
GPT-generated summary?
replies(2): >>Mistle+1a >>foursi+Mg
4. miohta+g2[view] [source] 2023-11-22 14:41:20
>>jafitc+(OP)
> Employees followed the money trail and Sam to preserve their equity and careers

Would you not, when the AI safety wokes decide to torch the rewards of years of your hard grinding? I feel there is less groupthink here than it seems: everyone saw the board for what it is, and its inability to lead or even act rationally. OpenAI did not just become a sinking ship; it was unnecessarily sunk by someone with no skin in the game, while your personal wealth and success were tied to the ship.

replies(2): >>brooks+h6 >>acjohn+Fu
◧◩
5. rmbyrr+S2[view] [source] [discussion] 2023-11-22 14:43:41
>>qualif+W1
In this case, I don't see it as a flaw, but as evidence of Sam's ability to lead a highly cohesive group and keep it highly motivated and aligned.

I don't personally like him, but I must admit he displayed a lot more leadership skill than I'd recognized before.

It's inherently hard to replace someone like that in any organization.

Take Apple, after losing Jobs. It's not that Apple was a "weak" organization, but really Jobs that was extraordinary and indeed irreplaceable.

No, I'm not comparing Jobs and Sam. Just illustrating my point.

replies(3): >>prh8+q4 >>pk-pro+ob >>scythe+In
◧◩
6. osigur+m3[view] [source] [discussion] 2023-11-22 14:46:22
>>qualif+W1
Seriously, even in a small group of a few hundred people?
replies(1): >>catapa+F4
◧◩◪
7. prh8+q4[view] [source] [discussion] 2023-11-22 14:50:45
>>rmbyrr+S2
What's the difference between leadership skills and cult of following?
replies(4): >>spurgu+6a >>thedal+qb >>TheOth+Jf >>rmbyrr+Pq
◧◩◪
8. catapa+F4[view] [source] [discussion] 2023-11-22 14:51:40
>>osigur+m3
I dunno, seems like a pretty self-evident theory? If your leader is irreplaceable, regardless of group size, that's a single point of failure. I can't see how a single point of failure could ever make something "stronger". I can see arguments for necessity, or efficiency, given contrivances and extreme contexts. But "stronger" doesn't seem like the right assessment for whatever necessitates a single point of failure.
replies(3): >>vipshe+9b >>hughw+li >>osigur+Tv2
◧◩
9. dimitr+25[view] [source] [discussion] 2023-11-22 14:53:11
>>qualif+W1
This is false, and I see the corollary as a project having a BDFL, especially if the leader is effective. Sam is unmistakably effective.
replies(1): >>acchow+Z6
◧◩
10. brooks+h6[view] [source] [discussion] 2023-11-22 14:57:08
>>miohta+g2
Yeah, this is like using “groupthink” to describe people fleeing a burning building. There’s maybe some measure of literal truth, but it’s an odd way to frame it.
11. seydor+w6[view] [source] 2023-11-22 14:58:17
>>jafitc+(OP)
who would want to work for an irreplaceable CEO long term
replies(1): >>rvnx+De
◧◩◪
12. acchow+Z6[view] [source] [discussion] 2023-11-22 15:00:00
>>dimitr+25
Have you or anyone close to you ever had to take multiple years of leave from work from a car accident or health condition?
replies(2): >>slingn+E9 >>dimitr+7q2
13. pauldd+m7[view] [source] 2023-11-22 15:01:39
>>jafitc+(OP)
> Peer pressure and groupthink likely also swayed employees more than principles

What makes this "likely"?

Or is this just pure conjecture?

replies(1): >>mrfox3+G8
14. orsent+F7[view] [source] 2023-11-22 15:02:36
>>jafitc+(OP)
> - Mission-driven employees may still leave for opportunities at places like Anthropic

Which might have an oversight from AMZN instead of MSFT ?

15. sam0x1+i8[view] [source] 2023-11-22 15:05:22
>>jafitc+(OP)
> Peer pressure and groupthink likely also swayed employees more than principles

Chilling to hear the corporate oligarchs completely disregard the feelings of employees and deny most of the legitimacy behind these feelings in such a short and sweeping statement

replies(1): >>DSingu+Nf
16. ensoco+z8[view] [source] 2023-11-22 15:06:17
>>jafitc+(OP)
Good points. Anyway, I guess nobody will remember the drama in a few months, so I think the damage done is very manageable for OAI.
◧◩
17. mrfox3+G8[view] [source] [discussion] 2023-11-22 15:06:47
>>pauldd+m7
What would you do if 999 employees openly signed a letter and you were the remaining holdout?
replies(1): >>pauldd+z9
18. jxi+H8[view] [source] 2023-11-22 15:06:50
>>jafitc+(OP)
Was this really motivated by AI safety or was it just Helen Toner’s personal vendetta against Sam?

It doesn’t feel like anything was accomplished besides wasting 700+ people’s time, and the only thing that has changed now is Helen Toner and Tasha McCauley are off the board.

replies(3): >>hn_thr+cc >>cbeach+Fd >>jkapla+kj
19. amalco+L8[view] [source] 2023-11-22 15:06:55
>>jafitc+(OP)
>- Microsoft strengthened its power despite not appearing involved in the drama

Depending on what you mean by "the drama", Microsoft was very clearly involved. They don't appear to have been in the loop prior to Altman's firing, but they literally offered jobs to everyone who left in solidarity with Sam. Do we really think things like that were not intended to change people's minds?

replies(3): >>Firmwa+L9 >>malfis+fc >>gcanyo+yd
◧◩◪
20. pauldd+z9[view] [source] [discussion] 2023-11-22 15:10:09
>>mrfox3+G8
Is your argument that the 1 employee operated on peer pressure, or the other 999?

Could it possibly be that the majority of OpenAI's workforce sincerely believed a midnight firing of the CEO was counterproductive to their organization's goals?

replies(2): >>dymk+gc >>mrfox3+Uq
◧◩◪◨
21. slingn+E9[view] [source] [discussion] 2023-11-22 15:10:35
>>acchow+Z6
Nope, I've never even __heard__ of someone having to take multiple years of leave from work for any reason. Seems like a fantastically rare event.
replies(2): >>thingi+ae >>yeck+9i
◧◩
22. Firmwa+L9[view] [source] [discussion] 2023-11-22 15:10:56
>>amalco+L8
>but they literally offered jobs to everyone who left in solidarity with Sam

Offering people jobs is neither illegal nor immoral, no? And wasn't HN also firmly on the side of abolishing non-competes and non-solicitation clauses from employment contracts, to facilitate freedom of employment movement and increase industry wages in the process?

Well then, there's your freedom of employment in action. Why be unhappy about it? I don't get it.

replies(2): >>spanka+la >>notaha+5k
◧◩
23. Mistle+1a[view] [source] [discussion] 2023-11-22 15:12:05
>>nuruma+Y1
That was my first thought as well. And now it is the top comment on this post. Isn’t this brave new world OpenAI made wonderful?
replies(1): >>nickpp+3e
◧◩◪◨
24. spurgu+6a[view] [source] [discussion] 2023-11-22 15:12:22
>>prh8+q4
I think an awesome leader would naturally create some kind of cult following, while the opposite isn't true.
replies(1): >>Popeye+yb
◧◩◪
25. spanka+la[view] [source] [discussion] 2023-11-22 15:13:51
>>Firmwa+L9
> Offering people jobs is neither illegal nor immoral

The comment you responded to made neither of those claims, just that they were "involved".

◧◩◪◨
26. vipshe+9b[view] [source] [discussion] 2023-11-22 15:17:15
>>catapa+F4
"Stronger" is ambiguous. If you interpret it as "resilience" then I agree having a single point of failure is usually more brittle. But if you interpret it as "focused", then having a single charismatic leader can be superior.

Concretely, it sounds like this incident brought a lot of internal conflicts to the surface, and they got more-or-less resolved in some way. I can imagine this allows OpenAI to execute with greater focus and velocity going forward, as the internal conflict that was previously causing drag has been resolved.

Whether or not that's "better" or "stronger" is up to individual interpretation.

◧◩◪
27. pk-pro+ob[view] [source] [discussion] 2023-11-22 15:18:33
>>rmbyrr+S2
Can't you imagine a group of people motivated to conduct AI research? I don't understand... All nerds are highly motivated in their areas of passion, and here we have AI research. Why do they need leadership instead of simply having an abundance of resources for the passionate work they do?
replies(3): >>DSingu+ye >>gcanyo+Eg >>jjk166+ej
◧◩◪◨
28. thedal+qb[view] [source] [discussion] 2023-11-22 15:18:41
>>prh8+q4
Results
◧◩◪◨⬒
29. Popeye+yb[view] [source] [discussion] 2023-11-22 15:19:30
>>spurgu+6a
Just like former President Trump?
replies(1): >>marcos+rd
◧◩
30. hn_thr+cc[view] [source] [discussion] 2023-11-22 15:22:13
>>jxi+H8
As someone who was very critical of how the board acted, I strongly disagree. I felt like this Washington Post article gave a very good, balanced overview. I think it sounds like there were substantive issues that were brewing for a long time, though no doubt personal clashes had a huge impact on how it all went down:

https://www.washingtonpost.com/technology/2023/11/22/sam-alt...

◧◩
31. malfis+fc[view] [source] [discussion] 2023-11-22 15:22:27
>>amalco+L8
The GP looks to me like an AI summary. Which would fit with the hallucination that microsoft wasn't involved.
replies(1): >>chanks+3j
◧◩◪◨
32. dymk+gc[view] [source] [discussion] 2023-11-22 15:22:28
>>pauldd+z9
It's almost certain that not all employees acted the same way for the exact same reasons. And I don't see anyone making an argument about what the exact numbers are, nor does it really matter. Just that some portion of employees were swayed by pressure once the letter reached some critical signing mass.
replies(1): >>pauldd+5r
◧◩
33. rvnx+qc[view] [source] [discussion] 2023-11-22 15:22:57
>>qualif+W1
And correlation does not imply causality.

Example: Put a loser as CEO of a rocket ship, and there is a huge chance that the company will still be successful.

Put a loser as CEO of a sinking ship, and there is a huge chance that the company will fail.

The exceptional CEOs are those who turn failures into successes.

The fact this drama has emerged is the symptom of a failure.

In a company with a great CEO this shouldn’t be happening.

◧◩◪◨⬒⬓
34. marcos+rd[view] [source] [discussion] 2023-11-22 15:27:23
>>Popeye+yb
There are two possible ways to read "the opposite" from the GP.

"A cult follower does not make an exceptional leader" is the one you are looking for.

replies(1): >>0perat+Wq
◧◩
35. gcanyo+yd[view] [source] [discussion] 2023-11-22 15:27:57
>>amalco+L8
I’d go further than just saying “they were involved”: by offering jobs to everyone who wanted to come with Altman, they were effectively offering to acquire OpenAI, which is worth ~$100B, for (checks notes) zero dollars.
replies(3): >>breadw+Df >>gsuuon+tg >>thepti+oh
◧◩
36. cbeach+Fd[view] [source] [discussion] 2023-11-22 15:28:10
>>jxi+H8
Curious how a relatively unknown academic with links to China [1] attained a board seat on America's hottest and most valuable AI company.

Particularly as she openly expressed that "destroying" that company might be the best outcome. [2]

> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission.

[1] https://www.chinafile.com/contributors/helen-toner [2] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

replies(2): >>Zpalmt+Ae >>hn_thr+zl
◧◩◪
37. nickpp+3e[view] [source] [discussion] 2023-11-22 15:30:36
>>Mistle+1a
If it’s a good comment, does it really matter if a human or an AI wrote it?
replies(1): >>makewo+bh
◧◩
38. Aunche+9e[view] [source] [discussion] 2023-11-22 15:31:05
>>qualif+W1
I don't think Sam is necessarily irreplaceable. It's just that Helen Toner and co were so detached from the rest of the organization they might as well have been on Mars, as demonstrated by their interim CEO pick instantly turning against them.
◧◩◪◨⬒
39. thingi+ae[view] [source] [discussion] 2023-11-22 15:31:13
>>slingn+E9
Not sure if that's intended as irony, but of course, if somebody is taking multiple years off work, you would be less likely to hear about it, because by definition they're not going to join the company you work for.

I don't think long-term unemployment among people with a disability or other long-term condition is "fantastically rare", sadly. This is not the frequency by length of unemployment, but:

https://www.statista.com/statistics/1219257/us-employment-ra...

◧◩◪◨
40. DSingu+ye[view] [source] [discussion] 2023-11-22 15:33:07
>>pk-pro+ob
As far as it goes for me, the only endorsements that matter are those of the core engineering and research teams of OpenAI.

All these opinions of outsiders don’t matter. It’s obvious that most people don’t know Sam personally or professionally and are going off of a combination of: 1. PR pieces being pushed by unknown entities, 2. positive endorsements from well-known people who likely know him.

Both those sources are suspect. We don’t know the motivation behind their endorsements, and for the PR pieces we know the author but not the commissioner.

Would we feel as positive about Altman if it turned out that half the people and PR pieces endorsing him did so because government officials were pushing for him? Or if the celebrities in tech are endorsing him because they are financially incentivized?

The only endorsements that matter are those of OpenAI employees (ideally those who are not just in his camp because he made them rich).

◧◩◪
41. Zpalmt+Ae[view] [source] [discussion] 2023-11-22 15:33:15
>>cbeach+Fd
Wow, very surprised this is the first I'm hearing of this, seems very suspect
◧◩
42. rvnx+De[view] [source] [discussion] 2023-11-22 15:33:26
>>seydor+w6
Desperate people who have no choice but to wait for someone to remove their golden handcuffs.
43. Ration+8f[view] [source] 2023-11-22 15:35:27
>>jafitc+(OP)
The one piece of this that I question is the employee motivations.

First, they had offers to walk to both Microsoft and Salesforce and be made whole. They didn't have to stay and fight to preserve their money and careers.

But more importantly, put yourself in the shoes of an employee and read https://web.archive.org/web/20231120233119/https://www.busin... for what they apparently heard.

I don't know about anyone else. But if I was being asked to choose sides in a he-said, she-said dispute, the board was publicly hinting at really bad stuff, and THAT was the explanation, I know what side I'd take.

Don't forget, when the news broke, people's assumption from the wording of the board statement was that Sam was doing shady stuff, and there was potential jail time involved. And they justify smearing Sam like that because two board members thought they heard different things from Sam, and he gave what looked like the same project to two people???

There were far better stories that they could have told. Heck, the Internet made up many far better narratives than the board did. But that was the board's ACTUAL story.

Put me on the side of, "I'd have signed that letter, and money would have had nothing to do with it."

replies(1): >>TheGRS+Rj
44. neonbj+uf[view] [source] 2023-11-22 15:36:58
>>jafitc+(OP)
As an employee of OpenAI: fuck you and your condescending conclusions about my peers and my motivations.
replies(3): >>jprete+Gh >>alexth+1i >>iamfli+Gl
◧◩◪
45. breadw+Df[view] [source] [discussion] 2023-11-22 15:37:38
>>gcanyo+yd
You mean zero additional dollars. They already gave (checks notes) $13 billion and own half of the company.
replies(1): >>rvnx+Xf
◧◩◪◨
46. TheOth+Jf[view] [source] [discussion] 2023-11-22 15:37:56
>>prh8+q4
Leadership Gets Shit Done. A cult following wastes everyone's time on ineffectual grandstanding and ego fluffing while everything around them dissolves into incompetence and hostility.

They're very orthogonal things.

replies(1): >>rvnx+qh
◧◩
47. DSingu+Nf[view] [source] [discussion] 2023-11-22 15:38:11
>>sam0x1+i8
Honestly, he has a point, but the bigger point to be made is financial incentives. In this case it matters because of the expressed mission statement of OpenAI.

Let’s say there was some non-profit claiming to advance the interests of the world. Let’s say it paid very well to hire the most productive people, but they were a bunch of psychopaths who by definition couldn’t care less about anybody but themselves. Should you care about their opinions? If it was a for-profit company, you could argue that their voices matter. For a non-profit, however, a person’s opinion should only matter as far as it is aligned with the non-profit's mission.

◧◩◪◨
48. rvnx+Xf[view] [source] [discussion] 2023-11-22 15:38:39
>>breadw+Df
+ according to the rumors on Bloomberg.com / CNBC:

The investment is refundable and has high priority: Microsoft has priority to receive 75% of the profit generated until the 10B USD has been paid back.

+ (checks notes) in addition (!), OpenAI has to spend the money back on Microsoft Cloud Services (where Microsoft takes a cut as well).

◧◩◪
49. gsuuon+tg[view] [source] [discussion] 2023-11-22 15:41:10
>>gcanyo+yd
How has the valuation of OpenAI increased by $20B since this weekend? I feel like every time I see that number it goes up by $10B.
replies(2): >>tacooo+Zh >>sebzim+5i
◧◩◪◨
50. gcanyo+Eg[view] [source] [discussion] 2023-11-22 15:41:45
>>pk-pro+ob
Someone has to set direction. The more people that are involved in that decision process, the slower it will go.

Having no leadership at all guarantees failure.

◧◩
51. foursi+Mg[view] [source] [discussion] 2023-11-22 15:42:51
>>nuruma+Y1
The content strikes me as being more an editorial on what happened vs simply a summary of events.
◧◩◪◨
52. makewo+bh[view] [source] [discussion] 2023-11-22 15:44:30
>>nickpp+3e
Yes.
replies(1): >>nickpp+gi
◧◩◪
53. thepti+oh[view] [source] [discussion] 2023-11-22 15:45:22
>>gcanyo+yd
If the existing equity packages are worth more than what MSFT pays AI researchers (they are, by a lot), then it’s not acquiring OAI for $0. Plausibly it could cost in the $Bs to buy out every single equity holder, at an $80B+ valuation.

Still a good deal, but your accounting is off.

◧◩◪◨⬒
54. rvnx+qh[view] [source] [discussion] 2023-11-22 15:45:31
>>TheOth+Jf
I can also imagine the morale of the people who are actually implementing things, getting tired of all this politicking over who gets to claim credit for their work.
◧◩
55. jprete+Gh[view] [source] [discussion] 2023-11-22 15:46:45
>>neonbj+uf
I’m curious about your perceptions of the (median) motivations of OpenAI employees - although of course I understand if you don’t feel free to say anything.
56. window+Qh[view] [source] 2023-11-22 15:47:26
>>jafitc+(OP)
This comment bugs me because it reads like a summary of an article, but it's just your opinions without any explanations to justify them.
◧◩◪◨
57. tacooo+Zh[view] [source] [discussion] 2023-11-22 15:47:52
>>gsuuon+tg
You're off by a bit: the announcement of Sam returning as CEO actually increased OpenAI's valuation to $110B last night.
◧◩
58. alexth+1i[view] [source] [discussion] 2023-11-22 15:48:12
>>neonbj+uf
Users here often get the narrative and motivations deeply wrong, I wouldn’t take it too personally (Speaking as a peer)
◧◩◪◨
59. sebzim+5i[view] [source] [discussion] 2023-11-22 15:48:24
>>gsuuon+tg
$110B? Where are you getting this valuation of $120B?
◧◩◪◨⬒
60. yeck+9i[view] [source] [discussion] 2023-11-22 15:48:37
>>slingn+E9
In my immediate family I have 3 people that have taken multi-year periods away from work for health reasons. Two are mental health related and the other severe arthritis. 2 of those 3 will probably never work again for the rest of their lives.

I've worked with a contractor who went into a coma during covid. Nearly half a year in a coma, then rehab for many more months. The guy is working now, but not back in shape.

I don't know the stats, but I'd be surprised if long medical leaves are as rare as you think.

replies(1): >>filled+yD
◧◩◪◨⬒
61. nickpp+gi[view] [source] [discussion] 2023-11-22 15:49:19
>>makewo+bh
Please expand on that.
replies(3): >>iamfli+3m >>Mistle+4R >>makewo+164
◧◩◪◨
62. hughw+li[view] [source] [discussion] 2023-11-22 15:49:50
>>catapa+F4
I guess though, a lot of organizations never develop a cohesive leader at all, and the orgs fall apart. They never had an irreplaceable leader though!
◧◩◪
63. chanks+3j[view] [source] [discussion] 2023-11-22 15:53:26
>>malfis+fc
That's a good callout. I was reading over it, confused about who this person was and why they were summarizing, but yeah, they might've just told ChatGPT to summarize the events of what happened.
◧◩◪◨
64. jjk166+ej[view] [source] [discussion] 2023-11-22 15:54:14
>>pk-pro+ob
It's not hard to motivate them to do the fun parts of the job, the challenge is in convincing some of those highly motivated and passionate nerds to not work on the fun thing they are passionate about and instead do the boring and unsexy work that is nevertheless critical to overall success; to get people with strong personal opinions about how a solution should look to accept a different plan just so that everyone is on the same page, to ensure that people actually have access to the resources they need to succeed without going so overboard that the endeavor lacks the reserves to make it to the finish line, and to champion the work of these nerds to the non-nerds who are nevertheless important stakeholders.
◧◩
65. jkapla+kj[view] [source] [discussion] 2023-11-22 15:54:36
>>jxi+H8
> was it just Helen Toner’s personal vendetta against Sam

I'm not defending the board's actions, but if anything, it sounds like it may have been the reverse? [1]

> In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company... “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight." Senior OpenAI leaders, including Mr. Sutskever... later discussed whether Ms. Toner should be removed

[1] https://www.nytimes.com/2023/11/21/technology/openai-altman-...

replies(1): >>jxi+8H
◧◩
66. TheGRS+Rj[view] [source] [discussion] 2023-11-22 15:56:41
>>Ration+8f
I was thinking the same. The letter symbolized a deep distrust with leadership over the mission and direction of the company. I’m sure financial motivations were involved, but the type of person working at this company can probably get a good paycheck at a lot of places. I think many work at OpenAI for some combination of opportunity, prestige, and altruism, and the weekend probably put all 3 into question.
◧◩◪
67. notaha+5k[view] [source] [discussion] 2023-11-22 15:57:40
>>Firmwa+L9
I'm pretty sure there's a middle ground between "recruiters for Microsoft should be banned from approaching other companies' staff to fill roles" and "Microsoft should be able to dictate decisions made by other companies' boards by publicly announcing that, unless they change track, it will attempt to hire every single one of their employees into newly created roles".

Funnily enough, a bit like there's a middle ground between "Microsoft should not be allowed to create browsers or have license agreements" and "Microsoft should be allowed to dictate bundling decisions made by hardware vendors to control access to the Internet".

It's not freedom of employment when, funnily enough, those jobs aren't actually available to any AI researchers not working for an organisation Microsoft is trying to control.

◧◩◪
68. hn_thr+zl[view] [source] [discussion] 2023-11-22 16:04:38
>>cbeach+Fd
Oh lord, spare me with the "links to China" idiocy. I once ate a fortune cookie, does that mean I have "links to China" too?

Toner got her board seat because she was basically Holden Karnofsky's designated replacement:

> Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

> Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source). Given their connection via Open Philanthropy and the fact that Holden’s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat.

https://loeber.substack.com/p/a-timeline-of-the-openai-board

replies(1): >>cbeach+z53
◧◩
69. iamfli+Gl[view] [source] [discussion] 2023-11-22 16:05:22
>>neonbj+uf
"condescending conclusions" - ask anyone outside of tech how they feel when we talk to them...
◧◩◪◨⬒⬓
70. iamfli+3m[view] [source] [discussion] 2023-11-22 16:06:36
>>nickpp+gi
This is the most cogent argument against AI I've seen so far.

https://youtu.be/iGJcF4bLKd4?si=Q_JGEZnV-tpFa1Tb

replies(1): >>nickpp+WN
◧◩◪
71. scythe+In[view] [source] [discussion] 2023-11-22 16:14:11
>>rmbyrr+S2
Jobs was really unusual in that he was not only a good leader, but also an ideologue with the right obsession at the right time. (Some people like the word "visionary".) That obsession being "user experience". Today it's a buzzword, but in 2001 it was hardly even a term.

The leadership moment that first comes to mind when I think of Steve Jobs isn't some clever hire or business deal, it's "make it smaller".

There have been a very few people like that. Walt Disney comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk is maybe the ideologue without the leadership.)

1: https://www.npr.org/sections/money/2012/01/20/145360447/the-...

◧◩◪◨
72. rmbyrr+Pq[view] [source] [discussion] 2023-11-22 16:27:17
>>prh8+q4
Have you ever seen a useful product produced by a cult?
◧◩◪◨
73. mrfox3+Uq[view] [source] [discussion] 2023-11-22 16:27:53
>>pauldd+z9
Doing the math, it is extremely unlikely for a lot of coin flips to skew from the weight of the coin.

To that end, observing unanimous behavior may imply some bias.

Here, it could be people fearing being a part of the minority. The minority are trivially identifiable, since the majority signed their names on a document.

I agree with your stance that a majority of the workforce disagreed with the way things were handled, but that group is likely a subset of those who signed their names on the document, for the reasons stated above.
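The coin-flip intuition can be made concrete with a quick back-of-the-envelope calculation. A minimal sketch, assuming purely illustrative head-counts (say 700 signatures out of 770 employees, close to the figures floated in this thread) and a null model where each employee decides independently with a fair coin:

```python
# Null model: each of n employees signs independently with probability 0.5.
# How likely is an outcome at least as lopsided as k signatures?
# The head-counts here are illustrative, not the exact OpenAI figures.
from math import comb

n, k = 770, 700
lopsided = sum(comb(n, i) for i in range(k, n + 1))  # tail of the binomial
p = lopsided / 2 ** n
print(p)  # vanishingly small: near-unanimity doesn't happen by fair-coin chance
```

Of course, a fair coin is a straw-man null; the real question is whether the skew came from shared sincere belief or from the visibility of holdouts, which this arithmetic alone can't distinguish.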

replies(1): >>pauldd+as
◧◩◪◨⬒⬓⬔
74. 0perat+Wq[view] [source] [discussion] 2023-11-22 16:28:08
>>marcos+rd
While cult followers do not make exceptional leaders, cult leaders are almost by definition exceptional leaders, given that they're able to lead the un-indoctrinated into believing an ideology that may not hold up under critical scrutiny.

There is no guarantee or natural law that an exceptional leader's ideology will be exceptional. Exceptionality is not transitive.

◧◩◪◨⬒
75. pauldd+5r[view] [source] [discussion] 2023-11-22 16:28:58
>>dymk+gc
> some portion

The logic being that if any opinion has above X% support, people are choosing it based on peer pressure.

replies(1): >>mrfox3+lr
◧◩◪◨⬒⬓
76. mrfox3+lr[view] [source] [discussion] 2023-11-22 16:31:01
>>pauldd+5r
The key is that the support is not anonymous.
◧◩◪◨⬒
77. pauldd+as[view] [source] [discussion] 2023-11-22 16:34:16
>>mrfox3+Uq
> it is extremely unlikely for a lot of coin flips to skew from the weight of the coin

So clearly this wasn't a 50/50 coin flip.

The question at hand is whether the skew against the board was sincere or insincere.

Personally, I assume that people are acting in good faith, unless I have evidence to the contrary.

replies(1): >>mrfox3+gO1
◧◩
78. acjohn+Fu[view] [source] [discussion] 2023-11-22 16:45:01
>>miohta+g2
How do you know the "wokes" aren't the ones who were grinding for years?

I suspect OpenAI has an old guard that is disproportionately ideological about AI, and a much larger group of people who joined a rocket ship led by the guy who used to run YC.

◧◩◪◨⬒⬓
79. filled+yD[view] [source] [discussion] 2023-11-22 17:26:35
>>yeck+9i
Yeah, there are thousands of hospitals across the US and they don't run 24/7 shifts just to treat the flu or sprained ankles. Disabling events happen a lot.

(A seriously underrated statistic IMO is how many women leave the workforce due to pregnancy-related disability. I know quite a few who haven't returned to full-time work for years after giving birth because they're still dealing with cardiovascular and/or neurological issues. If you aren't privy to their medical history it would be very easy to assume that they just decided to be stay-at-home mums.)

◧◩◪
80. jxi+8H[view] [source] [discussion] 2023-11-22 17:41:40
>>jkapla+kj
Right, so getting Sam fired was retaliation for that.
◧◩◪◨⬒⬓⬔
81. nickpp+WN[view] [source] [discussion] 2023-11-22 18:09:45
>>iamfli+3m
I am sorry, I greatly respect and admire Nick Cave, but that letter sounded to me like the lament of a scribe decrying the invention of the printing press.

He's not wrong, something is lost and it has to do with what we call our "humanity", but the benefits greatly outweigh that loss.

replies(1): >>makewo+q64
◧◩◪◨⬒⬓
82. Mistle+4R[view] [source] [discussion] 2023-11-22 18:21:52
>>nickpp+gi
I think this summarizes it pretty well. Even if you don't mind the garbage, the future AI will feed on this garbage, creating AI and human brain gray goo.

https://ploum.net/2022-12-05-drowning-in-ai-generated-garbag...

https://en.wikipedia.org/wiki/Gray_goo

replies(1): >>nickpp+821
◧◩◪◨⬒⬓⬔
83. nickpp+821[view] [source] [discussion] 2023-11-22 19:09:18
>>Mistle+4R
Is this a real problem model trainers actually face, or is it an imagined one? The Internet is already full of garbage: 90% of the unpleasantness of browsing these days is filtering through mounds and mounds of crap. Some is generated, some is hand-written, but it's still crap, full of errors and lies.

I would've imagined training sets were heavily curated and annotated. We already know how to solve this problem for training humans (or our kids would never learn anything useful), so I imagine we could solve it similarly for AIs.

In the end, if it's quality content, learning from it is beneficial, no matter who produced it. Garbage needs to be eliminated, and the distinction is made either by human trainers or by already-trained AIs. I have no idea how to train the latter, but I am no expert in this field, just like (I suspect) the author of that blog.

◧◩◪◨⬒⬓
84. mrfox3+gO1[view] [source] [discussion] 2023-11-22 23:20:38
>>pauldd+as
I'm not saying it's 50/50.

But future signees are influenced by previous signees.

Acting in good faith is different from bias.

◧◩◪◨
85. dimitr+7q2[view] [source] [discussion] 2023-11-23 03:43:12
>>acchow+Z6
Have you ever worked with someone who treats their work as their life? They are borderline psychopaths. As if a health condition or accident would stop them; they'll be taking work calls from the hospital bed.
◧◩◪◨
86. osigur+Tv2[view] [source] [discussion] 2023-11-23 04:39:29
>>catapa+F4
A company is essentially an optimization problem, meant to minimize or maximize some set of metrics. Usually a company's goal is simply to maximize NPV, but in OpenAI's case the goal is to maximize AI progress while minimizing harm.

"Failure" in this context essentially means arriving at a materially suboptimal outcome. Leaders in this situation can easily be considered "irreplaceable", particularly in the early stages, as decisions are incredibly impactful.

87. scooke+WT2[view] [source] 2023-11-23 08:58:33
>>jafitc+(OP)
Many are still going to use this; few will bother to ponder and break the event down like this.
◧◩◪◨
88. cbeach+z53[view] [source] [discussion] 2023-11-23 11:09:05
>>hn_thr+zl
Perhaps you're not aware. Living in Beijing is not equivalent to "once eating a fortune cookie"

> it seems that Helen was picked by Holden to take his seat.

So you can only speculate as to how she got the seat. Which is exactly my point. We can only speculate. And it's a question worth asking, because governance of America's most important AI company is a very important topic right now.

◧◩◪◨⬒⬓
89. makewo+164[view] [source] [discussion] 2023-11-23 17:54:09
>>nickpp+gi
The value of a creation cannot be solely judged by its output. It's hard to explain, it's better to intuit it.
◧◩◪◨⬒⬓⬔⧯
90. makewo+q64[view] [source] [discussion] 2023-11-23 17:56:26
>>nickpp+WN
If you think humanity being lost is acceptable, then it's hard to discuss anything else on this topic.
replies(1): >>nickpp+Hf4
◧◩◪◨⬒⬓⬔⧯▣
91. nickpp+Hf4[view] [source] [discussion] 2023-11-23 18:44:04
>>makewo+q64
> you think humanity being lost is acceptable

I never said that.

[go to top]