zlacker

[return to "Microsoft was blindsided by OpenAI's ouster of CEO Sam Altman"]
1. ryanSr+ab[view] [source] 2023-11-18 00:34:29
>>aarond+(OP)
Microsoft invested $10b and owns 49% of OpenAI. Yet they don’t have a board seat? That’s genuinely insane, and seems like a huge issue.
◧◩
2. xxpor+Ab[view] [source] 2023-11-18 00:36:24
>>ryanSr+ab
They invested in the for-profit, not the non-profit. The non-profit controls the for-profit.
◧◩◪
3. samspe+Wc[view] [source] 2023-11-18 00:43:32
>>xxpor+Ab
There are now 4 people left in the OpenAI non-profit board, after the ouster of both Sam and Greg today. 3 of the 4 remaining are virtual unknowns, and they control the fate of OpenAI, both the non-profit and the for-profit. Insane.
◧◩◪◨
4. thepas+aC[view] [source] 2023-11-18 03:32:13
>>samspe+Wc
For anybody, like me, who was wondering who is actually on their board:

>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

Sam is gone, Greg is gone, this leaves: Ilya, Adam, Tasha, and Helen.

Adam: https://en.wikipedia.org/wiki/Adam_D'Angelo?useskin=vector

Tasha: https://www.usmagazine.com/celebrity-news/news/joseph-gordon... (sorry for the very low-quality link; it's the best thing I could find explaining who this person is. There isn't a lot of info on her, or maybe Google results are getting polluted by this news?)

Helen: https://cset.georgetown.edu/staff/helen-toner/

◧◩◪◨⬒
5. aleph_+zF[view] [source] 2023-11-18 03:58:47
>>thepas+aC
Adam D'Angelo is well-known as the founder of Quora (and for Quora's decline).
◧◩◪◨⬒⬓
6. upward+y61[view] [source] 2023-11-18 07:43:22
>>aleph_+zF
Helen Toner is well-known as well, specifically to those of us who work in AI safety. She is known as one of the most important people working to halt and reverse any sort of "AI arms race" between the US and China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy. She is well-connected with Pentagon leaders, who trust her input. She is also one of the hardest-working of the West's analysts in her efforts to understand and connect with the Chinese side: at one point she uprooted her life to live in Beijing in order to meet people in the budding Chinese AI safety community.

Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...

She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.

◧◩◪◨⬒⬓⬔
7. google+8m1[view] [source] 2023-11-18 10:02:10
>>upward+y61
She has an h-index of 8 :/ that's tiny, for those who are unaware, in pretty much every field. AI papers are getting huge numbers of citations nowadays because the field is exploding - just goes to show no one doing actual AI research cares about her work

Anyway, the idea that the Chinese military or leadership will actually sacrifice a potential advantage in the name of AI safety is absurd.
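For readers unfamiliar with the metric: a scholar's h-index is the largest h such that h of their papers each have at least h citations. A minimal sketch of the computation (the example citation counts are made up for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    # Walk down the sorted list; paper i (1-based) counts while it has >= i citations.
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical record: 8 papers with at least 8 citations each.
print(h_index([50, 40, 30, 20, 15, 12, 10, 8, 3, 2]))  # → 8
```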

◧◩◪◨⬒⬓⬔⧯
8. upward+on1[view] [source] 2023-11-18 10:12:09
>>google+8m1
> She has an h index of 8 :/ thats tiny

Agreed. She’s not famous for her publications. She’s famous (and intimidating) for being a “power broker” or whatever the term is for the person who is participating in off-the-record one-on-one meetings with military generals.

> Anyway, the idea that the Chinese military or leadership actually will sacrifice a potential advantage in the name of ai safety is absurd.

The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating. There is no advantage to be gained. Any nation that does this is shooting themselves in the foot, or the heart.

◧◩◪◨⬒⬓⬔⧯▣
9. bluebl+sG1[view] [source] 2023-11-18 12:34:45
>>upward+on1
> The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating

Is part of the advocacy convincing nation-states that an AI arms race is not like a nuclear arms race, which amounts to a stalemate?

What's the best place for a non-expert to read about this?
