zlacker

[parent] [thread] 8 comments
1. thepas+(OP)[view] [source] 2023-11-18 03:32:13
For anybody, like me, who was wondering who is actually on their board:

>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

Sam is gone, Greg is gone, this leaves: Ilya, Adam, Tasha, and Helen.

Adam: https://en.wikipedia.org/wiki/Adam_D'Angelo?useskin=vector

Tasha: https://www.usmagazine.com/celebrity-news/news/joseph-gordon... (sorry for the very low-quality link; it's the best thing I could find explaining who this person is. There isn't much info on her, or maybe Google results are being polluted by this news?)

Helen: https://cset.georgetown.edu/staff/helen-toner/

replies(1): >>aleph_+p3
2. aleph_+p3[view] [source] 2023-11-18 03:58:47
>>thepas+(OP)
Adam D'Angelo is well known as the founder of Quora (and for presiding over its demise).
replies(1): >>upward+ou
3. upward+ou[view] [source] [discussion] 2023-11-18 07:43:22
>>aleph_+p3
Helen Toner is well-known as well, specifically to those of us who work in AI safety. She is known for being one of the most important people working to halt and reverse any sort of "AI arms race" between the US & China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy. She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.

Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...

She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who shares similar goals and is an advisor to Biden. Comparing the two, Horvitz is better-connected but Toner is more prolific; overall they have roughly equal impact.

replies(1): >>google+YJ
4. google+YJ[view] [source] [discussion] 2023-11-18 10:02:10
>>upward+ou
She has an h-index of 8 :/ For those who are unaware, that's tiny in pretty much every field. AI papers are getting an infinite number of citations nowadays because the field is exploding - just goes to show no one doing actual AI research cares about her work.

Anyway, the idea that the Chinese military or leadership will actually sacrifice a potential advantage in the name of AI safety is absurd.
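For anyone unfamiliar with the metric: the h-index is just the largest h such that the author has h papers with at least h citations each. A minimal sketch (the citation counts below are made up purely for illustration):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; the h-index is the last rank i
    # where the i-th most-cited paper still has >= i citations.
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Made-up example: 8 papers with >= 8 citations each -> h-index of 8
print(h_index([120, 45, 30, 22, 15, 11, 9, 8, 3, 1]))  # -> 8
```

So an h-index of 8 means only 8 papers have cracked 8 citations each, which in a fast-moving, citation-heavy field reads as a small research footprint.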

replies(1): >>upward+eL
5. upward+eL[view] [source] [discussion] 2023-11-18 10:12:09
>>google+YJ
> She has an h-index of 8 :/ that's tiny

Agreed. She’s not famous for her publications. She’s famous (and intimidating) for being a “power broker” or whatever the term is for the person who is participating in off-the-record one-on-one meetings with military generals.

> Anyway, the idea that the Chinese military or leadership will actually sacrifice a potential advantage in the name of AI safety is absurd.

The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating. There is no advantage to be gained. Any nation that does this is shooting themselves in the foot, or the heart.

replies(3): >>darker+U31 >>bluebl+i41 >>google+WS2
6. darker+U31[view] [source] [discussion] 2023-11-18 12:32:53
>>upward+eL
Terrifying that people need this explained to them, but they absolutely do, and I'm glad she is doing the work.
7. bluebl+i41[view] [source] [discussion] 2023-11-18 12:34:45
>>upward+eL
> The point of her work (and mine and Dr. Horvitz’s as well) is to make it clear that putting AI in charge of nuclear weapons and other existential questions is self-defeating

Is part of the advocacy convincing nation-states that an AI arms race is not like a nuclear arms race, which amounts to a stalemate?

What's the best place for a non-expert to read about this?

replies(1): >>upward+xb8
8. google+WS2[view] [source] [discussion] 2023-11-18 23:13:01
>>upward+eL
She should be more famous now for being an incompetent board member lol - see news about them talking to Sam to bring him back as CEO.
9. upward+xb8[view] [source] [discussion] 2023-11-20 10:59:03
>>bluebl+i41
> What's the best place for a non-expert to read about this?

Thank you for your interest! :) I'd recommend skimming some of the papers cited by this working group I'm in called DISARM:SIMC4; we've tried to collect the most relevant papers here in one place:

https://simc4.org

In response to your question:

At a high level, the academic consensus is that combining AI with nuclear command & control does not increase deterrence, and yet it increases the risk of accidents, and increases the chances that terrorists can "catalyze" a great-power conflict.

So, there is no upside to be had, and there's significant downside, both in terms of increasing accidents and empowering fundamentalist terrorists (e.g. the Islamic State) who would be happy to seize a chance to wipe the US, China, and Russia all off the map and create a "clean slate" for a new Caliphate to rule the world's ashes.

There is no reason at all to connect AI to NC3 except that AI is "the shiny new thing". Not all new things are useful in a given application.
