zlacker

A Timeline of the OpenAI Board

submitted by prawn+(OP) on 2023-11-19 07:39:23 | 193 points 137 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only show all posts
◧◩◪
14. upward+p9[view] [source] [discussion] 2023-11-19 09:13:02
>>peyton+e8
The article refers to Tech Twitter, and that’s where the sexism is.

People on Twitter are making degrading memes of her and posting weird, creepy, harassing comments like this: “Why does Helen Toner have shelf tits” https://x.com/smolfeelshaver/status/1726073304136450511?s=46

Search for “Helen Toner” on Twitter and you will see she is being singled out for bullying by a bunch of weird creepy white dudes who I guess apparently work in tech.

> I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.

Normally you’d be right. In the specific case of OpenAI, however, their charter requires safety to be the number one priority of their directors, higher than making money or providing stable employment or anything else that a large company normally prioritizes. This is from OpenAI’s site:

“each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial” https://openai.com/our-structure

◧◩◪
24. upward+Fa[view] [source] [discussion] 2023-11-19 09:24:50
>>siva7+I9
You’re definitely correct that her CV doesn’t adequately capture her reputation. I’ll put it this way, I meet with a lot of powerful people and I was equally nervous when I met with her as when I met with three officers of the US Air Force. She holds comparable influence to a USAF Colonel.

She is extremely knowledgeable about the details of all the ways that AI can cause catastrophe and also she’s one of the few people with the ear of the top leaders in the DoD (similar to Eric Schmidt in this regard).

Basically, she’s one of a very small number of people who are credibly reducing the chances that AI could cause a war.

If you’d like to learn about or become part of the AI safety community, a good way to start is to check out Rohin Shah’s Alignment Newsletter. http://rohinshah.com/alignment-newsletter/

◧◩◪
31. threes+Kc[view] [source] [discussion] 2023-11-19 09:42:17
>>siva7+I9
Her background looks pretty solid to me given that AI is a recent field:

https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en

https://futureshub.anu.edu.au/experts/helen-toner

And she setup the Center for Security and Emerging Technology at Georgetown so also has some degree of executive experience.

◧◩◪◨
36. anupam+cf[view] [source] [discussion] 2023-11-19 10:04:07
>>YetAno+Pb
Interesting that Sam is also an investor in Quora: https://blog.samaltman.com/quora

It’s convulated

◧◩◪
44. upward+Cj[view] [source] [discussion] 2023-11-19 10:46:44
>>garden+9g
She co-founded an entire think tank. A highly-respected one at that; CSET is up there as one of the top 5 think tanks on NatSec issues.

https://80000hours.org/podcast/episodes/helen-toner-on-secur...

    How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.
◧◩◪
50. upward+Ak[view] [source] [discussion] 2023-11-19 10:54:44
>>hhjink+Nj
Good question. We were the first team to demonstrate that this type of vulnerability exists in LLMs. We then made an immediate responsible disclosure to OpenAI, which is confirmed as the first disclosure of its kind by OWASP:

https://github.com/OWASP/www-project-top-10-for-large-langua...

In the citations:

14. Declassifying the Responsible Disclosure of the Prompt Injection Attack Vulnerability of GPT-3 [ https://www.preamble.com/prompt-injection-a-critical-vulnera... ]: Preamble; earliest disclosure of Prompt Injection

◧◩◪
53. upward+nl[view] [source] [discussion] 2023-11-19 11:03:12
>>huyter+Ng
> Tell engineers to make sure the AI is not bigoted?

That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.

AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.

For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.

For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:

https://doras.dcu.ie/25405/2/Catalytic%20nuclear%20war.pdf

◧◩◪◨
54. resolu+Ul[view] [source] [discussion] 2023-11-19 11:09:01
>>upward+Fa
> She holds comparable influence to a USAF Colonel.

That's... not very much? There are 4,400 colonels in the USAF: https://diversity.defense.gov/LinkClick.aspx?fileticket=gxMV...

◧◩
62. layer8+Xo[view] [source] [discussion] 2023-11-19 11:41:32
>>userna+E9
I don’t remember which media publication, but at least one of the ones posted on HN on Friday/Saturday noted that three board members had resigned this year, and it was also mentioned in related HN threads that this is probably what has made Friday's vote possible in the first place.

Edit: This one for example: https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...

◧◩◪◨⬒
65. tg180+Pp[view] [source] [discussion] 2023-11-19 11:49:19
>>huyter+Qm
The literature is full of references to the application of the precautionary principle to the development of AI.

https://www.researchgate.net/publication/331744706_Precautio...

https://www.researchgate.net/publication/371166526_Regulatin...

https://link.springer.com/article/10.1007/s11569-020-00373-5

https://www.aei.org/technology-and-innovation/treading-caref...

https://itif.org/publications/2019/02/04/ten-ways-precaution...

It's clear what a branch of OpenAI thinks about this ...stuff..., they're making a career out of it. I agree with you!

◧◩◪
67. Sunhol+vq[view] [source] [discussion] 2023-11-19 11:55:37
>>lordna+bo
She cofounded Fellow Robots, which shipped a product scanner on wheels to 11 stores[1] and then quietly folded up (homepage is now parked by GoDaddy).

[1] https://www.therobotreport.com/navii-autonomous-retail-robot...

◧◩◪
79. baking+pC[view] [source] [discussion] 2023-11-19 13:40:05
>>patric+Gg
You are thinking of a public charity. Not all non-profits are public charities.

https://smallbusiness.chron.com/difference-between-public-pr...

https://www.irs.gov/charities-non-profits/eo-operational-req...

"Under the tax law, a section 501(c)(3) organization is presumed to be a private foundation unless it requests, and qualifies for, a ruling or determination as a public charity. Organizations that qualify for public charity status include churches, schools, hospitals, medical research organizations, publicly-supported organizations (i.e., organizations that receive a specified portion of their total support from public sources), and certain supporting organizations."

Edit: Looking at the IRS determination letter from November 3, 2016, OpenAI was organized as a public charity under 170(b)(1)(A)(vi) "Organizations Receiving Substantial Support from a Governmental Unit or from the General Public"

Their last 990 form, filed November 15, 2021, for the calendar year 2020, shows total support over the past 5 years (2016-2020) of $133M, only $41M of which was individual donations of over 2% ($2.6M) so they easily met the 5-year public support test.

◧◩
102. chubot+U31[view] [source] [discussion] 2023-11-19 16:34:04
>>userna+E9
Yeah it definitely paints a picture of struggling to keep a board together because AI is "hot", and so many people have conflicts.

It does seem like the whole organization was "born in conflict", starting with Elon and Sam.

Then Reid resigned because of a COI, someone whose wife helped start the "offshoot" Anthropic, and then there was Elon's employee and mother of his children, etc.

I was going to say that some reporters weren't doing their jobs this whole time, but actually there are good links in the article, like this

https://www.bloomberg.com/news/articles/2023-07-13/republica...

It was reported that 3 directors left in 2023.

But yeah I agree it's weird that none of the breathless and short articles even linked to that one!!! (as far as I remember) That's a huge factor in what happened.

◧◩◪◨⬒⬓⬔
134. upward+Cv3[view] [source] [discussion] 2023-11-20 06:58:32
>>vharuc+I51
Great summary of several key points from the article, yes! If you’d like to check out other avenues by which AI could lead to war, check out the papers linked to from this working group I’m a part of callers DISARM:SIMC4: https://simc4.org
◧◩◪
137. gwern+v3r[view] [source] [discussion] 2023-11-27 16:19:07
>>chubot+U31
re Reid Hoffman: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...
[go to top]