People on Twitter are making degrading memes of her and posting weird, creepy, harassing comments like this: “Why does Helen Toner have shelf tits” https://x.com/smolfeelshaver/status/1726073304136450511?s=46
Search for “Helen Toner” on Twitter and you will see she is being singled out for bullying by a bunch of weird creepy white dudes who I guess apparently work in tech.
> I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.
Normally you’d be right. In the specific case of OpenAI, however, their charter requires safety to be the number one priority of their directors, higher than making money or providing stable employment or anything else that a large company normally prioritizes. This is from OpenAI’s site:
“each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial” https://openai.com/our-structure
She is extremely knowledgeable about the details of all the ways that AI can cause catastrophe and also she’s one of the few people with the ear of the top leaders in the DoD (similar to Eric Schmidt in this regard).
Basically, she’s one of a very small number of people who are credibly reducing the chances that AI could cause a war.
If you’d like to learn about or become part of the AI safety community, a good way to start is to check out Rohin Shah’s Alignment Newsletter. http://rohinshah.com/alignment-newsletter/
https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en
https://futureshub.anu.edu.au/experts/helen-toner
And she setup the Center for Security and Emerging Technology at Georgetown so also has some degree of executive experience.
It’s convulated
https://80000hours.org/podcast/episodes/helen-toner-on-secur...
How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.https://github.com/OWASP/www-project-top-10-for-large-langua...
In the citations:
14. Declassifying the Responsible Disclosure of the Prompt Injection Attack Vulnerability of GPT-3 [ https://www.preamble.com/prompt-injection-a-critical-vulnera... ]: Preamble; earliest disclosure of Prompt Injection
That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.
AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.
For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.
For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:
That's... not very much? There are 4,400 colonels in the USAF: https://diversity.defense.gov/LinkClick.aspx?fileticket=gxMV...
Edit: This one for example: https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...
https://www.researchgate.net/publication/331744706_Precautio...
https://www.researchgate.net/publication/371166526_Regulatin...
https://link.springer.com/article/10.1007/s11569-020-00373-5
https://www.aei.org/technology-and-innovation/treading-caref...
https://itif.org/publications/2019/02/04/ten-ways-precaution...
It's clear what a branch of OpenAI thinks about this ...stuff..., they're making a career out of it. I agree with you!
[1] https://www.therobotreport.com/navii-autonomous-retail-robot...
https://smallbusiness.chron.com/difference-between-public-pr...
https://www.irs.gov/charities-non-profits/eo-operational-req...
"Under the tax law, a section 501(c)(3) organization is presumed to be a private foundation unless it requests, and qualifies for, a ruling or determination as a public charity. Organizations that qualify for public charity status include churches, schools, hospitals, medical research organizations, publicly-supported organizations (i.e., organizations that receive a specified portion of their total support from public sources), and certain supporting organizations."
Edit: Looking at the IRS determination letter from November 3, 2016, OpenAI was organized as a public charity under 170(b)(1)(A)(vi) "Organizations Receiving Substantial Support from a Governmental Unit or from the General Public"
Their last 990 form, filed November 15, 2021, for the calendar year 2020, shows total support over the past 5 years (2016-2020) of $133M, only $41M of which was individual donations of over 2% ($2.6M) so they easily met the 5-year public support test.
It does seem like the whole organization was "born in conflict", starting with Elon and Sam.
Then Reid resigned because of a COI, someone whose wife helped start the "offshoot" Anthropic, and then there was Elon's employee and mother of his children, etc.
I was going to say that some reporters weren't doing their jobs this whole time, but actually there are good links in the article, like this
https://www.bloomberg.com/news/articles/2023-07-13/republica...
It was reported that 3 directors left in 2023.
But yeah I agree it's weird that none of the breathless and short articles even linked to that one!!! (as far as I remember) That's a huge factor in what happened.