As a prominent researcher in AI safety (I discovered prompt injection) I should explain that Helen Toner is a big name in the AI safety community - she’s one of the top 20 most respected people in our community, like Rohin Shah.
The “who on earth” question is a good question about Tasha. But grouping Helen in with Tasha is just sexist. By analogy, Tasha is like Kimbal Musk, whereas Helen is like Tom Mueller.
Tasha seems unqualified but Helen is extremely qualified. Grouping them together is sexist and wrong.
This seems to be an instance of “if you hear the dog whistle you’re the dog”
People on Twitter are making degrading memes of her and posting weird, creepy, harassing comments like this: “Why does Helen Toner have shelf tits” https://x.com/smolfeelshaver/status/1726073304136450511?s=46
Search for “Helen Toner” on Twitter and you will see she is being singled out for bullying by a bunch of weird creepy white dudes who I guess apparently work in tech.
> I think the AI safety community is a little early and notoriety therein probably isn’t sufficient to qualify somebody to direct an $80 bn company.
Normally you’d be right. In the specific case of OpenAI, however, their charter requires safety to be the number one priority of their directors, higher than making money or providing stable employment or anything else that a large company normally prioritizes. This is from OpenAI’s site:
“each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial” https://openai.com/our-structure
She is extremely knowledgeable about the details of all the ways that AI can cause catastrophe and also she’s one of the few people with the ear of the top leaders in the DoD (similar to Eric Schmidt in this regard).
Basically, she’s one of a very small number of people who are credibly reducing the chances that AI could cause a war.
If you’d like to learn about or become part of the AI safety community, a good way to start is to check out Rohin Shah’s Alignment Newsletter. http://rohinshah.com/alignment-newsletter/
If you want to be the titan of an industry and do things that put you at the center of media attention, you have to expect comments of this kind and not be surprised when they happen. Whether you are a man, a woman or anybody else.
If you don't expect "not very nice" or ambivalent reactions from people, you are an amateur and you shouldn't be in the board of such a prominent company.
https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en
https://futureshub.anu.edu.au/experts/helen-toner
And she setup the Center for Security and Emerging Technology at Georgetown so also has some degree of executive experience.
Of course you have to be competent on the subject matter, work hard, iterate and have luck.
annnnd you just shredded your credibility with some casual bigotry
https://80000hours.org/podcast/episodes/helen-toner-on-secur...
How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.https://github.com/OWASP/www-project-top-10-for-large-langua...
In the citations:
14. Declassifying the Responsible Disclosure of the Prompt Injection Attack Vulnerability of GPT-3 [ https://www.preamble.com/prompt-injection-a-critical-vulnera... ]: Preamble; earliest disclosure of Prompt Injection
You might want to try a little more humility when making statements like that. You might think it bolsters your credibility but for many it makes you look out of touch and egotistical.
That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.
AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.
For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.
For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:
That's... not very much? There are 4,400 colonels in the USAF: https://diversity.defense.gov/LinkClick.aspx?fileticket=gxMV...
Seems like she's run a robotics company or something like that. Definitely someone in the tech business.
Not everyone on a board needs to have done the exact thing that the company does.
https://www.researchgate.net/publication/331744706_Precautio...
https://www.researchgate.net/publication/371166526_Regulatin...
https://link.springer.com/article/10.1007/s11569-020-00373-5
https://www.aei.org/technology-and-innovation/treading-caref...
https://itif.org/publications/2019/02/04/ten-ways-precaution...
It's clear what a branch of OpenAI thinks about this ...stuff..., they're making a career out of it. I agree with you!
[1] https://www.therobotreport.com/navii-autonomous-retail-robot...
I'm not sure where I am on the creepy scale but I'm happy to hate on him because I really don't think he should be anywhere near that board. And yes, Helen Toner does have a claim. Not sure about the level of experience in the typical role of a board member but plenty of cred when it comes to the subject matter.
So these three plus Mira Murati make 4 for 4 hot women governing OpenAI. I'm not a Data Scientist but that's a pattern. Not one ugly woman who has a concept of AI governance? Not one single George Elliot-looking genius?
> weird creepy white dudes…
This is racism. And sexist. How do you know it’s white people or dudes?
You have two people left, we have no idea what he / she is, their work are not public outside of specific domain, and no Public / PR exposure to even anyone who follows tech closely.
Those two people we group them together. And they happened to be woman. ( At least we assumed their gender ). And we are now being called sexist? Seriously?
It has about as much real world applicability as those people who charge money for their classes on how to trade crypto. Or maybe "how to make your own cryptocurrency".
Not only does current AI not have that ability, it's not clear that AI with relevant capabilities will ever be created.
IMO it's born out of a generation having grown up on "Ghost in the Shell" imagining that if an intelligence exists and is running on silicon, it can magically hack and exist inside every connected device on earth. But "we can't prove that won't happen".
Women are socially allowed to use artificial means to greatly improve their appearance. I'm referring to makeup and expensive hair treatments. Women from upper-middle or upper classes have even more of an advantage in using these. So if you're a thin woman, unless you were unlucky enough to be born disfigured, you're a single afternoon away of looking like a movie star.
If Sam Altman was socially allowed to wear makeup and a wig, you'd call him a heartthrob.
Are you claiming that physical appearance has nothing to do with politics, or that we just shouldn't comment on it?
I think it's pretty obvious that the OpenAI men aren't too attractive by most standards, as opposed to say US presidents, who are mostly sexually attractive.
The methods behind the different scenarios - disinformation, false-flagging, impersonation, stoking fear, exploiting the tools used to make the decisions - aren't new. States have all the capability to do them right now, without AI. But if a state did so, they would face annihilation if anyone found out what they were doing. And the manpower needed to run a large-scale disinformation campaign means a leak is pretty likely. So it's not worth it.
But, with AI, a small terrorist group could do it. And it'd be hard to know which ones were planning to, because they'd only need to buy the same hardware as any other small tech company.
(I hope I've summarized the article well enough.)
No, it's not. They're grouped together because everyone knows who Sama, Greg Brockman, Ilya, and Adam D'Angelo (Quora founder / FB CTO) are, and maybe 5% knew who Helen and Tasha are. You linked to a rando twitter user making fun of her, but I've seen far more putting down Ilya for his hairline.
Like what happened to China after they released Tiktok, or what happened to Russia after they used their troll farms to affect public sentiment surrounding US elections?
"Flooding social media" isn't something difficult to do right now, with far below state-level resources. AIs don't come with built-in magical account-creation tools nor magical rate-limiter-removal tools. What changes with AI is the quality of the message that's crafted, nothing more.
No military uses tweets to determine if it has been nuked. AI doesn't provide a new vector to cause a nuclear war.
If so then 4chan had prior art, discovering prompt injections when they made Microsoft's Tay chatbot become a racist on twitter.