zlacker

[return to "A Timeline of the OpenAI Board"]
1. upward+A6[view] [source] 2023-11-19 08:44:50
>>prawn+(OP)
> they asked: who on earth are Tasha McCauley and Helen Toner?

As a prominent researcher in AI safety (I discovered prompt injection) I should explain that Helen Toner is a big name in the AI safety community - she’s one of the top 20 most respected people in our community, like Rohin Shah.

The “who on earth” question is a good question about Tasha. But grouping Helen in with Tasha is just sexist. By analogy, Tasha is like Kimbal Musk, whereas Helen is like Tom Mueller.

Tasha seems unqualified but Helen is extremely qualified. Grouping them together is sexist and wrong.

◧◩
2. huyter+Ng[view] [source] 2023-11-19 10:19:15
>>upward+A6
They are being grouped together because they are the only two on the board with no qualifications. What is an AI safety commission? Tell engineers to make sure the AI is not bigoted?
◧◩◪
3. upward+nl[view] [source] 2023-11-19 11:03:12
>>huyter+Ng
> Tell engineers to make sure the AI is not bigoted?

That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.

AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.

For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.

For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:

https://doras.dcu.ie/25405/2/Catalytic%20nuclear%20war.pdf

◧◩◪◨
4. huyter+Qm[view] [source] 2023-11-19 11:17:36
>>upward+nl
So completely useless then since we are nowhere near that paradigm.
◧◩◪◨⬒
5. margal+6Q[view] [source] 2023-11-19 15:17:34
>>huyter+Qm
It's an industry/field of study that a small group of people with a lot of money think would be neat if it existed and they could get paid for, so they willed it into existence.

It has about as much real world applicability as those people who charge money for their classes on how to trade crypto. Or maybe "how to make your own cryptocurrency".

Not only does current AI not have that ability, it's not clear that AI with relevant capabilities will ever be created.

IMO it's born out of a generation having grown up on "Ghost in the Shell" imagining that if an intelligence exists and is running on silicon, it can magically hack and exist inside every connected device on earth. But "we can't prove that won't happen".

[go to top]