zlacker

[parent] [thread] 7 comments
1. huyter+(OP)[view] [source] 2023-11-19 10:19:15
They are being grouped together because they are the only two on the board with no qualifications. What is an AI safety commission? Tell engineers to make sure the AI is not bigoted?
replies(1): >>upward+A4
2. upward+A4[view] [source] 2023-11-19 11:03:12
>>huyter+(OP)
> Tell engineers to make sure the AI is not bigoted?

That’s more the domain of “AI ethics” which I guess is cool but I personally think is much much much less important than AI safety.

AI safety is concerned with preventing human extinction due to (for example) AI causing accidental war or accidental escalation.

For example, making sure that AI won’t turn a heated conventional war into a nuclear war by being used for military intelligence analysis (writing summaries of the current status of a war) and then incorrectly saying that the other side is preparing for a nuclear first strike -- due to the AI being prone to hallucination, or to prompt injection or adversarial examples which can be injected by 3rd-party terrorists.

For more information on this topic, you can reference the recent paper ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states:

https://doras.dcu.ie/25405/2/Catalytic%20nuclear%20war.pdf

replies(1): >>huyter+36
◧◩
3. huyter+36[view] [source] [discussion] 2023-11-19 11:17:36
>>upward+A4
So completely useless then since we are nowhere near that paradigm.
replies(2): >>tg180+29 >>margal+jz
◧◩◪
4. tg180+29[view] [source] [discussion] 2023-11-19 11:49:19
>>huyter+36
The literature is full of references to the application of the precautionary principle to the development of AI.

https://www.researchgate.net/publication/331744706_Precautio...

https://www.researchgate.net/publication/371166526_Regulatin...

https://link.springer.com/article/10.1007/s11569-020-00373-5

https://www.aei.org/technology-and-innovation/treading-caref...

https://itif.org/publications/2019/02/04/ten-ways-precaution...

It's clear what a branch of OpenAI thinks about this ...stuff..., they're making a career out of it. I agree with you!

◧◩◪
5. margal+jz[view] [source] [discussion] 2023-11-19 15:17:34
>>huyter+36
It's an industry/field of study that a small group of people with a lot of money think would be neat if it existed and they could get paid for, so they willed it into existence.

It has about as much real world applicability as those people who charge money for their classes on how to trade crypto. Or maybe "how to make your own cryptocurrency".

Not only does current AI not have that ability, it's not clear that AI with relevant capabilities will ever be created.

IMO it's born out of a generation having grown up on "Ghost in the Shell" imagining that if an intelligence exists and is running on silicon, it can magically hack and exist inside every connected device on earth. But "we can't prove that won't happen".

replies(1): >>vharuc+VO
◧◩◪◨
6. vharuc+VO[view] [source] [discussion] 2023-11-19 16:42:27
>>margal+jz
The hypotheticals explored in the article linked by upwardbound don't deal with an AI acting independently. They detail what could be soon possible for small terrorist groups: flooding social and news media with false information, images, or videos that imply one or more states are planning or about to use nuclear weapons. Responses to suspected nuclear launches have to be swift (article says 3 minutes), so tainting the data at a massive scale using AI would increase the chance of an actual launch.

The methods behind the different scenarios - disinformation, false-flagging, impersonation, stoking fear, exploiting the tools used to make the decisions - aren't new. States have all the capability to do them right now, without AI. But if a state did so, they would face annihilation if anyone found out what they were doing. And the manpower needed to run a large-scale disinformation campaign means a leak is pretty likely. So it's not worth it.

But, with AI, a small terrorist group could do it. And it'd be hard to know which ones were planning to, because they'd only need to buy the same hardware as any other small tech company.

(I hope I've summarized the article well enough.)

replies(2): >>margal+yG2 >>upward+Pe3
◧◩◪◨⬒
7. margal+yG2[view] [source] [discussion] 2023-11-20 02:36:35
>>vharuc+VO
> But if a state did so, they would face annihilation if anyone found out what they were doing.

Like what happened to China after they released Tiktok, or what happened to Russia after they used their troll farms to affect public sentiment surrounding US elections?

"Flooding social media" isn't something difficult to do right now, with far below state-level resources. AIs don't come with built-in magical account-creation tools nor magical rate-limiter-removal tools. What changes with AI is the quality of the message that's crafted, nothing more.

No military uses tweets to determine if it has been nuked. AI doesn't provide a new vector to cause a nuclear war.

◧◩◪◨⬒
8. upward+Pe3[view] [source] [discussion] 2023-11-20 06:58:32
>>vharuc+VO
Great summary of several key points from the article, yes! If you’d like to check out other avenues by which AI could lead to war, check out the papers linked to from this working group I’m a part of callers DISARM:SIMC4: https://simc4.org
[go to top]