zlacker

[parent] [thread] 16 comments
1. idontw+(OP)[view] [source] 2023-11-20 06:50:53
How does it actually kill a person? When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?
replies(3): >>dragon+M1 >>upward+R3 >>grey-a+I9
2. dragon+M1[view] [source] 2023-11-20 07:03:34
>>idontw+(OP)
> When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?

When someone runs a model in a reasonably durable housing with a battery?

(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)

replies(1): >>idontw+f3
3. idontw+f3[view] [source] [discussion] 2023-11-20 07:13:45
>>dragon+M1
But my point is: what is it actually doing to reach out and touch someone in the doomsday scenario?
replies(2): >>LordDr+h6 >>AuryGl+28
4. upward+R3[view] [source] 2023-11-20 07:16:46
>>idontw+(OP)
One route is if AI (not through malice but simply through incompetence) plays a part in a terrorist plan to trick the US and China or US and Russia into fighting an unwanted nuclear war. A working group I’m a part of, DISARM:SIMC4, has a lot of papers about this here: https://simc4.org
replies(2): >>justco+59 >>hurrye+Z9
5. LordDr+h6[view] [source] [discussion] 2023-11-20 07:32:39
>>idontw+f3
I mean, the clichéd answer is "when it figures out how to override the nuclear launch process". And while that cliché might have a certain degree of unrealism, it would certainly be possible for a system with access to arbitrary compute power, specifically trained to impersonate human personas, to use social engineering to precipitate WW3.

And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you and create things that combine into a plague worse than COVID. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.

Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.

Now, granted, none of these individual scenarios have a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.

replies(1): >>idontw+G9
6. AuryGl+28[view] [source] [discussion] 2023-11-20 07:45:14
>>idontw+f3
Nukes, power grids, planes, blackmail, etc. Surely you’ve seen plenty of media over the years that’s explored this.
replies(1): >>idontw+pa
7. justco+59[view] [source] [discussion] 2023-11-20 07:51:45
>>upward+R3
so the plot of WarGames?
replies(1): >>upward+pd
8. idontw+G9[view] [source] [discussion] 2023-11-20 07:55:13
>>LordDr+h6
How can it call one of those labs and place an order for the apocalypse when I can't right now?

Also, about the smart-home devices: if a current iPhone can't run Siri locally, how is a Roomba supposed to run an AGI?

replies(1): >>LordDr+On2
9. grey-a+I9[view] [source] 2023-11-20 07:55:37
>>idontw+(OP)
The network is the computer.

If you live in a city right now, there are millions of networked computers that humans depend on in their everyday life and do not want to turn off. Many of those computers keep humans alive (grid control, traffic control, comms, hospitals, etc.). Some are actual robotic killing machines, but most have other purposes. Hardly any are air-gapped nowadays, and all our security assumes the network nodes have no agency.

A superintelligence residing in that network would be very difficult to kill and could very easily kill lots of people (by destroying a dam, for example). However, that sort of crude threat is unlikely to be a problem. There are lots of potentially bad scenarios, though, many of them involving the wrong sort of dictator getting control of such an intelligence. There are legitimate concerns here IMO.

10. hurrye+Z9[view] [source] [discussion] 2023-11-20 07:57:21
>>upward+R3
Since you work on this, do you think leaders will wait for confirmation of actual nuclear detonations, maybe on TV, before believing that a massive attack was launched?
replies(1): >>upward+Yc
11. idontw+pa[view] [source] [discussion] 2023-11-20 07:59:56
>>AuryGl+28
What is “nukes” though? Like the missiles in silos that could have been networked decades ago but still require mechanical keys in order to fire? Like is it just making phone calls pretending to be the president and everyone down the line says “ok let’s destroy the world”?
replies(1): >>AuryGl+Vt6
12. upward+Yc[view] [source] [discussion] 2023-11-20 08:09:34
>>hurrye+Z9
According to current nuclear doctrine, no, they won’t wait. The current doctrine is called Launch On Warning, which means you retaliate immediately after receiving the first indications of incoming missiles.

This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.

Decide Under Attack has been shown by game theory to provide deterrence just as strong as Launch On Warning's, while having a much, much lower chance of accidental or terrorist-triggered war.

Here is the paper that introduced Decide Under Attack:

A Commonsense Policy for Avoiding a Disastrous Nuclear Decision, by Admiral James A. Winnefeld, Jr.

https://carnegieendowment.org/2019/09/10/commonsense-policy-...
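
For intuition, here is a toy sketch of that game-theory claim (my own illustration, not a model from the paper; every probability below is an invented assumption). The point it shows: the attacker faces the same retaliation under either doctrine, but only Launch On Warning lets a false alarm trigger a launch.

    # Toy comparison of the two doctrines. All probabilities are
    # invented for illustration; this is not a model from the paper.
    P_FALSE_ALARM = 0.01     # assumed chance per crisis of a convincing false warning
    P_SECOND_STRIKE = 0.98   # assumed chance retaliatory forces survive a real first strike

    def deterrence(doctrine: str) -> float:
        """Probability an attacker is destroyed in return after a REAL strike.

        Identical under both doctrines: real detonations are confirmed
        within minutes, and surviving forces retaliate either way.
        """
        return P_SECOND_STRIKE

    def accident_risk(doctrine: str) -> float:
        """Probability a FALSE alarm escalates into an actual launch."""
        if doctrine == "launch_on_warning":
            return P_FALSE_ALARM   # warning alone triggers retaliation
        if doctrine == "decide_under_attack":
            return 0.0             # launch waits for confirmed detonations
        raise ValueError(doctrine)

    for d in ("launch_on_warning", "decide_under_attack"):
        print(f"{d}: deterrence={deterrence(d):.2f}, "
              f"accident risk={accident_risk(d):.2f}")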

replies(1): >>hurrye+Ud
13. upward+pd[view] [source] [discussion] 2023-11-20 08:11:22
>>justco+59
Exactly. The plot of WarGames closely mirrors a true incident that occurred in 1979, four years before the release of the film.

https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...

    In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.
14. hurrye+Ud[view] [source] [discussion] 2023-11-20 08:13:38
>>upward+Yc
I know about the doctrine.

Yet every time there was a "real" attack, somehow the doctrine was not followed (in the US or the USSR).

It seems to me that the doctrine is not actually followed because leaders understand the consequences and wait for very solid confirmation?

The Soviets also had the Perimeter system, which was likewise supposed to relieve the pressure for an immediate response.

replies(1): >>upward+Rj
15. upward+Rj[view] [source] [discussion] 2023-11-20 08:39:32
>>hurrye+Ud
Agree wholeheartedly. Human skepticism of computer systems has saved our species from nuclear extinction multiple times (the Stanislav Petrov incident, the 1979 NORAD training-tape incident, etc.).

The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.

The tendency of decision makers (including soldiers) to have higher trust in smarter-seeming systems is called Automation Bias.

> The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).

https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...

Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile-warning computer he operated if it had generated paragraphs of convincing text and several infographics as hallucinated “evidence” that the supposed inbound strike was real. He himself later recollected that he felt the chances of the strike being real were 50-50, an even gamble. In that moral quandary he struggled for several minutes until, finally, he went with his gut and countermanded the system, which meant disobeying the Soviet military’s procedures and could have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer’s rhetoric and graphics could have tipped this to 51-49 and thus caused our extinction.
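
To see how thin that margin is, here is a minimal threshold sketch (my own toy decision rule, not DISARM:SIMC4 analysis; the numbers are assumptions based only on Petrov's "even gamble" recollection):

    # Toy threshold model of the Petrov decision. Numbers are invented.
    PRIOR_REAL = 0.50        # Petrov's recollection: an even gamble
    LAUNCH_THRESHOLD = 0.50  # assume the operator escalates above this

    def operator_escalates(persuasion_boost: float) -> bool:
        """Does more convincing computer output tip the decision?"""
        perceived = PRIOR_REAL + persuasion_boost
        return perceived > LAUNCH_THRESHOLD

    print(operator_escalates(0.00))  # False: at 50-50, gut feeling can win
    print(operator_escalates(0.01))  # True: at 51-49, the machine is believed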

16. LordDr+On2[view] [source] [discussion] 2023-11-20 19:02:54
>>idontw+G9
You could, if you were educated enough in DNA synthesis and customer-service manipulation to do so, and smart enough to figure out a novel RNA sequence based on publicly available data. I'm not; you're not. A superintelligence would be. The base assumption is that any superintelligence is smarter than us and can solve problems we can't. AI can already come up with novel chemical weapons thousands of times faster than we can [1], and it's way dumber than we are.

And the Roomba isn't running the model; it's just storing a portion of the model for backup, or running only a fraction of it (very different from an iPhone trying to run the whole model). Instead, the proper model is running on the best computer from the Russian botnet it purchased, using crypto it scammed from a Discord NFT server.

Once again, the premise is that AI is smarter than you or anyone else, and way faster. It can solve any problem that a human like me can figure out a solution for in 30 seconds of spitballing, and it can be an expert in everything.

[1] https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...

17. AuryGl+Vt6[view] [source] [discussion] 2023-11-21 19:16:31
>>idontw+pa
Perhaps by emailing members of whatever terrorist group the exact location, codes, and personnel they would need to seize a nuke themselves?

I'm not actively worried about it, but let's not pretend something with all of the information in the world and great intelligence couldn't pull it off.
