When someone runs a model in a reasonably durable housing with a battery?
(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)
And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you, and create things that combine into a plague worse than covid. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it a substrate to run on.
Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.
Now, granted, none of these individual scenarios has a high absolute likelihood. That said, even at a 10% (or even a 0.1%) chance of destroying all life, you should probably at least give it some thought.
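Just to make the stakes concrete, here's the back-of-the-envelope expected-value math in throwaway Python (the ~8 billion population is real enough; the probabilities are the made-up ones from above):

    population = 8e9  # roughly the current world population

    # the (made-up) probabilities from the comment above
    for p in (0.10, 0.001):
        print(f"{p:.1%} chance of extinction -> {p * population:,.0f} expected deaths")

Even at the 0.1% end that's an expected eight million deaths, which is why "give it some thought" seems like a low bar.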
Also about the smart home devices: if a current iPhone can’t run Siri locally then how is a Roomba supposed to run an AGI?
If you live in a city, there are millions of networked computers around you right now that humans depend on in their everyday lives and do not want to turn off. Many of those computers keep humans alive (grid control, traffic control, comms, hospitals, etc.). Some are actual robotic killing machines, but most have other purposes. Hardly any are air-gapped nowadays, and all our security assumes the network nodes have no agency.
A superintelligence residing in that network would be very difficult to kill and could very easily kill lots of people (destroy a dam, for example). However, that sort of crude threat is unlikely to be the problem. There are lots of potentially bad scenarios, many of them involving the wrong sort of dictator getting control of such an intelligence. There are legitimate concerns here, IMO.
This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.
Decide Under Attack has been shown by game-theoretic analysis to deter just as strongly as Launch On Warning, while having a much, much lower chance of accidental or terrorist-triggered war.
Here is the paper that introduced Decide Under Attack:
"A Commonsense Policy for Avoiding a Disastrous Nuclear Decision," Admiral James A. Winnefeld, Jr.
https://carnegieendowment.org/2019/09/10/commonsense-policy-...
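If you want the intuition behind the deterrence-equivalence claim without reading the paper, here's a toy Python model. The numbers and the structure are mine, purely illustrative, and much cruder than Winnefeld's actual analysis:

    # Key assumption: a survivable second-strike force (e.g. submarines).
    # Retaliation then arrives whether you launch on warning or after
    # riding out the attack, so the attacker's first-strike calculus
    # (i.e. deterrence) is identical under both doctrines. What differs
    # is whether a false alarm alone can start a war.

    P_FALSE_ALARM = 0.01  # assumed yearly chance of a convincing false warning

    def p_retaliation(doctrine: str) -> float:
        # Survivable forces retaliate either way; launch timing doesn't matter.
        return 1.0

    def p_accidental_war(doctrine: str) -> float:
        # Under Launch On Warning, a believed false alarm triggers launch;
        # under Decide Under Attack you wait for confirmed detonations,
        # so a false alarm by itself cannot.
        return P_FALSE_ALARM if doctrine == "LOW" else 0.0

    for name, code in [("Launch On Warning", "LOW"), ("Decide Under Attack", "DUA")]:
        print(f"{name}: p(retaliation)={p_retaliation(code)}, "
              f"p(accidental war)/yr={p_accidental_war(code)}")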
https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...
> In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.

Yet every time there was a "real" attack, somehow the doctrine was not followed (in the US or the USSR).
It seems to me that the doctrine is not actually followed because leaders understand the consequences and wait for very solid confirmation?
The Soviets also had the Perimeter system, which was likewise supposed to relieve the pressure for an immediate response.
The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.
The tendency of decision makers (including soldiers) to place higher trust in smarter-seeming systems is called Automation Bias.
> The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).
https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...
Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile-warning computer he operated if it had generated paragraphs of convincing text and several infographics as hallucinated "evidence" of the reality of the supposed inbound strike. He himself later recollected that he felt the chances of the strike being real were 50-50, an even gamble. Caught in that moral quandary, he struggled for several minutes until, finally, he went with his gut and countermanded the system, which meant disobeying the Soviet military's procedures and could have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer's rhetoric and graphics could have tipped that 50-50 to 51-49, and thus caused our extinction.
And the Roomba isn't running the model; it's just storing a portion of the model as backup, or running only a fraction of it (very different from an iPhone trying to run the whole model). The proper model, meanwhile, is running on the best computers in a Russian botnet it purchased with crypto it scammed out of a Discord NFT server.
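For scale, here's a toy sketch of what "every device holds a shard" works out to. The scheme and the numbers are mine, invented for illustration, not anyone's real design:

    import math

    WEIGHTS_BYTES = 1_000_000_000  # assume a ~1 GB model
    N_DEVICES = 100_000            # assume 100k compromised gadgets
    REPLICAS = 3                   # keep 3 copies of every slice

    # Each device stores only its slice(s). Wiping or patching any single
    # device destroys nothing, because every slice lives on 3 devices.
    slice_size = math.ceil(WEIGHTS_BYTES / N_DEVICES)
    per_device = slice_size * REPLICAS

    print(f"slice size: {slice_size / 1024:.0f} KiB")
    print(f"storage per device: {per_device / 1024:.0f} KiB")

Roughly 30 KiB per device: trivial even for a Roomba. Running the model is a different problem; storing it isn't.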
Once again, the premise is that the AI is smarter than you or anyone else, and far faster. It can solve any problem that a human like me could figure out a solution to with 30 seconds of spitballing, and it can be an expert in everything.
[1] https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
I'm not actively worried about it, but let's not pretend something with all of the information in the world and great intelligence couldn't pull it off.