Lots of people have been publicly suggesting exactly that, and also that, if not properly aligned, it poses an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, Altman included.
That perceived risk on the downside, together with the perceived promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.
When someone runs a model in a reasonably durable housing with a battery?
(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)
This sounds, to me, like the company leadership want the ability to do some sort of picking of winners and losers, bypassing the electorate.
And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you, and create things that combine into a plague worse than covid. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.
Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.
Now, granted, none of these individual scenarios have a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.
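To put rough numbers on that second sentence (my own back-of-the-envelope, taking the world population as roughly 8 billion and using the illustrative probabilities above, not any real risk estimate):

    # Expected-value sketch for the "even a small chance deserves thought" point.
    # The probabilities are the illustrative figures from the comment above,
    # not estimates of the actual risk.
    world_population = 8_000_000_000

    for p_extinction in (0.10, 0.001):
        expected_deaths = p_extinction * world_population
        print(f"P(extinction) = {p_extinction:.2%} -> expected deaths = {expected_deaths:,.0f}")

Even the 0.1% case works out to an expected toll in the millions, which is why "at least give it some thought" seems like a low bar.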
Also about the smart home devices: if a current iPhone can’t run Siri locally then how is a Roomba supposed to run an AGI?
If you live in a city right now, there are millions of networked computers that humans depend on in their everyday life and do not want to turn off. Many of those computers keep humans alive (grid control, traffic control, comms, hospitals, etc.). Some are actual robotic killing machines, but most have other purposes. Hardly any are air-gapped nowadays, and all our security assumes the network nodes have no agency.
A superintelligence residing in that network would be very difficult to kill and could very easily kill lots of people (destroy a dam, for example), but that sort of crude threat is unlikely to be a problem. There are lots of potentially bad scenarios, though, many of them involving the wrong sort of dictator getting control of such an intelligence. There are legitimate concerns here IMO.
This is incredibly dumb, which is why those of us who study the intersection of AI and global strategic stability are advocating a change to a different doctrine called Decide Under Attack.
Decide Under Attack has been shown by game theory to have equally strong deterrence as Launch On Warning, while also having a much much lower chance of accidental or terrorist-triggered war.
Here is the paper that introduced Decide Under Attack:
A Commonsense Policy for Avoiding a Disastrous Nuclear Decision, Admiral James A Winnefeld, Jr.
https://carnegieendowment.org/2019/09/10/commonsense-policy-...
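To make the intuition concrete, here's a back-of-the-envelope sketch (my own toy model with made-up numbers, not anything from Winnefeld's paper): under Launch On Warning a sufficiently convincing false alarm becomes an accidental war, whereas under Decide Under Attack false alarms fizzle out and deterrence rests on whatever fraction of the second-strike force would survive a real first strike.

    # Toy comparison of the two postures. My own illustrative numbers,
    # not figures from the paper.
    p_false_alarm = 1e-3   # assumed chance per crisis of a convincing false warning
    p_survive_c2  = 0.9    # assumed chance second-strike forces survive a real first strike

    postures = {
        # Launch On Warning: missiles fly as soon as a warning is believed,
        # so a convincing false alarm becomes an accidental war.
        "Launch On Warning":   {"accidental_war": p_false_alarm, "retaliation": 1.0},
        # Decide Under Attack: wait for confirmed detonations before launching,
        # so false alarms fizzle and deterrence rests on surviving forces.
        "Decide Under Attack": {"accidental_war": 0.0,           "retaliation": p_survive_c2},
    }

    for name, p in postures.items():
        print(f"{name:20s} P(accidental war)={p['accidental_war']:.0e}  "
              f"P(retaliation | real attack)={p['retaliation']:.2f}")

As long as submarines and hardened forces keep that survival probability close to 1, the attacker's calculus barely changes, which is the sense in which the deterrence is claimed to be equally strong while the false-alarm path to war disappears.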
https://blog.ucsusa.org/david-wright/nuclear-false-alarm-950...
> In this case, it turns out that a technician mistakenly inserted into a NORAD computer a training tape that simulated a large Soviet attack on the United States. Because of the design of the warning system, that information was sent out widely through the U.S. nuclear command network.
Yet every time there was a "real" attack, somehow the doctrine was not followed (in the US or the USSR).
It seems to me that the doctrine is not actually followed because leaders understand the consequences and wait for very solid confirmation?
The Soviets also had the Perimeter system, which was likewise supposed to relieve pressure for an immediate response.
(1) humanity should not be subjugated
(2) humanity should not go extinct before it’s our time
Even Kim Jong Un would agree with these principles.
Currently, no AGI or ASI built on any of the architectures contemplated in the literature thus far would meet a beyond-a-reasonable-doubt standard of being aligned with these two values.
You know those stories where someone makes a pact with the devil/djinn/other wish-granting entity, and the entity follows one interpretation of what was wished, but since it is not what the wisher intended, it all goes terribly wrong? The idea of alignment is to make a djinn which not only can grant wishes, but grants them according to the unstated intention of the wisher.
You might have heard the story of the paper clip maximiser. The leadership of the paperclip factory buys one of those fancy new AI agents and asks it to maximise paperclip production.
What a not-well-aligned AI might do: Reach out through the internet to a drug cartel’s communication nodes. Hack the communications and take over the operation. Optimise the drug trafficking operations to gain more profit. Divert the funds to manufacture weapons for multiple competing factions at multiple crisis points on Earth. Use the factions against each other. Divert the funds and the weapons to protect a rapidly expanding paperclip factory. Manipulate and blackmail world leaders into inaction. If the original leaders of the paperclip factory try to stop the AI, eliminate them, since that is the way to maximise paperclip production. And this is just the beginning.
What a well-aligned AI would do: Fine-tune the paperclip manufacturing machinery to eliminate rejects. Reorganise the factory layout to optimise logistics. Run a successful advertising campaign which leads to a 130% increase in sales. (Because clearly this is what the factory owner intended it to do, although they did a poor job of expressing their wishes.)
The specific concern that we in DISARM:SIMC4 have is that as AI systems start to be perceived as being smarter (due to being better and better at natural language rhetoric and at generating infographics), people in command will become more likely to set aside their skepticism and just trust the computer, even if the computer is convincingly hallucinating.
The tendency of decision makers (including soldiers) to have higher trust in smarter-seeming systems is called Automation Bias.
> The dangers of automation bias and pre-delegating authority were evident during the early stages of the 2003 Iraq invasion. Two out of 11 successful interceptions involving automated US Patriot missile systems were fratricides (friendly-fire incidents).
https://thebulletin.org/2023/02/keeping-humans-in-the-loop-i...
Perhaps Stanislav Petrov would not have ignored the erroneous Soviet missile-warning computer he operated if it had generated paragraphs of convincing text and several infographics as hallucinated “evidence” of the reality of the supposed inbound strike. He himself later recollected that he felt the chances of the strike being real were 50-50, an even gamble. In that moral quandary he struggled for several minutes until, finally, he went with his gut and countermanded the system, which required disobeying the Soviet military’s procedures and should have gotten him shot for treason. Even a slight increase in the persuasiveness of the computer’s rhetoric and graphics could have tipped that to 51-49 and thus caused our extinction.
'.. before it's our time' is definitely in the eye of the beholder.
And the Roomba isn't running the model; it's just storing a portion of it for backup, or only running a fraction of it (very different from an iPhone trying to run the whole model). Instead, the proper model is running on the best computer from the Russian botnet it purchased, using crypto it scammed from a Discord NFT server.
Once again, the premise is that AI is smarter than you or anyone else, and way faster. It can solve any problem that a human like me can figure out a solution for in 30 seconds of spitballing, and it can be an expert in everything.
[1] https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
I'm not actively worried about it, but let's not pretend something with all of the information in the world and great intelligence couldn't pull it off.