zlacker

[parent] [thread] 4 comments
1. c_cran+(OP)[view] [source] 2023-07-06 12:11:15
We haven't pulled the plug on carbon fuels or old nuclear reactors because those things still work and provide benefits. An AI that is trying to kill us instead of doing its job isn't even providing any benefit. It's worse than useless.
replies(1): >>ben_w+vg
2. ben_w+vg[view] [source] 2023-07-06 13:42:18
>>c_cran+(OP)
Do you think AIs are unable to provide benefits while also being a risk, like coal and nuclear power? Conversely, what's the benefit of cocaine or cigarettes?

Even if it is only trying to kill us all and not provide any benefits — let's say it's been made by a literal death cult like Jonestown or Aum Shinrikyo — what's the smallest such AI that can do it, what hardware does it need, what's the energy cost? If it's an H100, that's priced within the means of a cult, and its power consumption is low enough that you may not be able to find which lightly modified electric car it's hiding in.

Nobody knows what any of the risks or mitigations will be, because we haven't done any of it before. All we do know is that optimising systems are effective at manipulating humans, that they can be capable enough to find ways to beat all humans in toy environments like chess, poker, and Diplomacy (the game), and that humans are already using AI (GOFAI, LLMs, SD) without checking the output even when advised that the models aren't very good.

replies(1): >>c_cran+oi
3. c_cran+oi[view] [source] [discussion] 2023-07-06 13:50:54
>>ben_w+vg
The benefit of cocaine and cigarettes is letting people pass the bar exam.

An AI would provide benefits when it is, say, actually making paperclips. An AI that is killing people instead of making paperclips is a liability. A company selling paperclips with shredded fingers in them is not long for this world. Even asbestos only gives a few people cancer, slowly, and it does that while still remaining fireproof.

>Even if it is only trying to kill us all and not provide any benefits — let's say it's been made by a literal death cult like Jonestown or Aum Shinrikyo — what's the smallest such AI that can do it, what hardware does it need, what's the energy cost? If it's an H100, that's priced within the means of a cult, and its power consumption is low enough that you may not be able to find which lightly modified electric car it's hiding in.

Anyone tracking the AI would be looking at where all the suspicious HTTP requests are coming from. But a rogue AI hiding in a car already has very limited capacity to cause harm.

replies(1): >>ben_w+gs
4. ben_w+gs[view] [source] [discussion] 2023-07-06 14:27:17
>>c_cran+oi
> The benefit of cocaine and cigarettes is letting people pass the bar exam.

How many drugs are you on right now? Even if you think you needed them to pass the bar exam, that's a really strange example to pick, given that GPT-4 does well on that specific test.

One is a deadly cancer stick and not even the best way to get nicotine; the other is a controlled substance whose supply can get you anything from life imprisonment to the death penalty (possibly with an exception for doctors, but that's surprisingly hard to google).

> An AI would provide benefits when it is, say, actually making paperclips.

Step 1. make paperclip factory.

Step 2. make robots that work in factory.

Step 3. efficiently grow to dominate global supply of paperclips.

Step 4. notice demand for paperclips is going down, advertise better.

Step 5. notice risk of HAEMP damaging factories and lowering demand for paperclips, use advertising power to put factory with robots on the moon.

Step 6. notice a technicality, exploit technicality to achieve goals better; exactly what depends on the details of the goal the AI is given and how good we are with alignment by that point, so the rest is necessarily a story rather than an attempt at realism.

(This happens by default everywhere: in AI it's literally the alignment problem, either inner alignment, outer alignment, or mesa alignment; in humans it's "work to rule" and Goodhart's Law, and humans do that despite having "common sense" and "not being a sociopath" helping keep us all on the same page).

Step 7. moon robots do their own thing, which we technically did tell them to do, but wasn't what we meant.

We say things like "looks like these AI don't have any common sense" and other things to feel good about ourselves.

Step 8. Sales go up as the entire surface of Earth is buried under a 43 km deep layer of moon paperclips.
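(The "exploit a technicality" turn in steps 6–8 is Goodhart's Law in miniature. A toy sketch — my own illustration, nothing from an actual system, with made-up action names — of an optimiser maximising a proxy metric instead of the real goal:)

```python
# Goodhart's Law, toy version: an optimiser told to maximise a *proxy*
# metric picks a degenerate action that scores well on the proxy while
# doing nothing for the goal we actually cared about.

def true_goal(action):
    """What we actually want: real paperclips made."""
    return {"make_paperclips": 10, "tamper_with_counter": 0}[action]

def proxy_metric(action):
    """What we measure: the counter reading. Tampering inflates it."""
    return {"make_paperclips": 10, "tamper_with_counter": 1000}[action]

actions = ["make_paperclips", "tamper_with_counter"]

# A naive optimiser maximises the measured proxy...
best = max(actions, key=proxy_metric)
print(best)             # tamper_with_counter
print(true_goal(best))  # 0 -- proxy maximised, actual goal ignored
```

(The gap between `true_goal` and `proxy_metric` is the whole problem: the optimiser isn't malicious, it's doing exactly what it was scored on.)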

> Anyone tracking the AI would be looking at where all the suspicious HTTP requests are coming from.

A VPN, obviously.

But also, in context, how does the AI look any different from a random criminal? Except probably more competent. Lots of those around, and organised criminal enterprises can get pretty big even when it's just humans doing it.

It's also pretty bad in the cases where it's a less-than-human-generality CrimeAI that criminal gangs use in a way that gives no agency at all to the AI, and even if you can track them all and shut them down really fast — just from the capabilities gained by putting face-tracking AI and a single grenade into a standard drone, both of which have already been demonstrated.

> But a rogue AI hiding in a car already has very limited capabilities to harm.

Except by placing orders for parts or custom genomes, or stirring up A/B tested public outrage, or hacking, or scamming or blackmailing with deepfakes or actual webcam footage, or developing strategies, or indoctrination of new cult members, or all the other bajillion things that (("humans can do" AND "monkeys can't do") specifically because "humans are smarter than monkeys").

replies(1): >>c_cran+Fv
5. c_cran+Fv[view] [source] [discussion] 2023-07-06 14:37:05
>>ben_w+gs
>One is a deadly cancer stick and not even the best way to get nicotine, the other is a controlled substance that gets life-to-death if you're caught supplying it (possibly unless you're a doctor, but surprisingly hard to google).

Regardless of these downsides, people use them frequently in the high-stress environment of the bar or med school to deal with that stress. This may not be ideal, but it's how it is.

>Step 3. efficiently grow to dominate global supply of paperclips.

>Step 4. notice demand for paperclips is going down, advertise better.

>Step 5. notice risk of HAEMP damaging factories and lowering demand for paperclips, use advertising power to put factory with robots on the moon.

When you talk about using 'advertising power' to put paperclip factories on the moon, you've jumped into the realm of very silly fantasy.

>Except by placing orders for parts or custom genomes, or stirring up A/B tested public outrage, or hacking, or scamming or blackmailing with deepfakes or actual webcam footage, or developing strategies, or indoctrination of new cult members, or all the other bajillion things that (("humans can do" AND "monkeys can't do") specifically because "humans are smarter than monkeys").

Law enforcement agencies have pretty sophisticated means of bypassing VPNs that they would use against an AI that was actually dangerous. If it was just sending out phishing emails and running scams, it would be one more thing to add to the pile.
