zlacker

[parent] [thread] 10 comments
1. sveme+(OP)[view] [source] 2024-01-08 22:29:58
Does anyone know potential causal chains that could bring about the extinction of mankind through AI? I'm obviously aware of the Terminator scenario, but what other chains would be possible?
replies(6): >>kristi+O2 >>Neverm+U2 >>Vecr+z4 >>jetrin+S5 >>__loam+V8 >>walkho+vt2
2. kristi+O2[view] [source] 2024-01-08 22:42:22
>>sveme+(OP)
I'm fascinated by this as well. There's a lot of conjecture around the "what if", but I have yet to really hear much about the "how".
3. Neverm+U2[view] [source] 2024-01-08 22:42:56
>>sveme+(OP)
Machines won't need the biosphere to survive.

If they accelerate the burning of fossil fuels, extract and process minerals on land and in the ocean without concern for pollution, replace large areas of the natural world with solar panels, etc., the world could rapidly become hostile for large creatures.

An ocean die out as a result of massive deep sea mining would be particularly devastating. It's very hard to contain pollution in the ocean.

Same for lakes. And without clean water things will get bad everywhere.

Ramping up the frequency of space launches by a few orders of magnitude to reach resources elsewhere in the solar system could heavily pollute the atmosphere.

Microbes might be fine, and able to adapt to the changes, for much longer.

4. Vecr+z4[view] [source] 2024-01-08 22:50:19
>>sveme+(OP)
I'm going to take that to mean "P(every last human dead) > 0.5", because I can't model situations like that very well. If for some reason (see the Thucydides Trap for one theory, instrumental convergence for another) the AI system thinks the existence of humans is a problem for its risk management, it would probably want to kill them. "All processes that are stable we shall predict. All processes that are unstable we shall control." Since humans are an unstable process, and the easiest form of human to control is a corpse, it would be rational for an AI system that wants to improve its prediction of the future to kill all humans.

It could plausibly do so with a series of bioengineered pathogens, possibly starting with viruses to destroy civilization, then moving on to bacteria dropped into water sources to clean up the survivors (who no longer have treated drinking water, civilization having collapsed). Don't even try with an off switch: if no human is alive to trigger it, it can't be triggered, and dead man's switches can be subverted. If it thinks you hid the off switch, it might try to kill everyone even if the switch does not exist.

In that situation you can't farm, because farms can be seen from space (and an ASI is a better analyst than any spy agency could be), and you can't hunt, because all the animals are covered inside and out with special anti-human bacteria; natural water sources are also fully infected.
replies(1): >>notaha+C9
5. jetrin+S5[view] [source] 2024-01-08 22:56:05
>>sveme+(OP)
To borrow a phrase from Microsoft's history, "Embrace, Extend, Extinguish." AI proves to be incredibly useful and we welcome it like we welcomed the internet. It becomes deeply embedded in our lives and eventually in our bodies. One day, a generation is born that never experiences a thought that is not augmented by AI. Sometime later a generation is born that is more AI than human. Sometime later, there are no humans.
replies(1): >>daxfoh+eg
6. __loam+V8[view] [source] 2024-01-08 23:10:42
>>sveme+(OP)
One potential line is a general-purpose AI like ChatGPT that can give instructions on how to produce genetically engineered viral weapons, for example. I find this improbable, but it's possible that a future LLM (or whatever) gets released that has this capability but isn't known to have it. Then you might have a bunch of independent actors making novel contagions in their garage.

That would still potentially require a lot of equipment, but the path is there.

Another possibility would be some kind of rogue-agent scenario where the program hides and distributes itself across many machines, and interacts with people to get them to do bad things or give it money. I think someone already demonstrated one of the LLMs running some kind of social engineering attack somewhere and getting the support agent to let them in. It's not hard to imagine some kind of government-funded weapon that scales up that kind of attack. Imagine whole social movements, terrorist groups, or religious cults run by an autonomous agent.

7. notaha+C9[view] [source] [discussion] 2024-01-08 23:13:46
>>Vecr+z4
If the AGI - which is for some reason always imagined as a singular entity - thinks humans are unpredictable and risky now, just imagine the unpredictability and risk involved in trying to kill all seven billion of us whilst keeping the electricity supply on...
replies(1): >>Vecr+mb
8. Vecr+mb[view] [source] [discussion] 2024-01-08 23:22:52
>>notaha+C9
It would have to prepare to survive a very large number of contingencies (preferably in secret) and then execute a fait accompli with high tolerance to world-model perturbations. It might find some other way to become independent from humans (I'm not a giga-doomer like Big Yud, ~13% instead of >99%, though I think he overstates it for (human) risk management reasons), but the probability is way too high to risk it. If a 1% chance of an asteroid (or more likely a comet, coming in from "behind" the sun) killing everyone is not worth it, neither is that same percentage for an AGI/ASI. I don't see the claimed upside, unlike a lot of people, so it's just not worth it on cost/benefit.

Edit: it's usually described as a single entity because, barring really out-there decision theory ideas, such AIs are more of a risk to each other than humans are to them. It's not "well, if instrumental convergence is right, and they can't figure out morality (i.e. the orthogonality thesis)..."; it's "almost certain conflict predicted".
9. daxfoh+eg[view] [source] [discussion] 2024-01-08 23:49:08
>>jetrin+S5
People wouldn't even get a vaccine injection because there were supposedly Bill Gates's microchips in them. Now you expect they'll be flocking to get these because they have Bill Gates's microchips in them?

Actually, I could see that happening.

Maybe it's time to give AGI a chance to run things anyway and see if it can do any better. Certainly it isn't a very high bar.

replies(1): >>reduce+Ls
10. reduce+Ls[view] [source] [discussion] 2024-01-09 01:19:20
>>daxfoh+eg
It's the same thing with smartphones / general-purpose computing. Everyone eventually finds they have to use them to be useful in the modern world.
11. walkho+vt2[view] [source] 2024-01-09 17:15:10
>>sveme+(OP)
This argument gives a 35% chance of AI "taking over" this century (granted, that does not mean extinction): https://www.foxy-scout.com/wwotf-review/#underestimating-ris.... The argument breaks the scenario into 6 steps, assigns a probability to each step, and multiplies the probabilities together.
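In code form, that structure is just a product of the step probabilities. A minimal sketch (the six labels and numbers below are made-up placeholders, not the review's actual steps or figures):

    # Sketch of a "multiply the steps" estimate. The six probabilities are
    # hypothetical placeholders, NOT the linked review's actual numbers; the
    # point is only that the headline figure is the product of the steps.
    steps = [
        ("step 1", 0.9),
        ("step 2", 0.8),
        ("step 3", 0.8),
        ("step 4", 0.8),
        ("step 5", 0.8),
        ("step 6", 0.9),
    ]

    p = 1.0
    for name, prob in steps:
        p *= prob

    print(f"P(takeover) ~= {p:.2f}")  # ~0.33 with these placeholder numbers

The product is sensitive to every factor: lowering any single step lowers the final number proportionally.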