zlacker

[return to "Thousands of AI Authors on the Future of AI"]
1. sveme+Ed [view] [source] 2024-01-08 22:29:58
>>treebr+(OP)
Does anyone know potential causal chains that could bring about the extinction of mankind through AI? I'm obviously aware of Terminator, but what other chains would be possible?
2. Vecr+di [view] [source] 2024-01-08 22:50:19
>>sveme+Ed
I'm going to take that to mean "P(every last human dead) > 0.5", because I can't model situations like that very well. If for some reason (see the Thucydides Trap for one theory, instrumental convergence for another) the AI system decides the existence of humans is a problem for its risk management, it would probably want to kill them.

"All processes that are stable we shall predict. All processes that are unstable we shall control." Humans are an unstable process, and the easiest form of human to control is a corpse, so it would be rational for an AI system that wants to improve its prediction of the future to kill all humans. It could plausibly do so with a series of bioengineered pathogens: perhaps starting with viruses to destroy civilization, then moving on to bacteria dropped into water sources to clean up the survivors (who no longer have treated drinking water once civilization collapses).

Don't count on an off switch. If no human is alive to trigger it, it can't be triggered, and dead man's switches can be subverted; if the AI merely suspects you hid an off switch, it might try to kill everyone even if the switch does not exist. In that situation you can't farm, because farms can be seen from space (and an ASI is a better imagery analyst than any spy agency), and you can't hunt, because the animals are covered inside and out with special anti-human bacteria. Natural water sources are likewise fully infected.