zlacker

1. mejuto+(OP) 2023-07-06 07:03:31
Counterexample: Covid.

Assumptions:

- That genetic modification, to be dangerous, has to take the form of a large number of smart humans (where did that come from?)

- That AI is not physically constrained

> it's much more likely we have time to detect and thwart their threats.

Why? Counterexample: Covid.

> Even if we knew how to create a new species of superintelligent humans who have goals misaligned with the rest of humanity, it would take them decades to accumulate knowledge, propagate themselves to reach a sufficient number, and take control of resources, to pose critical dangers to the rest.

Why insist on it being superintelligent and human, and on a sufficient number? A simple virus could be a critical danger.

replies(1): >>nopins+jE3
2. nopins+jE3 2023-07-07 02:34:49
>>mejuto+(OP)
We do have regulations and laws to control genetic modification of pathogens. It is done in highly secure labs, and access is not widely available.

If a pathogen more deadly than Covid starts to spread, e.g. Ebola or smallpox, we would do more to limit its spread. If it is good at hiding from detection for a while, it could potentially cause a catastrophe, but it most likely will not wipe out humanity, because it is not intelligent and some surviving humans will eventually find a way to thwart it or limit its impact.

A pathogen is also physically constrained by available hosts. Yes, current AI also requires processors, but it is extremely hard or nearly impossible to limit contact with CPUs & GPUs in the modern economy.
