Assumptions:
- Genetic modification, to be a danger, must take the form of a large number of smart humans (where did that come from?)
- AI is not physically constrained
> it's much more likely we have time to detect and thwart their threats.
Why? Counterexample: Covid, which spread worldwide before it could be detected and contained.
> Even if we knew how to create a new species of superintelligent humans who have goals misaligned with the rest of humanity, it would take them decades to accumulate knowledge, propagate themselves to reach a sufficient number, and take control of resources, to pose critical dangers to the rest.
Why insist on something superintelligent, human, and present in sufficient numbers? A simple virus could be a critical danger.
If a pathogen more deadly than Covid starts to spread, e.g. something like Ebola or smallpox, we would do more to limit its spread. If it were good at evading detection for a while, it could potentially cause a catastrophe, but it most likely would not wipe out humanity: it is not intelligent, and some surviving humans would eventually find a way to thwart it or limit its impact.
A pathogen is also physically constrained by the availability of hosts. Yes, current AI also requires processors, but in the modern economy it is extremely hard, if not impossible, to limit its contact with CPUs and GPUs.