1. Footke (OP) 2024-05-17 20:04:41
The flipside: it's equally hard for people who assume AI is safe to establish empirical criteria for safety and behavior. Neither side of the argument has a strong empirical basis, because we know of no precedent for an event like the rise of non-biological intelligence.

If AGI happens, there may not be a clear line, even in retrospect, between "here is non-AGI" and "here is AGI". As far as we know, there was no such dividing line during the evolution of human intelligence.
