The flip side: it's equally hard for people who assume AI is safe to establish empirical criteria for safe behavior. Neither side of the argument has a strong empirical basis, because we know of no precedent for an event like the rise of non-biological intelligence.
If AGI happens, there may not be a clear line, even in retrospect, between "here is non-AGI" and "here is AGI". As far as we know, no such dividing line existed during the evolution of human intelligence.