Mind you, the risk of "AI" acting on its own is massively exaggerated. It's AI-wielding humans who are the real unalignable threat.
IMO it depends where you draw the line between "AI acting on its own" and "a person takes an AI that shouldn't be left unsupervised, sets it going in an infinite loop, leaves it unsupervised, and then it explodes" (so far mostly in small ways, and mostly in the face of the person who did it, which is basically fine, but still, where do you draw the line?)