>>notaha
It would have to prepare to survive a very large number of contingencies (preferably in secret) and then execute a fait accompli with high tolerance for perturbations in its world model. It might find some other way to become independent of humans (I'm not a giga-doomer like Big Yud, ~13% instead of >99%, though I suspect he overstates it for (human) risk-management reasons), but the probability is far too high to risk it. If a 1% chance of an asteroid (or, more likely, a comet coming in from "behind" the sun) killing everyone isn't acceptable, neither is that same percentage for an AGI/ASI. Unlike a lot of people, I don't see the claimed upside, so on a cost/benefit basis it's just not worth it.
Edit: it's usually described as a single entity because, barring some really out-there decision-theory ideas, multiple AGIs are more of a risk to each other than humans are to them. The argument isn't "well, if instrumental convergence holds and they can't figure out morality (i.e. the orthogonality thesis)...", it's "conflict between them is almost certain".