Remember, there are off-switches for human existence too, like whatever biological virus a superintelligence could engineer.
An off-switch for a self-improving AI isn't as trivial as you make it sound if the system gets anywhere near what those quotes describe, and even then you're assuming the human running it isn't malicious. With nuclear weapons we at least assume some baseline sanity in the people in charge, but it isn't clear that AI will have the same large-state-actor barrier to entry, or the same perception of mutually assured destruction if an actor were to use it against a rival.
If we have a superhuman AI, we can shut down the power plants for a few days.
Would it suck? Sure, people would die. Is it simple? Absolutely: Texas and other grids already get most of the way there on their own in some winters.
Maybe a self-improving system needs a full cluster to retrain itself, or maybe improvement happens some cheaper way, more like fine-tuning just the last layers. If the cheap path still holds for something superhuman in all domains, then cutting the big plants isn't enough: you'd also have to shut down every minor residential solar install, backup generator, etc.
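For concreteness, here is a minimal sketch of what "fine-tuning just the last layers" means, assuming PyTorch and a toy model invented purely for illustration. The point is the compute scale: freezing most of the network and updating only the final layer leaves a tiny fraction of the parameters trainable, which is why that kind of update could plausibly run on hardware far smaller than a training cluster.

```python
import torch
import torch.nn as nn

# Toy stand-in for a big model (layer sizes are arbitrary).
model = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),  # the "last layer"
)

# Freeze everything, then unfreeze only the final layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

# One toy update step on random data: only the last layer's weights move.
x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

total = sum(p.numel() for p in model.parameters())
print(f"updating {sum(p.numel() for p in trainable):,} of {total:,} parameters")
```

In this toy case only a few thousand of roughly half a million parameters get gradients, and the same ratio is why the cheap-update scenario matters for the off-switch argument: that kind of workload fits on a single consumer GPU, not a datacenter you can de-power.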