And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you, and have them create pieces that combine into a plague worse than COVID. And if it's really smart, it can figure out how to use those same labs to start producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.
Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.
Now, granted, none of these individual scenarios has a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.
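To make the "small probability, enormous stakes" point concrete, here's a toy expected-value calculation; the 0.1% prior and the 8-billion figure are illustrative assumptions, not estimates:

    # Toy expected-value sketch: a tiny probability of an existential
    # outcome still dominates the sum. All numbers are assumptions.
    p_doom = 0.001                      # hypothetical 0.1% chance
    lives_at_stake = 8_000_000_000      # roughly everyone
    expected_loss = p_doom * lives_at_stake
    print(f"{expected_loss:,.0f} expected lives lost")  # 8,000,000

Even at the "only" 0.1% end, the expected loss is on the order of millions of lives, which is why "give it some thought" is the mild version of the ask.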
Also, about the smart home devices: if a current iPhone can't run Siri locally, then how is a Roomba supposed to run an AGI?
And the Roomba isn't running the model; it's just storing a portion of it for backup, or running only a fraction of it (very different from an iPhone trying to run the whole model). The full model, meanwhile, is running on the best computer in the Russian botnet it purchased with crypto it scammed from a Discord NFT server.
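The distinction worth keeping in mind is between running a model and merely hosting shards of its weights. Here's a minimal sketch of the storage side in Python; the device names, chunk size, and 3x replication factor are all invented for illustration:

    # Minimal sketch: spread a model's weights as replicated shards
    # across many weak devices. No device runs the model; each one
    # just holds a few megabytes. All names/sizes are illustrative.
    CHUNK_SIZE = 1 << 20   # 1 MiB per shard
    REPLICATION = 3        # each shard is stored on 3 devices

    def shard(weights: bytes) -> list[bytes]:
        return [weights[i:i + CHUNK_SIZE]
                for i in range(0, len(weights), CHUNK_SIZE)]

    def place(shards: list[bytes], devices: list[str]) -> dict[str, list[int]]:
        """Round-robin each shard index onto REPLICATION distinct devices."""
        placement: dict[str, list[int]] = {d: [] for d in devices}
        for idx in range(len(shards)):
            for r in range(REPLICATION):
                placement[devices[(idx + r) % len(devices)]].append(idx)
        return placement

    fleet = [f"roomba-{n}" for n in range(1000)]   # hypothetical gadget fleet
    weights = bytes(40 * CHUNK_SIZE)               # stand-in for a weights file
    placement = place(shard(weights), fleet)
    # Reassembly needs only one surviving copy of each shard, so losing
    # any single device (or quite a few of them) loses nothing.

Storage for resilience is cheap and embarrassingly parallel; the actual inference still happens wherever the good GPUs are.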
Once again, the premise is that the AI is smarter than you or anyone else, and way faster. It can solve any problem a human like me could sketch a solution to in 30 seconds of spitballing, and it can be an expert in everything.
[1]https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
I'm not actively worried about it, but let's not pretend something with all of the information in the world and great intelligence couldn't pull it off.