The methods behind the different scenarios - disinformation, false-flagging, impersonation, stoking fear, exploiting the tools used to make decisions - aren't new. States have the capability to do all of them right now, without AI. But if a state did, it would face annihilation if anyone found out. And the manpower needed to run a large-scale disinformation campaign means a leak is pretty likely. So it's not worth it.
But with AI, a small terrorist group could do it. And it would be hard to know which groups were planning to, because they'd only need to buy the same hardware as any other small tech company.
(I hope I've summarized the article well enough.)
Like what happened to China after they released TikTok, or what happened to Russia after they used their troll farms to influence public sentiment around US elections?
"Flooding social media" isn't something difficult to do right now, with far below state-level resources. AIs don't come with built-in magical account-creation tools nor magical rate-limiter-removal tools. What changes with AI is the quality of the message that's crafted, nothing more.
No military uses tweets to determine if it has been nuked. AI doesn't provide a new vector to cause a nuclear war.