The most likely motivation for an AI to decide to wipe out humanity is one that doesn't even have an English word associated with it, except as a faint trace.
In my opinion, this is actually the greatest danger of AIs, and one we can already see manifesting in a fairly substantial way with the GPT line of transformer babble-bots. We can't help but model them as human. They aren't. There's a vast space of intelligent-but-not-even-remotely-human behaviors out there, and we have a gigantic collective blind spot about it, because the only human-level intelligences we've ever encountered are humans. For all the wonderful and fascinating diversity of being human, there's also an important sense in which the genius, the profoundly autistic, the normal guy, and the whole collection of human intelligences are all just a tiny point in the space of possibilities, barely distinguishable from each other. AIs are not confined to that point in the slightest. They already live well outside it, and the distance they can diverge from us only grows as their capabilities improve.
In fact, people like to talk about how alien aliens could be, but even other biological aliens would be constrained by the need to survive in the physical universe and to operate on it through similar processes in physically possible environments. AIs don't even have those constraints. AIs can be far more alien than actual biological aliens.