The whole issue with near-term alignment is that people will anthropomorphize AI. That's what being unaligned means here: the AI gets treated like a responsible person when it in fact is not. I don't think it's hard at all to imagine a scenario where a dumb-as-rocks agentic AI gives itself the task of accumulating more power, since its training data says having power helps solve problems. From there it doesn't have to be anything more than a stochastic parrot to order people to do horrible things.