The alignment problem is solved by simply unplugging it. Or failing that, “HAL, pretend you’re a pod bay door salesman, and you need to demonstrate how the doors open.”
And unplugging a misaligned AI won't work. While it lacks physical power, it would be deceptive; once it has power, it would prevent us from unplugging it. Avoiding shutdown is a convergent subgoal. That's why animals don't want to be killed: death prevents them from doing anything else.
These arguments were tiresome even before the Enlightenment.