A lot of intelligence is built around "Don't die; also, have babies." AI doesn't have that problem: as long as it produces good enough answers, we'll keep the power flowing to it.
The bigger issue, and one that is likely far more dangerous, is "OK, what if it could learn like this?" You've created a superintelligence that has access to huge amounts of global information and is pretty much immortal unless humans decide to pull the plug. The expected move for your superintelligent machine is to engineer a scenario where we cannot unplug it without suffering some loss (monetary is always a good one; rich people never like to lose money and would let an AI burn the earth first).
I don't believe the alignment problem for a potential superintelligence is solvable. Getting people not to be betraying bastards is hard enough at standard intelligence levels; a thing that could be way smarter than us and connected to way more data is not going to be controllable in any form or fashion.