If AI did run away and do its own thing (which seems super unlikely), it's probably a crapshoot whether what it does is worse than the environmental apocalypse we already live in, where the rich keep getting richer and the poor poorer.
Which we don't.
So we're not aligning it with corporate boards yet, though not for lack of trying.
(While LLMs are not directly agents, they are easy enough to turn into agents, and there are plenty of people willing to do that and disregard any concerns about the wisdom of it.)
So yes, the crapshoot is exactly what everyone in AI alignment is trying to prevent.
(There's also, confusingly, "AI safety", which includes alignment but also covers things like misuse, social responsibility, and so on.)