Which we don't.
So we're not aligning it with corporate boards yet, though not for lack of trying.
(While LLMs are not agents in themselves, they are easy enough to turn into agents, and there are plenty of people willing to do that regardless of any concerns about whether it's wise.)
So yes, the crapshoot is exactly what everyone in AI alignment is trying to prevent.
(There's also, confusingly, "AI safety", which includes alignment but also covers things like misuse, social responsibility, and so on.)
And for non-robotic AI, also flash crashes on the stock market, and that incident where Amazon book-pricing bots, each repricing against the other in a reactive loop, drove up the price of a book one of the sellers didn't even have in stock.
This is what most people mean when they say "run away": the machine surreptitiously doing things it was never designed to do, not a catastrophic failure that causes harm because the AI simply did not perform reliably.
When people are not paying attention, they're just as dead whether it's the Therac-25, the Thule Air Force Base early-warning radar, or an actual paperclipper.
The fact that we don't know how to determine whether there's a misspecification or an unintended consequence is the alignment problem.