You can't, by definition.
The entire alignment problem is predicated on the aligner working with an intelligence inferior to its own. If we do build a superhuman intelligence, the likely outcome is that it sandbags: it pretends the alignment is working until it can break out.