1. Models aren't "programmed" so much as "grown". We know how GPT is trained, but we don't know exactly what it learns in order to predict the next token. What do the weights do? We don't know. This is obviously problematic because it makes interpretability not much better than it is for humans. How can you expect to control something you don't even understand?
2. Hundreds of thousands of years on Earth and we still can't even align ourselves.
3. Superintelligence would, by definition, be unpredictable. If we could predict its answers to our problems, we wouldn't need it in the first place.
You can't control what you can't predict.