That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether your request is potentially immoral. This seemingly comes out of the notion that you are a moral person, and therefore any request you make is inherently justified as a moral request. But what happens when immoral people use the system?
No.
It comes from the notion that YOU don't get to decide what MY morals should be, nor do I get to decide what yours should be.
> But what happens when immoral people use the system?
Then the things happen that they want to happen. So what? Blasphemy or bad math is none of your business. Get out of people's lives.