So even with a safe prompt there is always a chance the model heads in a bad direction, refuses to work, and makes you pay for the tokens of its long "I am sorry..." speech.
Now imagine this issue when you are the developer and not the end user: the user complains about a refusal, you try the same prompt and it works for you, and then it fails again for the user. In my case, the word "monkey" might trigger ChatGPT to either generate some racist shit or make its moderation false-flag itself.
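One cheap mitigation, as a minimal sketch assuming the official `openai` Python SDK (v1.x): pre-screen the user's input with OpenAI's moderation endpoint (free at the time of writing) so a flagged prompt never reaches the paid completion call, and heuristically detect refusal boilerplate in the reply so you can retry instead of billing the user for an apology. The `is_refusal` helper, the `REFUSAL_MARKERS` phrase list, and the model choice are my own illustration, not anything OpenAI provides:

```python
from openai import OpenAI  # official SDK, v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical helper: crude phrase-matching to spot refusal boilerplate.
REFUSAL_MARKERS = ("i'm sorry", "i am sorry", "i can't assist", "i cannot assist")

def is_refusal(text: str) -> bool:
    head = text.lower()[:200]  # refusals usually open the reply
    return any(marker in head for marker in REFUSAL_MARKERS)

def safe_complete(prompt: str, retries: int = 2) -> str:
    # 1) Pre-screen with the moderation endpoint so an obviously
    #    flagged input never reaches the paid completion call.
    mod = client.moderations.create(input=prompt)
    if mod.results[0].flagged:
        raise ValueError("input flagged by moderation; not sending")

    # 2) Call the model; retry a couple of times on refusal
    #    boilerplate, since these false flags are nondeterministic.
    for _ in range(retries + 1):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder choice of chat model
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content or ""
        if not is_refusal(answer):
            return answer
    raise RuntimeError("model kept refusing a seemingly safe prompt")

print(safe_complete("Write a short children's story about a monkey."))
```

This won't catch every false flag (the moderation check judges the input, not the model's mood), but it at least caps how many refusal speeches you pay for before giving up.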