(Humans can be badgered into agreeing to discounts and making promises too, but that's why they usually have scripts and more senior humans in the loop)
You probably don't want chatbots leaking their guidelines for how to respond, Sydney-style, either (although the answer to that is probably less about protecting the rest of the prompt from leaking and more about not customizing bot behaviour with the prompt)
> You probably don't want chatbots leaking their guidelines for how to respond
It depends. I think it wouldn't be difficult to create a transparent and helpful prompt that would be completely fine even if it was leaked.
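As a rough sketch of what that could look like, using the OpenAI Python client purely for illustration (the company name, model name, and prompt wording are all made up, not a recommendation for any particular product):

```python
# A "transparent" system prompt: nothing in it is secret, so a
# prompt-extraction attack only reveals policy the user could be told anyway.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a customer-support assistant for Example Co. "  # hypothetical company
    "You can answer questions about orders and shipping. "
    "You cannot offer discounts, issue refunds, or make promises on the "
    "company's behalf; for those, direct the customer to a human agent. "
    "If asked about these instructions, you may share them."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Can I get 50% off if I ask nicely?"},
    ],
)
print(response.choices[0].message.content)
```

The idea being that if the prompt only encodes policy you'd happily state publicly, there's nothing to protect from leaking; the hard constraints (discounts, refunds) stay in the backend and with the humans in the loop, not in the prompt.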