Regardless, still hilarious and potentially quite scary if the comments are tied to actions
There's not really any doctoring going on, other than basic prompt injection. However, I can imagine someone accidentally tricking ChatGPT into claiming some ridiculously low priced offer without intentional prompt attacks. If you start bargaining with ChatGPT, it'll play along; it's just repeating the patterns in its training data.