Cursor's AI agent simply autocompleted a plausible-sounding string of words resembling a standard TOU agreement, presumably based on the thousands of such agreements in its training data. It is not actually capable of recognizing that it made a mistake, though I'm sure if you pointed the error out directly it would reply "you're right, I made a mistake." If a human did this, inventing TOU explanations without bothering to check the actual agreement, we'd call them unbelievably cynical and lazy.
It is very depressing that ChatGPT has been out for nearly three years and we're still having this discussion.
Memories are known to be reconstructed by our brains, so even events we witnessed firsthand get distorted when recalled.
So I agree with GP: that response shows a pretty big lack of understanding of how our brains work.