Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.
ever had a client second-guess you by replying with a screenshot from GPT?
ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?
no, people have no shame. they have a need for a little bit of (borrowed) self importance and validation.
Which is why I applaud every code of conduct that has public ridicule as the punishment for wasting everybody's time.
But (as someone else described), GPTs and other current-day LLMs are probabilistic, yet 99% of what they produce seems feasible enough.
That's a big part of the problem-- their false answers are often more plausible and convincing than the truth. The output almost always seems feasible-- whether it's true is an entirely different matter.
Historically, when most systems fail they produce obvious nonsense, and when they don't, they produce something at least related to the truth (perhaps biased or mis-calibrated). LLM output can be both highly plausible and completely unrelated to reality.