In my experience so far, most "AI skeptics" seem to be doing one of two things: trying to catch the LLM in a reasoning error, or asking it to turn a vague description into a polished product in one shot. They often make the latter worse by piling on context after the first wrong answer, which tends to anchor the model on exactly the thing being corrected: stop thinking about the pink elephant. No, I said don't think about the pink elephant! Why do you keep mentioning the pink elephant? I said I don't want a pink elephant in the text!