LLMs seem to have proven themselves to be more than a one-trick pony. There is some semblance of reasoning, structuring, and so on, whether it happens directly within the LLM or is supported by external code. For example, it can be argued that the latest LLMs like Gemini 2.5 and Claude 4 do in fact perform complex reasoning.
We have always taken for granted that you need intelligence for that, but what if you don't? That would greatly change our view of intelligence and take away one of the main factors we test for in, say, animals to define their "intelligence".
They most definitely don't. We attach symbolic meaning to their output because we can map it semantically to the input we gave them, which is why people are often caught by surprise when those mappings break down.
LLMs can emulate reasoning, but the failure modes show that they don't actually reason. We can get them to emulate reasoning well enough, for long enough, to fool us, investors, and the media. But doubling down in the hope that this problem goes away with scale or fine-tuning is proving increasingly reckless.