It’s incredible how uninformed the average Hackernews is about artificial intelligence. But the average Hackernews never met a hype train they wouldn’t try to jump on.
There have been some good papers on this topic that have come through HN, and I do think they show that LLMs don't reason -- but LLMs certainly give the appearance of doing so with the right prompts. The good papers, though, pair their claims with a formal definition of what "reasoning" is.
The typical counterargument is "how do we know the human brain isn't like this, too?", or "there are lots of humans who don't reason either", etc. Which I think is a bad-faith argument.
It IS really common, though, to come across people who either regurgitate arguments they've seen other people use, or who argue from intuition or feelings rather than from logically consistent chains of thought that they independently understand.
> they don't actually have a definition of what reasoning is
I would definitely not be able to define "reasoning" 100% exactly without simultaneously excluding 99% of what most people seem to consider "reasoning".
If I _were_ to make a completely precise definition, it would be to derive logically consistent and provable conclusions based on a set of axioms. Basically what Wolfram Alpha / Wolfram Language is doing.
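To make that concrete, here's a toy sketch (the names and rule format are my own illustration, not anyone's actual system) of what "derive provable conclusions from a set of axioms" means mechanically: start from known facts, repeatedly apply inference rules, and keep only what follows.

```python
# Toy forward-chaining derivation over implication rules (modus ponens).
# Purely illustrative of "reasoning as derivation from axioms".

def derive(axioms, rules):
    """axioms: set of facts; rules: list of (premises, conclusion) pairs."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # If every premise is already established, the conclusion provably follows.
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

axioms = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]
print(derive(axioms, rules))  # contains both 'socrates_is_a_man' and 'socrates_is_mortal'
```

Everything such a procedure produces is checkable against the axioms, which is exactly the property that LLM output lacks.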
Usually, though, when people talk about "reasoning", it's tightly coupled to some kind of "common sense", which (I think) is not that different from how LLMs operate.
And as for why people think they "reason" when what they're doing is more like applying intuition and heuristics, it seems to me that the brain runs a rationalization phase AFTER it reaches a conclusion. Maybe partly as a way to compress the information for easier storage/recall, and maybe to make it easier to convince others of the validity of the conclusions.