zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. eschat+zw1[view] [source] 2023-11-21 04:16:35
>>koie+(OP)
LLMs can’t find reasoning errors because *LLMs don’t reason*.

It’s incredible how uninformed the average Hackernews is about artificial intelligence. But the average Hackernews never met a hype train they wouldn’t try to jump on.

◧◩
2. cmrdpo+2B1[view] [source] 2023-11-21 04:52:05
>>eschat+zw1
I agree they can't reason, but you shouldn't be so quick to be dismissive: you need to give your definition of reasoning, and you should be able to back it up with papers. Part of the reason some commenters on HN fit the description you're smearing the whole community with is that... they don't actually have a definition of what reasoning is, or they have a different one.

There have been some good papers on this topic posted to HN, and I do think they show that LLMs don't reason -- though with the right prompts they certainly give the appearance of doing so. But the good papers pair their results with a formal definition of what "reasoning" is.

The typical counterargument is "how do we know the human brain isn't like this, too?" or "there's lots of humans who also don't reason", etc., which I think is a bad faith argument.

◧◩◪
3. trasht+yi2[view] [source] 2023-11-21 11:12:57
>>cmrdpo+2B1
> "there's lots of humans who also don't reason"

It IS really common, though, to come across people who either regurgitate arguments they've seen others use, or who argue from intuition or feelings rather than from logically consistent chains of thought they independently understand.

> they don't actually have a definition of what reasoning is

I would definitely not be able to define "reasoning" 100% exactly without simultaneously excluding 99% of what most people seem to consider "reasoning".

If I _were_ to make a completely precise definition, it would be to derive logically consistent and provable conclusions based on a set of axioms. Basically what Wolfram Alpha / Wolfram Language is doing.
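
To make that strict sense of "reasoning" concrete, here's a toy sketch in Python (the facts and rules are invented purely for illustration, not taken from anything above): start from a set of axioms and mechanically add every conclusion that provably follows from what has already been derived, and nothing else.

    # Toy "reasoning as derivation": forward chaining over made-up Horn clauses.
    facts = {"socrates_is_a_man"}          # axioms
    rules = [
        # (premises, conclusion)
        ({"socrates_is_a_man"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # add a conclusion only when all of its premises are already derived,
            # so every fact in the set is provable from the axioms
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))
    # ['socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die']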

Usually, though, when people talk about "reasoning", it's tightly coupled to some kind of "common sense", which (I think) is not that different from how LLMs operate.

And as for why people think they "reason" when what they're doing is more like applying intuition and heuristics, it seems to me that the brain runs a rationalization phase AFTER it reaches a conclusion. Maybe partly as a way to compress the information for easier storage/recall, and maybe to make it easier to convince others of the validity of the conclusions.
