zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. eschat+zw1 2023-11-21 04:16:35
>>koie+(OP)
LLMs can’t find reasoning errors because *LLMs don’t reason*.

It’s incredible how uninformed the average Hackernews is about artificial intelligence. But the average Hackernews never met a hype train they wouldn’t try to jump on.

2. cmrdpo+2B1 2023-11-21 04:52:05
>>eschat+zw1
I agree they can't reason, but you shouldn't be so quick to be dismissive: you need to give your definition of reasoning, and you should be able to back it up with papers. Part of the reason some commenters on HN match what you're smearing the whole community with is that they don't actually have a definition of what reasoning is, or they have a different one.

There have been some good papers on this topic posted to HN, and I do think they show that LLMs don't reason -- though they certainly give the appearance of doing so with the right prompts. What the good papers have in common is a formal definition of what "reasoning" is.

The typical counterargument is "how do we know the human brain isn't like this, too?", or "there are lots of humans who also don't reason", etc., which I think is a bad-faith argument.
