zlacker

1. cmrdpo+(OP) 2023-11-21 04:52:05
I agree they can't reason, but you shouldn't be so quick to be dismissive: you need to give your definition of reasoning, and you should be able to back it up with papers. Part of the reason some commenters on HN reflect what you're smearing the whole community with is that they don't actually have a definition of what reasoning is, or they have a different one.

There have been some good papers on this topic that have come through HN, and I do think they show that LLMs don't reason -- though they certainly give the appearance of doing so with the right prompts. The good ones pair their results with a formal definition of what "reasoning" is.

The typical counterargument is "how do we know the human brain isn't like this, too?", or "there are lots of humans who also don't reason", etc. Which I think is a bad faith argument.

replies(1): >>trasht+wH
2. trasht+wH 2023-11-21 11:12:57
>>cmrdpo+(OP)
> "there's lots of humans who also don't reason"

It IS really common, though, to come across people who either regurgitate arguments they've seen other people use, or who argue from intuition or feelings rather than from logically consistent chains of thought that they independently understand.

> they don't actually have a definition of what reasoning is

I would definitely not be able to define "reasoning" 100% exactly without simultaneously excluding 99% of what most people seem to consider "reasoning".

If I _were_ to make a completely precise definition, it would be: deriving logically consistent, provable conclusions from a set of axioms. Basically what Wolfram Alpha / Wolfram Language does.
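
To make that concrete, here's a minimal forward-chaining sketch of what I mean -- repeatedly apply rules until nothing new is derivable, so everything in the result set is provable from the axioms. The facts and rules are invented purely for illustration:

    # Forward chaining: derive every conclusion provable from a set
    # of axioms (facts) and implication rules (premises -> conclusion).
    # Facts and rule names here are made up for the example.
    axioms = {"socrates_is_a_man"}
    rules = [
        ({"socrates_is_a_man"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only when all its premises are already derived.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)  # provable from the axioms
                changed = True

    print(sorted(derived))  # every element has an explicit derivation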

Usually, though, when people talk about "reasoning", it's tightly coupled to some kind of "common sense", which (I think) is not that different from how LLMs operate.
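
To caricature the difference with the sketch above: an LLM-style "conclusion" is just the most plausible continuation given context -- a learned heuristic, not a derivation. A toy example (the probabilities are made up):

    # "Socrates is a man, so he is ..." -- pick the most plausible
    # next token. Right answer, but there's no proof behind it.
    next_token_probs = {
        "mortal": 0.86,
        "immortal": 0.04,
        "greek": 0.10,
    }
    answer = max(next_token_probs, key=next_token_probs.get)
    print(answer)  # "mortal", by plausibility rather than derivation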

And as for why people think they "reason" when what they're doing is more like applying intuition and heuristics, it seems to me that the brain runs a rationalization phase AFTER it reaches a conclusion. Maybe partly as a way to compress the information for easier storage/recall, and maybe to make it easier to convince others of the validity of the conclusions.

replies(1): >>cmrdpo+Jc1
3. cmrdpo+Jc1 2023-11-21 14:29:33
>>trasht+wH
The difference, as I pointed out elsewhere, is that while humans are often intellectually lazy and don't always "reason" things through, they're on the whole very capable of it, especially under duress.

Hell, I've watched my two border collies do a kind of "reasoning" to solve problems -- observing, breaking a problem down, working step by step. They don't do it well, but they try, because it's part of their drive.

This is in marked contrast to the LLMs, whose appearance of reasoning is really just mimicry, assembled from the artifacts of reasoning that other minds have done for them. It's parasitical.
