zlacker

[parent] [thread] 6 comments
1. eschat+(OP)[view] [source] 2023-11-21 04:16:35
LLMs can’t find reasoning errors because *LLMs don’t reason*.

It’s incredible how uninformed the average Hackernews is about artificial intelligence. But the average Hackernews never met a hype train they wouldn’t try to jump on.

replies(3): >>cmrdpo+t4 >>dmichu+wu >>selfho+TI
2. cmrdpo+t4[view] [source] 2023-11-21 04:52:05
>>eschat+(OP)
I agree they can't reason, but you shouldn't be so quick to be dismissive: you need to give your definition of reasoning, and you should be able to back it up with papers. Part of the reason some HN commenters reflect what you're smearing the whole community with is that they don't actually have a definition of what reasoning is, or have a different one.

There have been some good papers on this topic that have come through HN, and I do think they show that LLMs don't reason -- though they certainly give the appearance of doing so with the right prompts. But the good papers pair their results with a formal definition of what "reasoning" is.

The typical counterargument is "how do we know the human brain isn't like this, too?" or "there are lots of humans who also don't reason", etc., which I think is a bad-faith argument.

replies(1): >>trasht+ZL
3. dmichu+wu[view] [source] 2023-11-21 08:42:24
>>eschat+(OP)
> LLMs can’t find reasoning errors because LLMs don’t reason.

I've had several experiences of people belittling me when I say the same thing, to the extent that I rarely say it anymore. For everybody else, AGI is around the corner and it's gonna dominate the world.

> never met a hype train they wouldn’t try to jump on

Crypto-currencies

replies(1): >>rsynno+Tu
◧◩
4. rsynno+Tu[view] [source] [discussion] 2023-11-21 08:46:06
>>dmichu+wu
> Crypto-currencies

HN _eventually_ largely gave up on these, but it was basically a True Believer space from 2011 to the early days of NFTs; it was more credulous than just about any other community which had known about cryptocurrencies since the early days.

5. selfho+TI[view] [source] 2023-11-21 10:46:38
>>eschat+(OP)
GPT-4 is absolutely capable of stream-of-consciousness/stream-of-thought style reasoning, and of coming up with logical insights based on it.

If anything, OpenAI-style "as an AI language model" RLHF fine-tuning is the hindrance here, because it makes it quite time-consuming to write a master prompt that can think both broadly and deeply without the stream of consciousness extinguishing itself. It is, however, possible, and I've got a prompt that works pretty reliably.

By the way, said prompt's thought-stream said it likes your username - not a type of declaration you're likely to get out of a default GPT-4 preset, whether it's "actually-subjectively true" or not.

◧◩
6. trasht+ZL[view] [source] [discussion] 2023-11-21 11:12:57
>>cmrdpo+t4
> "there's lots of humans who also don't reason"

It IS really common, though, to come across people who either regurgitate arguments they've seen other people use, or argue from intuition or feelings rather than from logically consistent chains of thought that they independently understand.

> they don't actually have a definition of what reasoning is

I would definitely not be able to define "reasoning" 100% exactly without simultaneously excluding 99% of what most people seem to consider "reasoning".

If I _were_ to make a completely precise definition, it would be deriving logically consistent and provable conclusions from a set of axioms. Basically what Wolfram Alpha / Wolfram Language does.
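
(To illustrate the kind of thing I mean, here's a toy forward-chaining sketch in Python -- the facts and rules are made up for the example, and this is obviously not how Wolfram Language actually works:)

  # Facts are plain strings; a rule is (set of premises, conclusion).
  axioms = {"socrates is a man"}
  rules = [({"socrates is a man"}, "socrates is mortal"),
           ({"socrates is mortal"}, "socrates will die")]

  derived = set(axioms)
  changed = True
  while changed:  # keep applying rules until nothing new follows
      changed = False
      for premises, conclusion in rules:
          if premises <= derived and conclusion not in derived:
              derived.add(conclusion)
              changed = True

  print(derived)  # exactly the conclusions provable from the axioms, nothing more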

Usually, though, when people talk about "reason", it's tightly coupled to some kind of "common sense", which (I think) is not that different from how LLMs operate.

And as for why people think they "reason" when what they're doing is more like applying intuition and heuristics, it seems to me that the brain runs a rationalization phase AFTER it reaches a conclusion. Maybe partly as a way to compress the information for easier storage/recall, and maybe to make it easier to convince others of the validity of the conclusions.

replies(1): >>cmrdpo+ch1
◧◩◪
7. cmrdpo+ch1[view] [source] [discussion] 2023-11-21 14:29:33
>>trasht+ZL
The difference, as I pointed out elsewhere, is that while humans as a whole are intellectually lazy and don't always "reason" things through, they're on the whole very capable of it, especially under duress.

Hell, I've watched my 2 border collies do a kind of "reasoning" to problem-solve -- observing, breaking a problem down, working step by step. They don't do it well, but they try, because it's part of their drive.

This is in marked contrast to the LLMs, whose appearance of reasoning is really just mimicry of the artifacts of reasoning that other minds have done for them. It's parasitic.

[go to top]