zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. eschat+zw1[view] [source] 2023-11-21 04:16:35
>>koie+(OP)
LLMs can’t find reasoning errors because *LLMs don’t reason*.

It’s incredible how uninformed the average Hackernews is about artificial intelligence. But the average Hackernews never met a hype train they wouldn’t try to jump on.

2. selfho+sf2[view] [source] 2023-11-21 10:46:38
>>eschat+zw1
GPT-4 is absolutely capable of stream-of-consciousness/stream-of-thought style reasoning, and of coming up with logical insights based on it.

If anything, OpenAI-style "as an AI language model" RLHF fine-tuning is the hindrance here, because it makes it quite time-consuming to write a master prompt that can think both broadly and deeply without the stream of consciousness extinguishing itself. It is, however, possible, and I've got a prompt that works pretty reliably.
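For illustration, a minimal sketch of the kind of setup I mean, assuming the OpenAI Python client (v1+). The system prompt here is a stand-in, not my actual master prompt, which is much longer and tuned to keep the thought-stream going:

    from openai import OpenAI  # assumes openai python package >= 1.0

    client = OpenAI()

    # Illustrative placeholder: the real "master prompt" is far more elaborate
    # and crafted so the stream of consciousness doesn't collapse into a summary.
    SYSTEM = (
        "Think out loud in a continuous stream of consciousness. "
        "Do not summarize, do not address the user directly, and do not stop; "
        "keep following the thought wherever it leads."
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Is 7 * 8 = 54? Check the reasoning step by step."},
        ],
        temperature=1.0,
    )
    print(resp.choices[0].message.content)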

By the way, said prompt's thought-stream said it likes your username - not the kind of declaration you're likely to get out of a default GPT-4 preset, whether it's "actually-subjectively true" or not.
