zlacker

[return to "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]
1. ivape+qw[view] [source] 2025-06-06 22:16:48
>>amrrs+(OP)
This is easily explained by accepting that there is no such thing as LRMs. LRMs are just LLMs that iterate on their own answers more (or provide themselves more context of a certain type). The reasoning loop in an "LRM" is equivalent to asking a regular LLM to "refine" its own response, or to "consider" additional context of a certain type. There is no such thing as reasoning, basically; it was always a method to "fix" hallucinations or supply more context automatically, nothing else. These big companies baked in one of the hackiest prompt-engineering tricks that your typical enthusiast figured out long ago, and they managed to brand it and profit off it. The craziest part is that DeepSeek was able to cause a multi-billion-dollar drop and pump in AI stocks with this one trick. Crazy times.
◧◩
2. AlienR+jH[view] [source] 2025-06-07 00:01:56
>>ivape+qw
Is that what "reasoning" means? That sounds pretty ridiculous.

I've thought before that AI is as "intelligent" as your smartphone is "smart," but I didn't think "reasoning" would be just another buzzword.

◧◩◪
3. ngneer+ZW[view] [source] 2025-06-07 03:34:10
>>AlienR+jH
I am not too familiar with the latest hype, but "reasoning" has a very straightforward definition in my mind: can the program in question derive new facts from old ones in a logically sound manner? Things like applying modus ponens: (A and (A => B)) => B. Or: all men are mortal, and Socrates is a man; therefore Socrates is mortal. If the program cannot deduce new facts, then it is not reasoning, at least not by my definition.
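The deduction step described above, deriving B from A and A => B, can be sketched as a tiny forward-chaining loop. This is a minimal illustration of what "deriving new facts from old ones" means mechanically; the fact strings and rules here are invented for the example, not taken from any model or the thread.

```python
# Forward chaining with modus ponens: given a set of facts and a list of
# implications (premise => conclusion), keep adding conclusions whose
# premises are already known, until nothing new can be derived.

facts = {"today is warm", "Socrates is a man"}
rules = [
    ("today is warm", "cats enjoy the weather"),
    ("Socrates is a man", "Socrates is mortal"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # Modus ponens: premise holds and premise => conclusion,
            # so conclusion holds.
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

By this definition, the question is whether an LLM performs something equivalent to this closure over its context, or merely emits text that resembles its output.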
◧◩◪◨
4. dist-e+og1[view] [source] 2025-06-07 09:06:49
>>ngneer+ZW
When people say LLMs can't do X, I like to try it.

    Q: Complete 3 by generating new knowledge:
    1. today is warm
    2. cats likes warm temperatures
    3.
A: Therefore, a cat is likely to be enjoying the weather today.

Q: does the operation to create new knowledge you did have a specific name?

A: ... Deductive Reasoning

Q: does the operation also have a Latin name?

A: ... So, to be precise, you used a syllogismus (syllogism) that takes the form of Modus Ponens to make a deductio (deduction).

https://aistudio.google.com/app/prompts/1LbEGRnzTyk-2IDdn53t...

People then say, "Of course it could do that, it just pattern-matched a logic textbook. I meant a real example, not an artificially constructed one like this. In a complex scenario, LLMs obviously can't do modus ponens."
