zlacker

1. kamran (OP) 2025-06-07 15:23:03
The two interesting things I learned after reading this paper:

Even when the prompt spells out the exact steps needed to arrive at a solution, the reasoning models still take roughly as many steps to reach a workable solution as they do without that help.

The other thing, which seems obvious in hindsight (I don't typically use these reasoning models in my day to day), is that it takes a significant number of tokens before reasoning models outperform non-reasoning models by a significant margin.
