This paper seems to focus on highly algorithmic, puzzle-like problems, which are not the typical application domain of LLMs, and it uses a <500M-parameter model. So my hunch is that "reasoning" works much better for the math, coding, factual recall, and writing tasks that most LLMs actually deal with.