[return to "LLMs cannot find reasoning errors, but can correct them"]
1. einpok+xj 2023-11-20 20:49:27
>>koie+(OP)
No, they can't "correct reasoning errors", and that's a clickbait title.
2. ming03+0A 2023-11-20 22:00:09
>>einpok+xj
If you look at the paper, they only claim the LLM can correct errors when the mistake location is given; the mistake-finding part is not yet solved.
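For what it's worth, the setup amounts to something like this, a rough sketch of the two-stage protocol as I read it; `llm()` and the prompt wording are my stand-ins, not the paper's code:

```python
# Minimal sketch of the two-stage setup described above.
# llm() is a hypothetical callable, not the paper's actual API.
def correct_given_location(steps, mistake_idx, llm):
    """Stage 2: regenerate the trace from a known mistake location.

    Stage 1, finding mistake_idx in the first place, is the part
    the paper reports as unsolved.
    """
    prefix = steps[:mistake_idx]  # steps before the flagged one
    prompt = (
        "Step {} of this reasoning contains a mistake. "
        "Rewrite the reasoning from that step onward:\n{}"
    ).format(mistake_idx + 1, "\n".join(prefix))
    return prefix + llm(prompt)
```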
3. einpok+dS 2023-11-20 23:42:45
>>ming03+0A
They don't correct errors even then. They just generate something that sounds like what one might say in a conversation when constrained not to express the error. If there's essentially just one option left, it's the correct one; but then it's like telling someone that the answer to a yes/no question is not the one they gave. There's not much "error correction" to do at that point. (Toy sketch below.)
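To make that concrete: when the flagged step is a binary claim, the "correction" is forced, so no reasoning is exercised. A toy illustration, with made-up names:

```python
# Once a yes/no answer is flagged as wrong, the "corrected" answer
# is the only remaining option. Getting it right says nothing about
# the model's ability to repair reasoning.
def forced_correction(flagged_answer):
    options = {"yes", "no"}
    assert flagged_answer in options
    (other,) = options - {flagged_answer}
    return other

assert forced_correction("yes") == "no"
assert forced_correction("no") == "yes"
```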