zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. seekno+i8[view] [source] 2023-11-20 20:06:30
>>koie+(OP)
It can also "correct" proper reasoning. :)

~"When told where it's wrong, LLM can correct itself to improve accuracy."

Similar to cheating in chess: a master only needs to be told the value of a few positions to gain an advantage.

2. mark_l+Tx[view] [source] 2023-11-20 21:49:44
>>seekno+i8
I have noticed this several times. When I give feedback that a mistake was made (without saying what the mistake is), smaller and medium-sized LLMs often then give a correct response.
3. erhaet+sB[view] [source] 2023-11-20 22:08:54
>>mark_l+Tx
Which I take full advantage of when the output is about 90% correct but the "fix" requires a bit of refactoring: I just tell it what I want and presto. Faster than doing it by hand.