zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. seekno+i8[view] [source] 2023-11-20 20:06:30
>>koie+(OP)
It can also "correct" proper reasoning. :)

~"When told where it's wrong, an LLM can correct itself to improve accuracy."

Similar to cheating in chess: a master only needs to be told the value of a few positions to gain an advantage.

2. tines+Cc[view] [source] 2023-11-20 20:21:27
>>seekno+i8
The abstract says as much:

> recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performances overall (Huang et al., 2023)
