zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. seekno+i8[view] [source] 2023-11-20 20:06:30
>>koie+(OP)
It can also "correct" proper reasoning. :)

~"When told where it's wrong, LLM can correct itself to improve accuracy."

Similar to cheating in chess: a master only needs to be told the value of a few positions to gain an advantage.

2. tines+Cc[view] [source] 2023-11-20 20:21:27
>>seekno+i8
This is said in the abstract as well:

> recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performances overall (Huang et al., 2023)

3. seekno+yk1[view] [source] 2023-11-21 02:51:57
>>tines+Cc
Yeah, but it's phrased in a convoluted way that really glosses over what's going on here.

Plus, sometimes the corrections aren't accurate. And of course, if you tell it where it's wrong and give it a second chance, the error rate will be lower...
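To see why oracle error-location feedback trivially boosts accuracy, here's a toy expected-value sketch (my own illustration, not from the paper; the retry-success probability is an assumption):

```python
def accuracy_with_oracle(p: float, q: float) -> float:
    """Expected accuracy when an oracle flags every wrong first answer
    and the model gets one retry that succeeds with probability q.

    p: first-attempt accuracy
    q: assumed chance a flagged retry lands on the right answer
       (hypothetical parameter, independent of the first attempt)
    """
    # Correct first tries stay correct; a fraction q of the flagged
    # (1 - p) wrong answers get fixed on the retry.
    return p + (1.0 - p) * q


baseline = 0.6        # example first-attempt accuracy
retry_success = 0.5   # assumed retry success rate
print(round(accuracy_with_oracle(baseline, retry_success), 3))  # 0.8
```

Any q > 0 raises the score, so "corrects itself when told where it's wrong" says little about the model's ability to *find* the error itself.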
