zlacker
[return to "LLMs cannot find reasoning errors, but can correct them"]
1. nextwo+Ho
2023-11-20 21:10:03
>>koie+(OP)
If this is the case, then just run it X times till error rate drops near 0. AGI solved.
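The "run it X times" idea only drives the error rate toward zero under strong assumptions: attempts must fail independently, and you need a reliable way to tell a correct run from a wrong one (which is exactly what the paper says LLMs struggle with). A minimal sketch of the arithmetic, with an illustrative per-attempt failure probability:

```python
def residual_error_rate(p: float, n: int) -> float:
    """Probability that all n independent attempts fail,
    given each attempt fails with probability p."""
    return p ** n

# Illustrative numbers only: with p = 0.3 per attempt,
# five independent retries leave a ~0.24% failure rate.
print(residual_error_rate(0.3, 5))
```

The catch is the independence and verification assumptions: if the model cannot identify which run was correct, resampling alone does not select the right answer.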
2. ming03+xB
2023-11-20 22:09:26
>>nextwo+Ho
As the paper suggests, LLMs cannot yet identify their own mistakes, though. They can only fix a mistake if its location is given.