zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. nextwo+Ho[view] [source] 2023-11-20 21:10:03
>>koie+(OP)
If this is the case, then just run it X times till error rate drops near 0. AGI solved.
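The "run it X times" idea only drives the error rate toward zero if something can verify each attempt, which is exactly what the thread below disputes. A minimal sketch of the arithmetic, assuming independent attempts with a per-attempt error rate p and an oracle verifier (the function name and numbers are illustrative):

```python
def residual_error(p: float, x: int) -> float:
    """Probability that every one of x independent attempts is wrong.

    With an oracle verifier that can pick out a correct attempt,
    the residual error after x tries is p**x, which vanishes as x grows.
    Without a verifier, resampling gives no such guarantee.
    """
    return p ** x

print(residual_error(0.3, 1))  # 0.3
print(residual_error(0.3, 5))  # 0.00243
```

The catch, per the paper under discussion, is that the model itself cannot play the verifier role reliably.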
2. ming03+xB[view] [source] 2023-11-20 22:09:26
>>nextwo+Ho
As the paper suggests, LLMs cannot yet identify their own mistakes, and they can only fix a mistake if its location is given.
3. Hitton+Xz1[view] [source] 2023-11-21 04:42:55
>>ming03+xB
They would fix a "mistake" even if they were given a location where there is none.