
[return to "LLMs cannot find reasoning errors, but can correct them"]
1. nextwo+Ho 2023-11-20 21:10:03
>>koie+(OP)
If this is the case, then just run it X times until the error rate drops near zero. AGI solved.
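
Roughly this loop, I imagine. A minimal sketch, assuming the workflow splits into separate generate / find-error / revise steps; generate, find_error, and revise here are hypothetical stand-ins for model calls, not any real API:

    def generate(task: str) -> str:
        # placeholder for an LLM call that drafts an answer
        return f"draft answer for: {task}"

    def find_error(answer: str) -> str | None:
        # placeholder critic; a real one would be another LLM call
        return None

    def revise(answer: str, error: str) -> str:
        # placeholder corrector that applies the reported fix
        return f"{answer} [fixed: {error}]"

    def solve(task: str, max_rounds: int = 5) -> str:
        # "run it X times": re-check and revise until no error is reported
        answer = generate(task)
        for _ in range(max_rounds):
            error = find_error(answer)
            if error is None:  # critic reports no error, so stop early
                break
            answer = revise(answer, error)
        return answer

The snag, going by the paper's title, is that the find_error step is exactly the part LLMs are bad at, so the loop has no reliable signal for when to stop.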
2. bee_ri+vy 2023-11-20 21:52:38
>>nextwo+Ho
I don’t think it would solve AGI, but having multiple models arguing with each other seems sort of similar to how we work things out when we’re thinking hard, right? Consider a hypothesis, argue for or against it in your head.
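
A minimal sketch of that argue-it-out idea, assuming a fixed number of rounds and a judge at the end; debater and judge are hypothetical stubs, not a real API:

    def debater(stance: str, hypothesis: str, transcript: list[str]) -> str:
        # placeholder for an LLM arguing one side, given the debate so far
        return f"[{stance}] argument about {hypothesis!r}"

    def judge(hypothesis: str, transcript: list[str]) -> str:
        # placeholder for a model weighing the full transcript
        return f"verdict on {hypothesis!r}"

    def debate(hypothesis: str, rounds: int = 3) -> str:
        transcript: list[str] = []
        for _ in range(rounds):
            # both sides see the shared transcript, so each can rebut the last point
            transcript.append(debater("pro", hypothesis, transcript))
            transcript.append(debater("con", hypothesis, transcript))
        return judge(hypothesis, transcript)

The point of the shared transcript is that each side responds to the other's latest argument rather than arguing in isolation, which is closer to the consider-then-rebut process you describe.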