zlacker

[parent] [thread] 4 comments
1. nextwo+(OP)[view] [source] 2023-11-20 21:10:03
If this is the case, then just run it X times until the error rate drops near zero. AGI solved.
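A minimal sketch of the "run it X times" idea, as self-consistency-style majority voting over repeated samples. `sample_model` here is a hypothetical stand-in for a stochastic LLM call, not a real API:

```python
import random
from collections import Counter

def sample_model(question: str) -> str:
    """Hypothetical stand-in for a stochastic LLM call.
    Returns the right answer 70% of the time, a wrong one otherwise."""
    return "42" if random.random() < 0.7 else random.choice(["41", "43"])

def majority_vote(question: str, x: int = 25) -> str:
    """Sample the model X times and keep the most common answer."""
    votes = Counter(sample_model(question) for _ in range(x))
    return votes.most_common(1)[0][0]

# With a 70%-accurate sampler, the majority answer is almost always correct.
answer = majority_vote("what is 6 * 7?")
```

Of course this only reduces independent random errors; it can't fix systematic ones, where every sample is confidently wrong in the same way.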
replies(3): >>westur+z >>bee_ri+O9 >>ming03+Qc
2. westur+z[view] [source] 2023-11-20 21:11:55
>>nextwo+(OP)
This is called (algorithmic) convergence: does the model stably converge on one answer that it believes is most correct? And after how much compute and time?

Convergence (evolutionary computing) https://en.wikipedia.org/wiki/Convergence_(evolutionary_comp...

Convergence (disambiguation) > Science, technology, and mathematics https://en.wikipedia.org/wiki/Convergence#Science,_technolog...
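One way to operationalize that question: keep sampling until the modal answer stops changing, and report how much budget that took. A sketch, again with a hypothetical `sample_model` standing in for an LLM call:

```python
import random
from collections import Counter

def sample_model(question: str) -> str:
    # Hypothetical stochastic model call, for illustration only.
    return "yes" if random.random() < 0.8 else "no"

def converge(question: str, window: int = 20, max_samples: int = 1000):
    """Sample until the modal answer has been unchanged for `window`
    consecutive samples; return (answer, samples_spent)."""
    votes, leader, streak = Counter(), None, 0
    for n in range(1, max_samples + 1):
        votes[sample_model(question)] += 1
        top = votes.most_common(1)[0][0]
        streak = streak + 1 if top == leader else 1
        leader = top
        if streak >= window:
            return leader, n
    return leader, max_samples  # budget exhausted without stabilizing
```

The samples-spent number is exactly the "how much resources and time" part of the question: a model that needs 500 samples to stabilize converges, but expensively.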

3. bee_ri+O9[view] [source] 2023-11-20 21:52:38
>>nextwo+(OP)
I don’t think it would solve AGI, but having multiple models arguing with each other seems sort of similar to how we work things out when we’re thinking hard, right? Consider a hypothesis, argue for or against it in your head.
4. ming03+Qc[view] [source] 2023-11-20 22:09:26
>>nextwo+(OP)
As the paper suggested, LLMs cannot identify their own mistakes yet, though. They can only fix a mistake if its location is given.
replies(1): >>Hitton+gb1
5. Hitton+gb1[view] [source] [discussion] 2023-11-21 04:42:55
>>ming03+Qc
They would fix a "mistake" even if they were given a location where there is none.