zlacker

1. PaulDa+(OP) 2024-10-20 02:00:02
> It's not clear to me that LLMs sufficiently scaled won't achieve superhuman performance on general cognitive tasks

If "general cognitive tasks" means "I give you a prompt in some form, and you give me an incredible response of some form " (forms may differ or be the same) then it is hard to disagree with you.

But if by "general cognitive task" you mean "all the cognitive things that humans do", then it is really hard to see why you would have any confidence that LLMs have any hope of achieving superhuman performance at these things.

replies(1): >>jhrmnn+hb
2. jhrmnn+hb 2024-10-20 05:12:01
>>PaulDa+(OP)
Even in cognitive tasks expressed via language, something like a memory feels necessary. At which point it’s not an LLM as in a generic language model. It would become a language model conditioned on the memory state, roughly as sketched below.
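
A rough sketch of what I mean, in Python. query_model here is a hypothetical stand-in for any text-in/text-out LLM call, not a real API; the point is only that the model stays generic while the conditioning lives outside it:

    from dataclasses import dataclass, field

    def query_model(prompt: str) -> str:
        # Stub for any text-in/text-out LLM call.
        return f"<model response to {len(prompt)} chars of prompt>"

    @dataclass
    class MemoryConditionedLM:
        memory: list[str] = field(default_factory=list)

        def respond(self, user_input: str) -> str:
            # The model itself is generic; the conditioning happens by
            # prepending the accumulated memory state to every prompt.
            prompt = "\n".join(self.memory + [user_input])
            reply = query_model(prompt)
            # Fold this exchange back into the memory state.
            self.memory.append(f"user: {user_input}")
            self.memory.append(f"assistant: {reply}")
            return reply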
replies(1): >>ddingu+Wm
3. ddingu+Wm 2024-10-20 08:00:41
>>jhrmnn+hb
More than a memory.

Needs to be a closed loop, running on its own.

Today, we get its attention and it responds. But frankly, if we did manage any sort of sentience, or even a simulation of it, it may well not respond.

To me, that is the real test.
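
Something like this hypothetical loop, where think and wants_to_respond are made-up stand-ins for whatever the inner model actually does. The loop keeps running whether or not we prompt it, and answering is a decision it makes, not a guarantee:

    import queue

    def think(state: str, stimulus: str | None) -> str:
        # Stub: evolve internal state whether or not any input arrived.
        return (state + "|" + (stimulus or "."))[-80:]

    def wants_to_respond(state: str) -> bool:
        # Stub: responding is a choice the loop makes, not a guarantee.
        return not state.endswith(".")

    def agent_loop(inbox: queue.Queue, outbox: queue.Queue, steps: int = 100) -> None:
        state = "idle"
        for _ in range(steps):  # bounded here; truly "on its own" would be while True
            try:
                stimulus = inbox.get_nowait()  # we "get its attention"
            except queue.Empty:
                stimulus = None  # no prompt; the loop runs anyway
            state = think(state, stimulus)
            if stimulus is not None and wants_to_respond(state):
                outbox.put("response to " + stimulus)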
