zlacker

[parent] [thread] 4 comments
1. Aperoc+(OP)[view] [source] 2024-05-17 17:51:42
That's an overcomplication; how about my naive belief that LLMs (and increasing their size) don't lead to AGI?

Not saying AGI is impossible, just that large models and the underlying statistical model beneath them are not the path.

replies(3): >>mitthr+L3 >>Footke+li >>jonono+Fk1
2. mitthr+L3[view] [source] 2024-05-17 18:16:41
>>Aperoc+(OP)
I think they aren't the full answer, no matter how much they're scaled up. But they may be one essential element of a working solution, and perhaps one or two brilliant insights away. I also think that some of the money being invested in the LLM craze will be directed into the search for those other brilliant insights.
3. Footke+li[view] [source] 2024-05-17 19:54:06
>>Aperoc+(OP)
LLMs don't have to be smart enough to be AGI. They just have to be smart enough to create AGI. And if creating something smarter than yourself sounds crazy, remember that we were created by simpler ancestors that we now effortlessly dominate.
replies(1): >>Aperoc+jz
4. Aperoc+jz[view] [source] [discussion] 2024-05-17 22:18:52
>>Footke+li
I don't disagree with the general notion, but it seems to me that LLMs being smart enough to create AGI is even more far-fetched than their being smart enough to be AGI.
5. jonono+Fk1[view] [source] 2024-05-18 09:27:57
>>Aperoc+(OP)
Many teams are trying to combine their ideas with LLMs, because despite their weaknesses, LLMs (and related concepts such as RLHF, transformers, self-supervised learning, and internet-scale datasets) have made some remarkable gains. Those teams come from the whole spectrum of ML and AI research, and they wish to use their ideas to overcome some of the weaknesses of current-day LLMs. Do you also think that none of these children can lead to AGI? Why not?