zlacker

1. guitar+(OP)[view] [source] 2024-05-15 15:29:18
A pure LLM-based approach will not lead to AGI, I'm 100% sure. A new research paper [0] has shown that no matter which LLM you use, it exhibits diminishing returns as you scale, whereas you'd want at least a linear curve if you're aiming for AGI.

[0] https://www.youtube.com/watch?v=dDUC-LqVrPU
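For anyone wondering what "diminishing returns vs. at least linear" means in practice, here's a quick back-of-the-envelope sketch in Python; the 0.3 exponent is a made-up number for illustration, not a figure from the linked work:

    import numpy as np

    # Toy illustration of "diminishing returns" vs. the "at least linear"
    # trend mentioned above. The power-law exponent 0.3 is an arbitrary
    # assumption for illustration, not taken from the linked work.
    budget = np.logspace(0, 6, 7)      # hypothetical compute/data budget (1x .. 1,000,000x)
    sub_linear = budget ** 0.3         # diminishing returns: each 10x buys less and less
    linear = budget * sub_linear[0]    # what a linear payoff would look like

    for b, s, l in zip(budget, sub_linear, linear):
        print(f"budget {b:>9.0f}x   sub-linear {s:7.2f}   linear {l:>9.0f}")

On a sub-linear curve like that, going from 1x to 1,000,000x the budget only moves the score from 1 to about 63, while a linear payoff would scale it a million-fold.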

replies(1): >>sebzim+L2
2. sebzim+L2[view] [source] 2024-05-15 15:41:58
>>guitar+(OP)
Based on the abstract, this is about image models, not LLMs.
replies(1): >>guitar+YH
3. guitar+YH[view] [source] [discussion] 2024-05-15 19:02:26
>>sebzim+L2
Ah fair point, should've read it more carefully.

I'm adjusting my probability back to 99%. I still don't believe that just feeding more data to an LLM will do it, but I'll allow for the possibility.

replies(1): >>DrSiem+gV1
4. DrSiem+gV1[view] [source] [discussion] 2024-05-16 06:01:31
>>guitar+YH
Obviously feeding more data won't do anything besides increase the knowledge available.

The next steps would be in totally different areas, like implementing actual reasoning, global outline planning, and the capacity to keep evolving after training is done.
