zlacker

[parent] [thread] 15 comments
1. stewar+(OP)[view] [source] 2024-05-15 15:08:38
How can you be so certain there is 0 chance LLMs lead to AGI/superintelligence? Asking out of curiosity; it's not an argument I've heard before.
replies(3): >>guhida+o3 >>guitar+s4 >>barlin+j5
2. guhida+o3[view] [source] 2024-05-15 15:23:57
>>stewar+(OP)
I'm 100% certain that I need to do more than just predict the next token to be considered intelligent. Also call me when ChatGPT can manipulate matter.
replies(2): >>soulof+w7 >>mypalm+TS
3. guitar+s4[view] [source] 2024-05-15 15:29:18
>>stewar+(OP)
A pure LLM-based approach will not lead to AGI, I'm 100% sure. A recent research paper [0] has shown that no matter which LLM is used, it exhibits diminishing returns with scale, whereas you would want at least a linear improvement curve if you were aiming for AGI.

[0] https://www.youtube.com/watch?v=dDUC-LqVrPU
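(To make the distinction concrete, here's a tiny illustrative sketch with made-up numbers and an assumed power-law scaling curve, not figures from the paper, showing "diminishing returns" per decade of data versus the roughly constant gains a linear trend would imply.)

    import numpy as np

    # Made-up illustration, not the paper's data: assume benchmark accuracy
    # follows a saturating power law in training-data size.
    tokens = np.array([1e9, 1e10, 1e11, 1e12])
    accuracy = 1.0 - 0.5 * (tokens / 1e9) ** -0.2

    # Absolute gain from each 10x increase in data keeps shrinking:
    print(np.diff(accuracy))   # ~[0.18, 0.12, 0.07] -- diminishing returns
    # A "linear" trend would need roughly constant gains per decade instead.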

replies(1): >>sebzim+d7
4. barlin+j5[view] [source] 2024-05-15 15:33:00
>>stewar+(OP)
LLMs are gigantic curves fitted to civilization-scale datasets, and their predictions are just readouts of those curves. A language model is a mathematical construct and can only be as intelligent as the algebra book sitting on your shelf.
replies(2): >>holler+f6 >>dwaltr+Ht
5. holler+f6[view] [source] [discussion] 2024-05-15 15:37:41
>>barlin+j5
>LLMs are gigantic curves fitted to civilizational scale datasets

>A language model is a mathematical construct

That is like telling someone from the Middle Ages that a gun is merely an assemblage of metal parts, not so different from the horseshoes and cast-iron nails produced by the village blacksmith, and that consequently it is safe to hand a child a loaded gun.

ADDED: Actually, a better response (because it does not rely on an analogy) is to point out that none of the people who are upset about the possibility that most of the benefits of AI will accrue to a few tech titans and billionaires would be in the least bit reassured by being told that an AI model is just a mathematical construct.

6. sebzim+d7[view] [source] [discussion] 2024-05-15 15:41:58
>>guitar+s4
Based on the abstract, this is about image models, not LLMs.
replies(1): >>guitar+qM
7. soulof+w7[view] [source] [discussion] 2024-05-15 15:43:31
>>guhida+o3
> Also call me when ChatGPT can manipulate matter.

You mean like PaLM-E? https://palm-e.github.io/

Embodiment is the easy part.

8. dwaltr+Ht[view] [source] [discussion] 2024-05-15 17:22:11
>>barlin+j5
An algebra book is a collection of paper pages with ink on them. An LLM is... nothing like that at all. LLMs are complex machines that operate on data and produce data. Books are completely static. They don't do anything.

Do you have a better analogy? I'd like to hear more about how ML models can't be intelligent, if you don't mind.

I'm pretty skeptical of the idea that we know enough at this point to make that claim definitively.

replies(1): >>andsoi+Qm1
9. guitar+qM[view] [source] [discussion] 2024-05-15 19:02:26
>>sebzim+d7
Ah, fair point; I should've read it more carefully.

I'm dialing my probability back to 99%. I still don't believe just feeding more data to an LLM will do it, but I'll allow for the possibility.

replies(1): >>DrSiem+IZ1
10. mypalm+TS[view] [source] [discussion] 2024-05-15 19:36:33
>>guhida+o3
Are you 100% certain that the human brain performs no language processing which is analogous to token prediction?
replies(1): >>stubis+tz1
11. andsoi+Qm1[view] [source] [discussion] 2024-05-15 22:34:34
>>dwaltr+Ht
> Books are completely static. They don't do anything.

Books (and writing) are a big force in cultural evolution.

replies(1): >>dwaltr+Ww1
12. dwaltr+Ww1[view] [source] [discussion] 2024-05-16 00:09:34
>>andsoi+Qm1
Yes, I love books. They are awesome. But we are talking about machine intelligence, so that's not super relevant.

Books aren't data/info-processing machines, by themselves. LLMs are.

13. stubis+tz1[view] [source] [discussion] 2024-05-16 00:34:33
>>mypalm+TS
A human brain certainly does do predictions, which is very useful to the bit that makes decisions. But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize? The best it can do is blindly follow the mob, a behavior we consider unintelligent even when done by human brains.
replies(1): >>craken+9T1
14. craken+9T1[view] [source] [discussion] 2024-05-16 04:37:07
>>stubis+tz1
> But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize?

My intuition is that these are emergent properties of sufficiently large and complex prediction engines. A good enough prediction/optimization engine can act in an agentic way without ever having been given that explicit goal.

I recently read this very interesting piece that dives into this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...

replies(1): >>soulof+Uo2
15. DrSiem+IZ1[view] [source] [discussion] 2024-05-16 06:01:31
>>guitar+qM
Obviously, feeding in more data won't do anything besides increase the knowledge available.

The next steps would come from entirely different directions, like implementing actual reasoning, global outline planning, and the capacity to evolve after training is done.

16. soulof+Uo2[view] [source] [discussion] 2024-05-16 11:46:35
>>craken+9T1
I'm of the belief that the entire conscious experience is a side effect of the need for us to make rapid predictions when time is of the essence, such as when hunting or fleeing. Otherwise, our subconscious could probably handle most of the work just fine.