zlacker

[parent] [thread] 13 comments
1. bernie+(OP)[view] [source] 2023-11-18 12:06:00
I disagree. I don’t think LLMs are a pathway to AGI. I think LLMs will lead to incredibly powerful game-changing tools and will drive changes that affect the course of humanity, but this technology won’t lead to AGI directly.

I think AGI is going to arrive via a different technology, many years in the future still.

LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

replies(2): >>keepam+43 >>anon29+1q
2. keepam+43[view] [source] 2023-11-18 12:26:00
>>bernie+(OP)
I'm not saying LLMs are. LLMs are not the only thing going on right now. But they do enable a powerful tool.

I think the path to AGI is: embodiment. Give it a body, let it explore a world, fight to survive, learn action and consequence. Then AGI you will have.

replies(4): >>pixl97+DU >>SAI_Pe+e11 >>MacsHe+Y41 >>OOPMan+Ge1
3. anon29+1q[view] [source] 2023-11-18 14:45:45
>>bernie+(OP)
> LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

Distinction without difference

replies(1): >>keepam+Ic2
4. pixl97+DU[view] [source] [discussion] 2023-11-18 17:37:22
>>keepam+43
Note that embodiment doesn't mean anything human- or animal-like.

For example, you're limited to one body, but an A(G|S)I could have thousands of different bodies feeding data back to a processing facility, learning from billions of different sensors.

replies(1): >>keepam+w92
5. SAI_Pe+e11[view] [source] [discussion] 2023-11-18 18:10:38
>>keepam+43
Also continuous learning. The training step is currently separate from the inference step, so new generations have to be trained instead of learning continuously. Of course, continuous learning in a chatbot runs into the Microsoft Tay problem, where people train it to respond offensively.
replies(1): >>keepam+kd2
6. MacsHe+Y41[view] [source] [discussion] 2023-11-18 18:26:54
>>keepam+43
My embodiment is my PC environment. I interact with the world through computer displays.

There is no reason embodiment for AGI should need to be physical or mammalian-like in any way.

replies(1): >>keepam+Db2
7. OOPMan+Ge1[view] [source] [discussion] 2023-11-18 19:21:57
>>keepam+43
Nice try SkyNet
replies(1): >>keepam+Lb2
8. keepam+w92[view] [source] [discussion] 2023-11-19 00:37:50
>>pixl97+DU
Well, I disagree with you on embodiment, but on the thousands of bodies? Right, that's another part: evolution. Spread your bets.

But I disagree that a human or animal body isn't required.

I think we have to take the world as we see it and appreciate our own limitations: what we think of as intelligence fundamentally arises out of our evolution in this world, our embodiment in and response to this world.

So I think we do need to give it a body and let it explore this world.

I don't think the virtual-bodies thing is gonna work. I don't think letting it explore the Internet is gonna work. You have to give it a body with multiple senses and let it survive. That's how you get AGI, not virtual embodiment. Which I never meant, but I thought that was obvious, given that the term embodiment itself strongly suggests something that's not virtual! Hahaha! :)

9. keepam+Db2[view] [source] [discussion] 2023-11-19 00:51:53
>>MacsHe+Y41
Strong disagree. But it may take me a while to elucidate and enumerate the reasons.
replies(1): >>MacsHe+Pgc
10. keepam+Lb2[view] [source] [discussion] 2023-11-19 00:52:32
>>OOPMan+Ge1
Hahaha! :) thank you. That is such a compliment hahaha! :)
11. keepam+Ic2[view] [source] [discussion] 2023-11-19 00:58:14
>>anon29+1q
Disagree. Huge difference. In our tech-powered society, we often mistakenly think that we can describe everything about the world, and, equally falsely, that only what we can consciously describe exists.

But there is so much more to reality than what we can consciously describe, like 10,000 to 1, and none of that is captured by any of these synthetic representations.

So far, anyway. And yet all of that, or a lot of it, is understood, responded to, and dealt with by the intelligence that resides within our bodies and our subconscious.

And our own intelligence arises out of that; you cannot have general intelligence without reality. No matter how much data you train it on from the Internet, it's never gonna be as rich as, or the same as, putting it in a body in the real world and letting it grow, learn, experience, and evolve. So any "intelligence" you get out of this virtual synthetic training is never going to be real. It is always gonna be a poor copy of intelligence, and it's not gonna be an AGI.

replies(1): >>anon29+ni2
12. keepam+kd2[view] [source] [discussion] 2023-11-19 01:02:25
>>SAI_Pe+e11
Yeah, evolution, multiple generations. Necessary for sure. Things have to die; otherwise there's no risk. Without risk there's no real motivation to live, without that there's no emotion and no motivation to learn, and without that there's no AGI.
13. anon29+ni2[view] [source] [discussion] 2023-11-19 01:43:06
>>keepam+Ic2
Developing an AGI is not the same as developing an artificial human. The former is achievable; the latter is not. The problem is that many of the gnostics today believe that giving the appearance of AGI (i.e., having all the utility to a human being that a general mechanical intelligence would have) somehow instills humanity into the system. It does not.

Intelligence is not the defining characteristic of humanity, which is what you're getting at here. But it is something that can be automated.

14. MacsHe+Pgc[view] [source] [discussion] 2023-11-21 16:24:07
>>keepam+Db2
I'm disabled and have had a computer in front of me since I was 2. I'm rarely not in front of a screen except to shower and sleep.

Plenty of very intelligent people are completely paralyzed. The sensation of physical embodiment is highly overrated and is surely not necessary for intelligence.
