zlacker

[parent] [thread] 6 comments
1. DrSiem+(OP)[view] [source] 2023-11-20 19:26:28
Because LLMs just mimic human communication based on massive amounts of human-generated data and have no actual intelligence at all.

It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.

replies(4): >>tempes+M6 >>hackin+c7 >>Kevin0+8z >>astran+rQ
2. tempes+M6[view] [source] 2023-11-20 19:52:53
>>DrSiem+(OP)
One might argue that humans do a similar thing, and that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
replies(1): >>westur+Xo
3. hackin+c7[view] [source] 2023-11-20 19:54:04
>>DrSiem+(OP)
Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?
replies(1): >>Freeby+uo1
4. westur+Xo[view] [source] [discussion] 2023-11-20 21:00:55
>>tempes+M6
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) With next-word prediction, that just is it:

"LLMs cannot find reasoning errors, but can correct them" >>38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486

5. Kevin0+8z[view] [source] 2023-11-20 21:44:17
>>DrSiem+(OP)
Or maybe the intelligence is in language and cannot be dissociated from it.
6. astran+rQ[view] [source] 2023-11-20 23:17:32
>>DrSiem+(OP)
There is room for intelligence in all three stages: wherever the original data came from, the training on it, and the inference over it. So just claiming the third step doesn't involve any isn't good enough.

Especially since you have to explain how "just mimicking" works so well.

7. Freeby+uo1[view] [source] [discussion] 2023-11-21 03:02:55
>>hackin+c7
As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even those here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever. It must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away. We simply do not know. If an LLM, regardless of its mechanism of action or how 'stupid' it may be, were able to accomplish all of the requirements of an AGI, then it is an AGI. Simple as that.

I imagine that when we actually reach AGI, people will start saying, "Yes, but it is not real AGI because..." AGI should be a measure of capabilities, not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not keep moving the goalposts.
