zlacker

[parent] [thread] 14 comments
1. candid+(OP)[view] [source] 2023-05-16 15:11:36
I'm sad that we've lost the battle with calling these things AI. LLMs aren't AI, and I don't think they're even a path towards AI.
replies(4): >>a13o+P7 >>vi2837+X8 >>shawab+vl >>Vox_Le+Qw
2. a13o+P7[view] [source] 2023-05-16 15:43:32
>>candid+(OP)
I started at this perspective, but nobody could agree on the definition of the A, or the I; and also the G. So it wasn't a really rigorous technical term to begin with.

Now that it's been corralled by sci-fi and marketers, we are free to come up with new metaphors for algorithms that reliably replace human effort. Metaphors which don't smuggle in all our ignorance about intelligence and personhood. I ended up feeling pretty happy about that.

replies(2): >>kelsey+we >>causi+FC
3. vi2837+X8[view] [source] 2023-05-16 15:47:46
>>candid+(OP)
Yeah, what is currently being called AI is not AI at all.
replies(2): >>Robotb+pd >>mindcr+rH
4. Robotb+pd[view] [source] [discussion] 2023-05-16 16:03:54
>>vi2837+X8
AI doesn’t imply it’s general intelligence.
5. kelsey+we[view] [source] [discussion] 2023-05-16 16:07:20
>>a13o+P7
I've come to the same conclusion. AGI (and each term separately) is better understood as an epistemological problem in the domain of social ontology rather than a category bestowable by AI/ML practitioners.

The reality is that our labeling of something as artificial, general, or intelligent is better understood as a social fact than a scientific fact. The operationalization of each of these terms is a free parameter in their respective groundings, which makes them near useless when taken as "scientifically" measurable qualities. Any scientist who assumes an operationalization without admitting as much isn't doing science - it may as well be astrology at that point.

6. shawab+vl[view] [source] 2023-05-16 16:35:37
>>candid+(OP)
If LLMs aren't AI, nothing else so far is AI either.

What exactly does AI mean to you?

replies(1): >>brkebd+Pr
7. brkebd+Pr[view] [source] [discussion] 2023-05-16 17:01:43
>>shawab+vl
thanks for exemplifying the problem.

intelligence is what allows one to understand phrases and then construct meaning from them. e.g. the paper is yellow. AI will need to have a concept of paper and yellow, and the verb "to be". LLMs just mash samples together and form a basic map of what can be thrown in one bucket or another, with no concept of anything and no understanding.

basically, AI is someone capable of minimal criticism. an LLM is someone who just sits in front of the tv and has knee-jerk reactions without an ounce of analytical thought. qed.

replies(5): >>logdap+Hx >>hammyh+Wz >>mindcr+XH >>shawab+hN >>jamesh+mN
8. Vox_Le+Qw[view] [source] 2023-05-16 17:22:58
>>candid+(OP)
> I'm sad that we've lost the battle with calling these things AI. LLMs aren't AI, and I don't think they're even a path towards AI.

Ditto the sentiments. What about other machine learning modalities, like image detection? Will I need a license for my Mask R-CNN models? Maybe it is just me, but the whole thing reeks of control.

9. logdap+Hx[view] [source] [discussion] 2023-05-16 17:27:00
>>brkebd+Pr
> intelligence is what allows one to understand phrases and then construct meaning from it. e.g. the paper is yellow

That doesn't clarify anything; you've only shuffled the confusion around, moving it to 'understand' and 'meaning'. What does it mean to understand yellow? An LLM or another person could tell you things like "Yellow? Why, that's the color of lemons" or give you a dictionary definition, but does that demonstrate 'understanding', whatever that is?

It's all a philosophical quagmire, made all the worse because for some people it's a matter of faith that human minds are fundamentally different from anything soulless machines can possibly do. But for the same reason, these aren't important questions anyway. Whether or not the machine 'understands' what it means for paper to be yellow, it can still perform tasks that relate to the yellowness of paper. You could ask an LLM to write a coherent poem about yellow paper and it easily can. Whether or not it 'understands' has no real relevance to practical engineering matters.

10. hammyh+Wz[view] [source] [discussion] 2023-05-16 17:37:07
>>brkebd+Pr
Is what you're describing simply not what people are using the term AGI to loosely describe? An LLM is an AI model, is it not? No, it isn't an AGI, and no, I don't think LLMs are a path to an AGI, but it's certainly ML, which is objectively a sub-field of AI.
11. causi+FC[view] [source] [discussion] 2023-05-16 17:51:43
>>a13o+P7
Whether or not LLMs will be a base technology for AI, we should remember one thing: logically, it's easier to convince a human that a program is sapient than to actually make a program sapient, and further, it's easier still to make a program do spookily-smart things than it is to make a program that can convince a human it is sapient. We're just getting to the slightly-spooky level.
12. mindcr+rH[view] [source] [discussion] 2023-05-16 18:17:36
>>vi2837+X8
Something doesn't need to be full human-level general intelligence to be considered as falling under the "AI" rubric. In the past people spoke of "weak AI" versus "strong AI" and/or "narrow AI" versus "wide AI" to reflect the different "levels" of AI. These days the distinction most people use is "AI" vs "AGI", which you could loosely (very loosely) think of as somewhat analogous to "weak and/or narrow AI" vs "strong, wide AI".
13. mindcr+XH[view] [source] [discussion] 2023-05-16 18:20:28
>>brkebd+Pr
> intelligence is what allows one to understand phrases and then construct meaning from it. e.g. the paper is yellow.

That's one, out of many, definitions of "intelligence". But there's no particular reason to insist that that is the definition of intelligence in any universal, objective sense. Especially when talking about "artificial intelligence", where plenty of people in the field will allow that the goal is not necessarily to exactly replicate human intelligence, but rather simply to achieve behavior that matches "intelligent behavior", regardless of the mechanism behind it.

14. shawab+hN[view] [source] [discussion] 2023-05-16 18:48:31
>>brkebd+Pr
> basically, AI is someone capable of minimal criticism

That's not the definition of AI or intelligence

You're letting your understanding of how LLMs work bias you. They may be, at their core, token autocompleters, but they have emergent intelligence.

https://en.m.wikipedia.org/wiki/Emergence

15. jamesh+mN[view] [source] [discussion] 2023-05-16 18:49:08
>>brkebd+Pr
LLMs absolutely have a concept of 'yellow' and 'paper' and the verb 'to be'. They are nothing BUT a collection of mappings around language concepts - their connotative and denotative meanings, their cultural associations, the contexts in which they arise, and the things they can and cannot do. An LLM knows that paper's normally white and that post-it notes are often yellow; it knows that paper can be destroyed by burning or shredding or dissolving in water; it knows paper can be marked and drawn and written on and torn, and used to write letters or folded to make origami cranes.

What kind of ‘understanding’ are you looking for?
