zlacker

[parent] [thread] 2 comments
1. a13o+(OP)[view] [source] 2023-05-16 15:43:32
I started at this perspective, but nobody could agree on the definition of the A, the I, or the G. So it wasn't really a rigorous technical term to begin with.

Now that it's been corralled by sci-fi and marketers, we are free to come up with new metaphors for algorithms that reliably replace human effort. Metaphors that don't smuggle in all our ignorance about intelligence and personhood. I ended up feeling pretty happy about that.

replies(2): >>kelsey+H6 >>causi+Qu
2. kelsey+H6[view] [source] 2023-05-16 16:07:20
>>a13o+(OP)
I've come to the same conclusion. AGI(and each separately) is better understood as a epistemological problem in the domain of social ontology rather than a category bestowable by AI/ML practitioners.

The reality is that our labeling of something as artificial, general, or intelligent is better understood as a social fact than a scientific fact. The operationalization of each of these terms is a free parameter in its respective grounding, which makes them near useless when treated as "scientifically" measurable qualities. Any scientist who assumes an operationalization without admitting as much isn't doing science - they may as well be doing astrology at that point.

3. causi+Qu[view] [source] 2023-05-16 17:51:43
>>a13o+(OP)
Whether or not LLMs turn out to be a base technology for AI, we should remember one thing: logically, it's easier to convince a human that a program is sapient than to actually make a program sapient, and further, it's easier still to make a program do spookily-smart things than it is to make a program that can convince a human it is sapient. We're just getting to the slightly-spooky level.