1. kelsey+ (OP) 2023-05-16 16:07:20
I've come to the same conclusion. AGI (and each of its component terms separately) is better understood as an epistemological problem in the domain of social ontology rather than as a category that AI/ML practitioners can bestow.

The reality is that our labeling of something as artificial, general, or intelligent is better understood as a social fact than a scientific fact - if only because the operationalization of each of these terms is a free parameter in their respective groundings, which makes them nearly useless when treated as "scientifically" measurable qualities. Any scientist who assumes an operationalization without admitting as much isn't doing science - they may as well be doing astrology at that point.
