zlacker

1. ericja (OP) 2019-12-14 23:44:16
Thank you for this AMA; it was eye-opening and made me think a lot about the organizational and tech-debt barriers to creating AGI (or to creating an organization that can create AGI).

I'm an ML researcher working on deep learning for robotics. I'm skeptical of the symbolic approach, by which 1) ontologists manually enter symbolic assertions and 2) the system deduces further things from its existing ontology. My skepticism comes from a position of Slavic pessimism: we don't actually know how to formally define any object, much less the ontological relationships between objects. If we let a machine use our garbage ontologies as axioms from which to prove further ontological relationships, the resulting ontology may end up completely disjoint from the reality we live in. There must be a forcing function by which reality tells the system that its ontology is incorrect, and a mechanism for unwinding the wrong ontologies.
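The failure mode I have in mind can be sketched as a toy (all names are hypothetical, not any real ontology system): one over-general axiom, propagated by simple forward chaining, yields a "proved" fact that reality contradicts, and without a retraction mechanism it stays in the ontology.

```python
# Toy sketch: forward chaining over a subclass hierarchy, showing how
# a single wrong axiom propagates into derived "facts".

# Hypothetical hand-entered axioms; "birds can fly" is the bad one.
subclass = {
    "penguin": "bird",
    "sparrow": "bird",
}
asserted = {("bird", "can_fly")}  # over-general axiom

def deduce(subclass, asserted):
    """Propagate properties down the subclass hierarchy until fixpoint."""
    derived = set(asserted)
    changed = True
    while changed:
        changed = False
        for child, parent in subclass.items():
            for cls, prop in list(derived):
                if cls == parent and (child, prop) not in derived:
                    derived.add((child, prop))
                    changed = True
    return derived

ontology = deduce(subclass, asserted)
# The bad axiom now "proves" ("penguin", "can_fly").

# A crude forcing function: observations from reality that contradict
# derived facts, flagging axioms that need to be unwound.
observations = {("penguin", "can_fly"): False}
contradictions = {fact for fact in ontology
                  if observations.get(fact) is False}
```

The point of the sketch is only that deduction amplifies whatever the axioms got wrong; the `observations` check stands in for the reality-driven feedback loop that a purely symbolic pipeline lacks.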

I'm reminded of a quote from the movie Alien: Covenant.

Walter: When one note is off, it eventually destroys the whole symphony, David.
