Now it's clear that knowledge graphs are far inferior to deep neural nets, yet even so, few people can explain the _root_ reason why.
I don't think Lenat's bet was a waste. I think it was sensible based on the information at the time.
The decision to research it largely in secret, as closed source, was, I think, a mistake.
No. It depends. In general, two technologies can’t be assessed independently of the application.
There is no class of problems deep nets can't handle. Will they always be the most efficient or best-performing solution? No, but a solution will always be possible.
This assumes that all classes of problems reduce to functions that can be approximated, per the universal approximation theorems, right?
Even for cases where the UAT applies (which is not everywhere, as I show next), your caveat understates the case: there are dramatically better and worse algorithms for different problems.
But I think a lot of people (including the comment above) misunderstand or misapply the UATs. Think about the assumptions! The UATs assume a fixed-length input, do they not? That breaks the correspondence with many classes of algorithms.*
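To make that assumption explicit, recall the classic one-hidden-layer statement (Cybenko 1989; Hornik et al. 1989), written here informally: for a _fixed_ input dimension $n$, any continuous $f$ on a compact $K \subset \mathbb{R}^n$, and any $\varepsilon > 0$, there exist a width $N$ and parameters $w_i \in \mathbb{R}^n$, $v_i, b_i \in \mathbb{R}$ such that

$$
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon.
$$

The dimension $n$ is fixed before the theorem even starts; it says nothing about one network covering all input lengths at once.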
## Example
Let's make a DNN that sorts a list of numbers, shall we? But we can't cheat and have it do only pairwise comparisons -- that is not the full sorting problem. We have to input the list of numbers and output the sorted list. At run time. With a variable-length list of inputs.
So no single DNN will do! For every input length we would need a different DNN, would we not? Training this collection of DNNs will be a whole lot of fun: it will make Bitcoin mining look like a poster child for energy conservation. /s
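Here is a minimal sketch of that constraint (assuming PyTorch; `LIST_LEN`, the layer widths, and the training recipe are all illustrative choices, nothing canonical). The input length is baked into the first layer's weight shape, so this net can only ever be asked to sort lists of exactly that length:

```python
# Sketch: a fixed-input-size MLP "sorter" (assumes PyTorch is installed).
import torch
import torch.nn as nn

LIST_LEN = 8  # hypothetical fixed length; every other length needs its own net

model = nn.Sequential(
    nn.Linear(LIST_LEN, 128),  # input dimension is frozen into this weight matrix
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, LIST_LEN),  # outputs a length-LIST_LEN, approximately sorted vector
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x = torch.rand(256, LIST_LEN)   # random unsorted lists
    y, _ = torch.sort(x, dim=1)     # ground-truth sorted lists
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(model(torch.rand(1, LIST_LEN)))  # plausible output for length-8 inputs
# model(torch.rand(1, LIST_LEN + 1))   # shape error -- not a wrong answer, no answer at all
```

A length-9 list doesn't get sorted badly; it doesn't get through the first matrix multiply at all. That is the UAT's fixed-length assumption showing up as an architectural fact.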
* Or am I wrong? Is there a theoretical result I don't know about?