Now it's clear that knowledge graphs are far inferior to deep neural nets, yet even so, few people can explain the _root_ reason why.
I don't think Lenat's bet was a waste. I think it was sensible given the information available at the time.
The decision to pursue the research largely in secret, as closed source, was, I think, a mistake.
If that is so, then symbolic AI does not scale easily, because you cannot feed it inconsistent information: in classical logic, a single contradiction lets you derive anything (the principle of explosion). Compare this to how humans and LLMs learn; neither has any problem with inconsistent information, and yet, statistically speaking, humans still easily produce "useful" information.
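To make the "cannot feed it inconsistent information" point concrete, here is a minimal sketch of propositional resolution (toy code of my own, nothing from Cyc or any particular prover). With a consistent knowledge base the queries behave sensibly; add one contradictory fact and the refutation procedure will "prove" any query at all:

```python
from itertools import combinations

def negate(lit):
    """Flip the sign of a propositional literal, e.g. "flies" <-> "~flies"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (clauses are frozensets of literals)."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def entails(kb, query):
    """Refutation proof: kb entails query iff kb + ~query derives the empty clause."""
    clauses = {frozenset(c) for c in kb} | {frozenset({negate(query)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause: contradiction reached
                    return True
                new.add(r)
        if new <= clauses:           # fixpoint with no empty clause: no proof
            return False
        clauses |= new

# A consistent knowledge base behaves sensibly:
kb = [{"bird"}, {"~bird", "flies"}]   # bird is true; bird implies flies
print(entails(kb, "flies"))           # True
print(entails(kb, "swims"))           # False

# One contradictory fact, and everything becomes "provable" (explosion):
kb.append({"~flies"})
print(entails(kb, "swims"))           # True -- from a contradiction, anything follows
```

A statistical learner, by contrast, treats contradictory evidence as noise to average over, which is one way of reading why LLM-style training scales where hand-curated logic did not.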