I think what really stopped Cyc from gaining wider traction is its closed nature[0]. People do use Princeton WordNet, which is available for free, even though it's a mess in many respects. The mentality here is similar to that of commercial Common Lisp implementations, and the underlying culture is the same (old-school 80s AI). These projects were shaped by the mindset that major progress in computing would come from huge government grants and plans[1]. However you interpret the last 30 years, that's not exactly how it played out. These companies may well earn money for their owners, but they have no industry-wide impact.
I was half-tempted once or twice to use something like Cyc in a project, but it would probably have been too much organizational hassle. In particular, if it turned out to be commercial, I wouldn't want to depend on someone else's licensing and financial whims when that can be avoided.
[0] There was OpenCyc for a time, but it was scrapped.
[1] Compare https://news.ycombinator.com/item?id=20569098
[Edit] Here's a wider overview: https://en.wikipedia.org/wiki/Knowledge_representation_and_r...
Wikidata is also worth considering for that task. It is:
* Directly linked from Wikipedia [1]
* The data source for many infoboxes [2]
* Seeded with data from Wikipedia
* More active and better integrated with its community
* Larger in total number of concepts
Wikidata also has initiatives in lexicographic data [3] and images [4, 5].
On the subject of Cyc: the CycL "generalization" (#$genls) predicate inspired Wikidata's "subclass of" property [6], which now links together Wikidata's tree of knowledge.
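As a rough illustration of what a #$genls / "subclass of" (P279) style relation buys you — using a tiny hypothetical graph, not Wikidata's real contents — the relation can be walked transitively to recover every generalization of a concept:

```python
from collections import deque

# Hypothetical sample data: each concept maps to its direct superclasses,
# in the spirit of CycL's #$genls or Wikidata's P279 ("subclass of").
SUBCLASS_OF = {
    "golden retriever": ["dog"],
    "dog": ["mammal"],
    "mammal": ["animal"],
    "animal": ["organism"],
}

def all_superclasses(concept):
    """Breadth-first walk over 'subclass of' edges, returning every
    class the concept transitively specializes."""
    seen, queue = set(), deque([concept])
    while queue:
        for parent in SUBCLASS_OF.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(all_superclasses("golden retriever")))
# ['animal', 'dog', 'mammal', 'organism']
```

In Wikidata proper you would express the same traversal as a SPARQL property path over P279 rather than an in-memory dictionary.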
---
1. See "Wikidata" link at left in all articles, e.g. https://en.wikipedia.org/wiki/Knowledge_base
2. https://en.wikipedia.org/wiki/Category:Infobox_templates_usi...
3. https://www.wikidata.org/wiki/Wikidata:Lexicographical_data/...
4. https://www.wikidata.org/wiki/Wikidata:Wikimedia_Commons/Dev...
5. See "Structured data" tab in image details on Wikimedia Commons, e.g. https://commons.wikimedia.org/wiki/File:Mona_Lisa,_by_Leonar...
6. https://www.wikidata.org/wiki/Property_talk:P279#Archived_cr...
That was my motivation for writing Hode[1], the Higher-Order Data Editor. It lets you represent arbitrarily nested relationships, of any arity (number of members). It lets you cursor around data to view neighboring data, and it offers a query language that is, I believe, as close as possible to ordinary natural language.
(Hode has no inference engine, and I don't call it an AI project -- but it seems relevant enough to warrant a plug.)
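To give a feel for what "arbitrarily nested relationships of any arity" means — this is a made-up sketch, not Hode's actual data model or syntax — a relationship can be modeled as a template plus members, where each member is either an atom or another relationship:

```python
from dataclasses import dataclass

# Hypothetical representation (not Hode's internals): a Rel pairs a
# phrase template with its members; members may themselves be Rels,
# giving arbitrary nesting, and arity is simply the member count.
@dataclass(frozen=True)
class Rel:
    template: tuple  # joint phrases, e.g. ("", "needs", "")
    members: tuple   # atoms (str) or nested Rel values

    def arity(self):
        return len(self.members)

# "Bob needs a nap" -- a binary relationship between two atoms.
needs = Rel(("", "needs", ""), ("Bob", "a nap"))

# "(Bob needs a nap) because (Bob worked late)" -- a binary
# relationship whose members are themselves relationships.
because = Rel(("", "because", ""),
              (needs, Rel(("", "worked", ""), ("Bob", "late"))))

print(because.arity())  # 2
```

A ternary relationship like "Alice gave a book to Bob" would just be a `Rel` with a three-slot template and three members.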
My own Hode, described in an earlier comment[2], makes it easy for anyone who speaks some natural language to enter and query arbitrary structured data.
[1] https://en.wikipedia.org/wiki/Attempto_Controlled_English
This type of thing usually comes through unplanned breakthroughs. You can't discover that the earth revolves around the sun just by paying tons of money to researchers and asking them to figure out astronomy. All that would get you is some extremely sophisticated epicycle-based models.
Of course, I don't recall them mentioning any of the more dystopian things it could be (and, it sounds like, has been) used for :/
* https://news.ycombinator.com/item?id=21784105
On second thought, it might have been an Alan Kay presentation. I couldn't find that either, but while looking I did find this interesting Wired article from 2016:
https://www.wired.com/2016/03/doug-lenat-artificial-intellig...
2) How far are we from real self-evolving cognitive architectures with self-awareness features? Is it a question of years or months, or is it already a solved problem?
3) Does it make sense to use embeddings like https://github.com/facebookresearch/PyTorch-BigGraph to achieve better results?
4) Why did Cycorp decide, at some point, to limit communication and collaboration with the scientific community and AI enthusiasts?
5) Did you try to solve the GLUE / SuperGLUE / SQuAD challenges with your system?
6) Does Douglas Lenat still contribute actively to the project?
Thanks