It's not without some flaws, though. The continual harping on how secretive Cyc was doesn't seem fair: with the improbable exception of Meta, none of the currently leading NN-based LLMs have revealed their "source code" (their weights) either. I also don't agree that Lenat's single-mindedness was a bad thing (I'm glad somebody explored this unpopular concept), nor even that he was necessarily wrong in believing that building a huge, curated fact base is a path to AGI. The graph showing the exponential growth in the number of assertions in Cyc over time, despite a roughly constant headcount, points towards the possibility that a critical threshold was nearby. In fact, the history of NNs is eerily similar: at first they looked promising, then they were abandoned, and then, against all prevailing wisdom at the time, they were found to perform very well if you simply scaled them far more than you "should".
There are others, but OLMo is the most recent and most competitive.