That seems like pretty small potatoes compared to how much has been spent on LLMs these days.
Or to put it another way: if global funding for LLM development had been capped at $200m, how many of them would even exist?
That came out in 2009, correct? I wonder how much was spent on LLMs up to that point.
> In contrast, Yuxi goes through the handful of described Cyc use-cases from their entire history, and it's not impressive.
They're also not humble. Maintain a semantic database of terrorist cells? Create a tutoring AI? These seem closer to the things that LLMs are currently being used for, with middling success, after vastly more money has been pumped into the field.
Whereas most of the uses you describe for early LLMs are far more humble (spelling error detection and correction, text compression), and also a lot more successful.
Which makes me think that Cyc went straight for the big targets and fell on its face, rather than spending a few decades building up more modest accomplishments first. In hindsight the modest route would obviously have been the better strategy, and honestly, it feels like it would have been the obviously better strategy without hindsight too. I don't know why Cyc went the way it did.