1. Recognizing that AI was a scale problem.
2. Understanding that common sense was the core problem to solve.
Although you say Cyc couldn't do common sense reasoning, wasn't that actually a major feature they liked to advertise? IIRC a lot of Cyc demos were various forms of common sense reasoning.
I once played around with OpenCyc back when that was a thing. It was interesting because they'd had to solve a lot of problems that smaller, more theoretical systems never did. One of their core features is called microtheories. The idea of a knowledge base is that it's internally consistent, so formal logic can be performed over it, but real-world knowledge isn't like that. Microtheories let you encode contradictory knowledge about the world in separate contexts, layered on top of a more consistent foundation.
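Roughly, the layering works like this. Below is a toy sketch of the idea in Python (my own illustrative names, not Cyc's actual CycL language or API): each microtheory holds its own assertions and defers to a parent context for anything it doesn't override, so mutually contradictory facts can coexist as long as each individual context stays consistent.

```python
class Microtheory:
    """Toy context: local assertions layered over an optional parent context."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.facts = {}  # proposition -> truth value within this context

    def assert_fact(self, proposition, value=True):
        self.facts[proposition] = value

    def holds(self, proposition):
        """Look up a proposition, deferring to the parent if locally unknown."""
        if proposition in self.facts:
            return self.facts[proposition]
        if self.parent is not None:
            return self.parent.holds(proposition)
        return None  # unknown in this context

# A shared base theory and two specializations that contradict each other:
base = Microtheory("BaseKB")
base.assert_fact("birds-can-fly")

fiction = Microtheory("FictionMt", parent=base)
fiction.assert_fact("animals-can-talk")

biology = Microtheory("BiologyMt", parent=base)
biology.assert_fact("animals-can-talk", False)
biology.assert_fact("penguins-can-fly", False)

print(fiction.holds("animals-can-talk"))  # True
print(biology.holds("animals-can-talk"))  # False
print(biology.holds("birds-can-fly"))     # True, inherited from BaseKB
```

Reasoning only ever happens inside one context chain, which is also how (as noted below) microtheories helped constrain computational complexity.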
A major and fundamental problem with the Cyc approach was that the core algorithms don't scale to large knowledge bases. Microtheories were also a way to constrain the computational complexity. LLMs work partly because people found ways to make them scale on GPUs. There's no equivalent for Cyc's predicate-logic algorithms.
I never got to try it myself, but no doubt it worked fine in those cases where correct inferences could be made based on the knowledge/rules it had! Similarly GPT-4 is extremely impressive when it's not bullshitting!
The brittleness in either case (Cyc or LLMs) comes mainly from incomplete knowledge (unknown unknowns), causing an invalid inference that the system has no way to detect and correct. The fix is a closed-loop system where incorrect outputs (predictions) are detected, prompting exploration and learning.
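To make the closed-loop idea concrete, here's a deliberately minimal sketch (hypothetical names; neither Cyc nor any LLM works this way): predict from stored knowledge, compare against observation, and on a mismatch revise the knowledge so the same invalid inference isn't repeated.

```python
# Naive stored rule: all swans are white.
knowledge = {"swan": "white"}

def predict(entity):
    """Make a prediction from current knowledge."""
    return knowledge.get(entity, "unknown")

def observe_and_learn(entity, observed):
    """Close the loop: detect a prediction error and correct the knowledge."""
    if predict(entity) != observed:
        knowledge[entity] = observed  # error detected -> learn
        return "corrected"
    return "confirmed"

print(predict("swan"))                     # white
print(observe_and_learn("swan", "black"))  # corrected
print(predict("swan"))                     # black
```

The open-loop system would keep emitting "white" forever; the loop is what turns a wrong output into a learning signal.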
I don't know if Cyc tried to do it, but one potential speed-up for a system of that nature might be chunking, a strategy that another GOFAI system, SOAR, used successfully. It's a bit like memoization (remembering the results of work already done) as a way to optimize dynamic-programming solutions.
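The memoization analogy in miniature: cache the result of each sub-derivation so repeated work over the same subgoal becomes a lookup instead of a re-derivation. (SOAR's chunking is richer than this, compiling solved subgoals into new production rules, but the performance intuition is the same.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this naive recursion is exponential;
    # caching each subresult makes it effectively linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed with only ~40 distinct subcalls
```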