> It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.
This is hugely problematic: if the distilled premises are wrong, every conclusion drawn from them is suspect, no matter how sound the reasoning engine.
LLMs can play many useful roles in this space, but their output cannot be trusted without significant verification and validation.
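To make the point concrete, here is a minimal, hypothetical sketch of the "validate before asserting" idea: candidate facts extracted from an LLM are only added to the knowledge base if they pass an explicit consistency check. All names, the `Triple` representation, and the toy consistency rule are illustrative assumptions, not any real Cyc or LLM API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

def is_consistent(candidate: Triple, kb: set) -> bool:
    """Toy check: reject a candidate that contradicts a 'not_' fact already in the KB."""
    negated = Triple(candidate.subject, "not_" + candidate.predicate, candidate.obj)
    return negated not in kb

def assert_validated(candidates: list, kb: set) -> list:
    """Assert only candidates that pass validation; return the rest for human review."""
    rejected = []
    for c in candidates:
        if is_consistent(c, kb):
            kb.add(c)
        else:
            rejected.append(c)
    return rejected

if __name__ == "__main__":
    kb = {Triple("penguin", "not_can", "fly")}
    # Imagine these came from prompting an LLM to emit subject/predicate/object facts.
    llm_output = [
        Triple("penguin", "is_a", "bird"),
        Triple("penguin", "can", "fly"),  # plausible-sounding but false premise
    ]
    rejected = assert_validated(llm_output, kb)
    print("asserted:", kb)
    print("needs human review:", rejected)
```

A real pipeline would need far stronger checks than a single contradiction lookup, but the shape is the same: the LLM proposes, and something else (a theorem prover, curated ontology, or human reviewer) disposes before anything enters the logical store.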