One day, the rapid advancement of AI via LLMs will slow down, and attention will return to logical reasoning and knowledge representation as championed by the Cyc Project, Cycorp, its Cyclists, and Dr. Doug Lenat.
Why? If NN inference were really so fast, we would compile C programs with it instead of relying on the deductive logical inference that a compiler executes efficiently and deterministically.
What is the point of all that data collecting dust and accomplishing very little?
This is hugely problematic. If you get the premises wrong, false conclusions will follow no matter how valid the deduction.
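To make that concrete, here is a minimal sketch in plain Python of forward-chaining deduction over a hypothetical two-rule knowledge base (the rules and facts are illustrative, not drawn from any real system): every inference step is valid, yet one overly strong premise, "all birds fly", yields conclusions that are false for a penguin.

```python
# Minimal forward-chaining sketch with a hypothetical, deliberately flawed KB.
# The deduction itself is sound; the error comes entirely from the premises.

rules = [
    # (antecedents, consequent)
    ({"is_bird(tweety)"}, "can_fly(tweety)"),        # too strong: penguins are birds
    ({"can_fly(tweety)"}, "can_escape_cat(tweety)"),
]

facts = {"is_bird(tweety)"}  # Tweety is in fact a penguin

# Apply rules until no new facts are derived (a fixpoint).
changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)
# {'is_bird(tweety)', 'can_fly(tweety)', 'can_escape_cat(tweety)'}
# Valid derivation, false conclusions: the first premise was wrong.
```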
LLMs can play many roles around this area, but their output cannot be trusted without significant verification and validation.
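As a rough illustration of what that verification could look like, the sketch below treats the LLM as a proposer of assertions and a deterministic checker as the gatekeeper into a knowledge base. The function names, the claim syntax, and the toy contradiction check are assumptions made for this example, not the API of Cyc or any other system.

```python
# Hypothetical sketch: an LLM-proposed claim is admitted to the KB only if it
# passes a deterministic consistency check. Names and constraints are illustrative.

def violates_constraints(claim: str, kb: set[str]) -> bool:
    """Toy check: reject a claim whose negation is already known."""
    negation = claim[4:-1] if claim.startswith("not(") else f"not({claim})"
    return negation in kb

def admit_if_verified(llm_claim: str, kb: set[str]) -> bool:
    """Add an LLM-proposed claim only if it survives verification."""
    if violates_constraints(llm_claim, kb):
        return False          # rejected: contradicts existing knowledge
    kb.add(llm_claim)         # accepted: provisionally trusted
    return True

kb = {"is_penguin(tweety)", "not(can_fly(tweety))"}

# An LLM might confidently assert that Tweety flies; verification catches it.
print(admit_if_verified("can_fly(tweety)", kb))              # False: contradiction
print(admit_if_verified("lives_in(tweety, antarctica)", kb)) # True: admitted
```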