Of interest: https://en.wikipedia.org/wiki/Neats_vs._scruffies
I find it interesting because Minsky did a lot of the foundational work in neural network research, yet he identified with the opposite end of the neat/scruffy spectrum from most NN researchers today. Much as with Bayes, I think there is immense wisdom in his research that won't even be acknowledged as wisdom for decades.
EDIT: You can get a little intro to his thoughts on the matter starting at about 27:16 in this video [1] (linked at the time marker). If you watch for about ten minutes, you'll see him demonstrate some of the difficulties of using a single abstraction for something as complex as human intelligence.