But also, part of the magic of good-enough pretrained representations is that you don't need to train them further for downstream tasks, which means non-differentiable tasks like logic could soon become more tenable.
But with the recent advances and demonstrations, it seems more likely today than in 2019 that our current computational resources are sufficient to perform magnificently spooky stuff if they're used correctly. They are doing that already, and that's without deliberately making the software do anything except draw from a vast pool of examples.
I think it's reasonable, based on this, to update one's expectations of what we'd be able to do if we figured out ways of doing things that aren't based on first seeing a hundred million examples of what we want the computer to do.
Things that do this can obviously exist; we are living examples. Does figuring it out seem likely to be many decades away?
For example, the discovery that language models get far better at answering complex questions when asked to show their working step by step with chain-of-thought reasoning, as on page 19 of the PaLM paper [1]. The explanations of novel jokes on page 38 of the same paper are also worth checking out. While it is, as you say, all statistics, if the result is indistinguishable from valid reasoning, then perhaps it doesn't matter.
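To make that concrete, here's a minimal sketch of what chain-of-thought prompting looks like (the `complete` function is a hypothetical stand-in for whatever model API you have; the worked example follows the style used in the paper):

    # Minimal chain-of-thought prompting sketch.
    # `complete` is a hypothetical placeholder for a real model call.
    def complete(prompt: str) -> str:
        raise NotImplementedError("wire up your LLM of choice here")

    question = ("Roger has 5 tennis balls. He buys 2 more cans of "
                "3 tennis balls each. How many tennis balls does he have?")

    # Direct prompt: the model answers immediately and often fails
    # on multi-step arithmetic.
    direct = f"Q: {question}\nA:"

    # Chain-of-thought prompt: one worked example showing intermediate
    # steps nudges the model to reason out loud before answering.
    cot = (
        "Q: A juggler has 16 balls. Half are golf balls, and half of the "
        "golf balls are blue. How many blue golf balls are there?\n"
        "A: Half of 16 is 8 golf balls. Half of 8 is 4 blue golf balls. "
        "The answer is 4.\n\n"
        f"Q: {question}\nA:"
    )

The only difference is the worked example in the prompt; no retraining or fine-tuning is involved, which is part of what made the result surprising.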
I'm not an AGI skeptic. I'm just a bit skeptical that the topic of this thread is the path forward; it seems to me like an exotic detour.
And of course, intelligence isn't magic. We're producing new intelligent entities at a rate of about five per second globally, every day.
> Does figuring it out seem likely to be many decades away?
1-7?