>>Abraha+o81
ChatGPT (an instruction-tuned autoregressive language model) indeed already seems quite general (it holds its own in conversational Turing tests without faking it the way ELIZA did), even if its absolute intelligence is limited. Generality and intelligence are not the same thing: a system can be quite narrow but very intelligent (AlphaGo), or quite general but dumb overall (a small kid, an insect).
Okay, ChatGPT is only text-to-text, but Google & Co. are adding more modalities now, including images, audio and robotics. I think one missing step is to fuse the training and inference regimes into one, just as in animals; a toy sketch of what I mean follows. That probably requires something other than the usual transformer-based token predictors.
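To make the "fused training and inference" point a bit more concrete, here is a toy sketch in plain Python: a model whose weight update happens inside the same call that produces its prediction, instead of a separate pretraining phase followed by frozen inference. Everything here (the OnlineLinearModel class, the interact method, the learning rule) is made up for illustration and has nothing to do with how ChatGPT is actually built; it's just the smallest example I can think of where acting and learning are one loop, the way they are in animals.

```python
# Toy sketch of "fused training and inference": the model updates its weights
# inside the same call that produces a prediction, instead of being pretrained
# once and then frozen. Pure illustration; all names are made up and this is
# not how ChatGPT or transformer-based token predictors work.

import random

class OnlineLinearModel:
    """A 1-D linear model y = w*x + b that keeps learning while it predicts."""

    def __init__(self, lr=0.05):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def interact(self, x, target=None):
        y = self.w * x + self.b           # inference step
        if target is not None:            # training step, fused into the same call:
            err = y - target              # if feedback arrives with the input,
            self.w -= self.lr * err * x   # take one SGD step on squared error
            self.b -= self.lr * err
        return y

if __name__ == "__main__":
    model = OnlineLinearModel()
    # The "environment" supplies inputs plus occasional feedback, roughly
    # like an animal learning continuously from its own experience.
    for _ in range(2000):
        x = random.uniform(-1.0, 1.0)
        model.interact(x, target=3.0 * x + 1.0)
    print(f"learned w={model.w:.2f}, b={model.b:.2f} (true values: 3.00, 1.00)")
```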