Good lord, we are screwed. And yet somehow I bet even this [1] isn't going to kill off the "they're just statistical interpolators" meme.
[1] https://www.deepmind.com/blog/tackling-multiple-tasks-with-a...
They’re all fundamentally anthropocentric: people argue until they are blue in the face about what “intelligent” means, but it’s always implicit that what they really mean is “how much like me is this other thing?”
Language models, even more so than the vision models that got them funded, have empirically demonstrated that knowing the probability of two things being adjacent in some latent space is, at the boundary, indistinguishable from creating and understanding language.
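To make that concrete: "the probability of two things being adjacent" is literally the quantity a causal language model computes at every step. A minimal sketch of querying that distribution, assuming the Hugging Face transformers library and the public gpt2 checkpoint (both my choice of example, not anything specific to this thread):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Distribution over the next token, given everything before it.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top_probs, top_ids = next_token_probs.topk(5)
    for p, i in zip(top_probs, top_ids):
        print(f"{tokenizer.decode(i)!r}  p={p.item():.3f}")

Whether you call that distribution "understanding" is exactly the argument above; the point is that nothing else is going on under the hood.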
I think the burden is on the bright hominids with both a reflexive language model and a sex drive to explain their pre-Copernican, unique place in the theory of computation rather than vice versa.
A lot of these problems just aren’t problems anymore if performance on tasks supersedes “consciousness” as the thing we’re studying.
I think it’s to everyone’s benefit if we start planning for a world where a significant portion of the experts are stubbornly wrong about AGI. As a technology, generally intelligent ML has the potential to change so many aspects of our world. The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.
Again, I think we should consider "The Human Alignment Problem" more in this context. The transformers in question are large, heavy and not really prone to "recursive self-improvement".
If the ML-AGI works out in a few years, who gets to enter the prompts?
For image generation, it's obviously all fiction. Which is fine and mostly harmless if you know what you're getting. It's going to leak out onto the Internet, though, and there will be photos that get passed around as real.
For text, it's all fiction too, but this isn't obvious to everyone because sometimes it's based on true facts. There's often not going to be an obvious place where the facts stop and the fiction starts.
The raw Internet is going to turn into a mountain of this stuff. Authenticating information is going to become a lot more important.
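One concrete shape "authenticating information" could take is publishers cryptographically signing what they put out, so provenance survives reposting. A minimal sketch using Python's cryptography package (my choice of library, purely illustrative):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # A publisher generates a long-lived keypair and signs each artifact.
    key = Ed25519PrivateKey.generate()
    public_key = key.public_key()

    article = b"the exact bytes of the photo or article"
    signature = key.sign(article)

    # Anyone holding the public key can check the bytes weren't altered.
    try:
        public_key.verify(signature, article)
        print("authentic")
    except InvalidSignature:
        print("tampered or unsigned")

The hard part isn't the crypto, of course; it's key distribution and getting anyone to check signatures at all.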
All of these models seem to require a human to evaluate and edit the results. Even Copilot. In theory this will reduce the number of human hours required to write text or create images, but I haven't seen anyone doing that successfully at scale or solving the associated problems yet.
I'm pessimistic about the current state of AI research. It seems like it's been more of the same for many years now.
... ... ...
Obviously "/s", obviously joking, but meant to highlight that there are a few parties that would all answer "me" and truly mean it, often not in a positive way.
We can worry about two things at once. We can be especially worried that at some point (maybe decades away, potentially years away), we'll have both nuclear weapons and rampant AGI.