I'm not suggesting we won't see ASI in some distant future, maybe 100+ years away. But to suggest we're even within a decade of having ASI seems silly to me. Maybe there's research I haven't read, but as a daily user of AI, it seems hilarious to me that people are existentially concerned about it.
Maybe it doesn't seem that silly to others? I mean, you're not really making an argument here. I also use GPT daily and I'm definitely worried. It seems to me that we're pretty close to a point where a system using GPT as a strategy generator can "close the loop" and generate its own training data on a short timeframe. At that point, all bets are off.
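To be concrete about what I mean by "closing the loop", here's a toy sketch: a generator samples candidate outputs, an evaluator keeps the best ones, and the model retrains on its own best outputs. Everything here is hypothetical and deliberately simplified; the "model" is just a number guesser, not an LLM.

    import random

    class ToyModel:
        """Stand-in for a strategy generator: samples guesses around a learned mean."""
        def __init__(self, mean=0.0):
            self.mean = mean

        def sample(self):
            return self.mean + random.gauss(0, 1.0)

        def fine_tune(self, data):
            # "Training" here just means moving toward the average of the kept samples.
            self.mean = sum(data) / len(data)

    def score(candidate, target=10.0):
        """Toy evaluator: higher is better, peaks when the candidate hits the target."""
        return -abs(candidate - target)

    def closed_loop(iterations=20, n_candidates=32, keep=4):
        model = ToyModel()
        for i in range(iterations):
            candidates = [model.sample() for _ in range(n_candidates)]
            best = sorted(candidates, key=score, reverse=True)[:keep]
            model.fine_tune(best)  # the loop closes: the model trains on data it generated
            print(f"iter {i}: mean={model.mean:.2f}")
        return model

    if __name__ == "__main__":
        closed_loop()

Obviously a real system would use an LLM as the generator and something far richer than a scalar evaluator, but the structure, sample / select / retrain on your own outputs, is the part that worries me: once the evaluation is automated, the iteration speed is no longer limited by humans.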
I have two toddlers. This is within their lifetimes no matter what. I think about this every day because it affects them directly. Some of the bad outcomes of ASI involve what’s called s-risk (“suffering risk”) which is the class of outcomes like the one depicted in The Matrix where humans do not go extinct but are subjugated and suffer. I will do anything to prevent that from happening to my children.
Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve into the AGI that everyone fears.
But when you go beyond that, it becomes difficult to ignore trend lines.
Here's a detailed scenario breakdown of how it might come to be: https://www.dwarkeshpatel.com/p/carl-shulman