There’s a very real and significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company, highly focused on the alignment problem, with a charter that would firmly put the needs of humanity over its own pocketbook. Instead they’ve closed up and become your standard company looking to make themselves ultra wealthy, and an extra vicious, “win at any cost” one at that. This, plus their AI alignment people leaving in droves (and being muzzled on the way out), should be scary to pretty much everyone.
Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).
We tend to overestimate short-term progress, but underestimate medium- and long-term progress.
Back then, I said that the future of self-driving is likely to be the growth in capability of "driver assistance" features toward an asymptote that we will eventually re-define as "level 5" in the distant future (or perhaps the "levels" will be memory-holed altogether, only to reappear in retrospective "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.