Not rational if (and unlike Sutskever, Hinton, and Bengio) you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular treats it as a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.
(Like LeCun, I am not a doomer; but I am also no Hinton, so I can't claim to know any better.)
The definition of AGI always puzzles me, because the "G" in AGI stands for "general", and that word certainly doesn't play well with "narrow". AGI is just a new buzzword, I guess.
I think the guy you're replying to misunderstood the article he's alluding to, though. They don't claim anything about a narrow AGI.