If nobody understands how an LLM is able to achieve its current level of intelligence, how is anyone so sure that this intelligence is definitely going to increase exponentially until it's better than a human?
There are real existential threats that we know will happen one day (a meteor impact, a supervolcano, etc.), and I believe treating AGI as the same class of "not if, but when" threat is categorically wrong. Furthermore, I think many of the people leading the effort to frame it this way are doing so out of self-interest rather than public concern.
And if we're going to put gobs of money and brainpower into attempting to make superhuman AI, it seems like a good idea to also put serious effort into making it safe. It would be better to have safe but kind of dumb AI than unsafe superhuman AI, so our funding priorities appear to be backwards.