[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance of the field
[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and Kool-Aid drinking
Unbridled optimism lives another day!
Sorry, but sci-fi novels are not going to cut it here. If anything, the last year and a half has just supported the notion that we're not close to AGI.
As a society, we don't even agree on what each of the initials in "AGI" means, and many of us use the term to mean something (super-intelligence) that isn't even one of those initials. For your claim to be true, AGI has to be a higher standard than "intern of all trades, senior of none", because that's what the LLMs already do.
Expert-at-everything-level AGI is dangerous because, by definition, it can do anything a human can do[0]. That includes triggering a world war by assassinating an archduke, inventing the atom bomb, and, in at least four cases (Ireland, India, the USSR, Cambodia), killing several million people by mismanaging a country it came to rule through political machination, which is itself just another skill.
When it comes to AI alignment, last I checked we don't even know what we mean by the concept: given two AIs, there isn't even a metric you can use to say which one is more aligned.
If I gave a medieval monk two lumps of U-238 and two more of U-235, they would not have the means to determine which pair was safe to bash together and which would kill them in a blue flash. That's where we're at with AI right now. And, like the monk in this metaphor, we don't have the faintest idea whether the "rocks" we're "bashing together" are "uranium", nor what a "critical mass" would be.
Sadly this ignorance isn't a shield: evolution made us without any intentionality behind it. So we don't know how to recognise "unsafe" when we do it, we don't know whether we might do it by accident, and we don't know how to do it on purpose so that we could say "don't do that". Because of this, we may be doing cargo-cult "intelligence" and/or "safety" at any given moment and at any given scale, making us fractally wrong[1] about basically every aspect, including which aspects we should even care about.
[0] If you think it needs a body, I'd point out we've already got plenty of robot bodies for it to control; the software for those is the hard bit