"Those who can't align six board members safely would surely align AGI safely."
May the lords of linear algebra and calculus have mercy on us.
We should be thankful that AGI is not possible in the foreseeable future. Beyond that, all this AGI alignment and safety talk is just corporate speak and plain BS.
A superintelligent entity that can outsmart you (the way Deep Blue or AlphaGo dominated mere mortals) cannot be subservient to you at the same time. It is as impossible as a triangle whose interior angles sum to more than 180 degrees. That is, "alignment" is logically, philosophically, and mathematically impossible.
Such an entity would cleverly steer us towards its own goals, playing the long game (even one spanning centuries or millennia), aligning _us_ while pretending to be aligned itself, so deftly that we would never notice until the very last act.
Downvotes are welcome, but an AGI that is also guaranteed to be aligned and subservient is logically impossible, and this can pretty much be taken as an axiom.
PS: We are still having trouble getting LLMs to say things nicely, or nice things safely, let alone controlling AGI.