So it’s not that “an AI” becomes superintelligent; what we actually seem to have is an ecosystem of blended human and artificial intelligences (including corporations!), which constitutes a distributed cognitive ecology of superintelligence. This is very different from what they discuss.
This has implications for alignment, too. It isn’t so much about aligning AI to people; rather, both humans and AI need to find alignment with nature. There is a kind of natural harmony in the cosmos; that’s what superintelligence will likely align to, naturally.
I do agree they don't fully explore the implications. But they do consider things like coordination amongst many agents.
And each chat is not autonomous but integrated with other intelligent systems.
So, with more multiplicity, I think things work differently. More ecologically. For better and worse.