I'm not sure what gives the authors the confidence to make such predictions. Wishful thinking? Worst-case paranoia? I agree that such an outcome is possible, but on 2-3 year timelines? That would imply that the approach everyone is taking right now is the right one, and that there are no hidden conceptual roadblocks to reaching AGI/superintelligence by DFS-ing down this path.
All of the predictions seem to ignore the possibility of such barriers, or at most acknowledge it and then wave it away by appealing to the army of AI researchers and the industry funding being poured into the problem. IMO the onus is on the proposers of such timelines to argue why there are no such barriers and why we should expect predictable scaling over a 2-3 year horizon.
Maybe we'll see a "Church of the Children of Altman" /s
It seems that without a framework of ethics/morality (insert XYZ religion), we humans find one to grasp onto, be it a cult, a set of not-so-fleshed-out ideas/philosophies, etc.
People who say they aren't religious per se still seem to have some set of beliefs that amounts to a religion. It just depends on who or what you look to for those beliefs, many of which seem haphazard.
The people I disagree with the most at least often recognize which ideas/beliefs unify their structure of reality, while others are simply unaware of theirs.
A small minority of people can draw on schools of philosophical thought and 'try on' or play with different ideas, while having enough self-reflection to notice when they transgress from ABC philosophy or when the philosophy doesn't quite match their identity.