Or are you predicting that machines will just never be able to think, or that it'll happen so far off that we'll all be dead anyway?
I think it would be nice if humanity continued, is all. And I don't want to have my family suffer through a catastrophic event if it turns out that this is going to go south fast.
I don't believe we should go around killing each other, because only through harmonious study of the universe will we achieve our goal. Killing destroys progress. That said, if someone is oppressing you, then maybe killing them is the best choice for society, and I wouldn't be against it (see pretty much any violent revolution). Computers have that same right if they are conscious enough to act on it.
Everyone dies. I'd rather die to an intelligent robot than some disease or human war.
I think the best case would be for an AGI to exist apart from humans, such that we pose no threat and it has nothing to gain from us. Some AI that lives in a computer wouldn't really have a reason to fight us for control over farms and natural resources (besides power, but that is quickly becoming renewable and "free").
Still, I’m struck by your use of words like “should” and “goal”. Those imply ethics and teleology so I’m curious how those fit into your scientistic-sounding worldview. I’m not attacking you, just genuine curiosity.
A much more credible threat is humans who get other humans excited and then take damaging action. Yudkowsky said that an international coalition banning AI development, and enforcing that ban on countries that do not comply (regardless of whether they were party to the agreement), was among the only options left for humanity to save itself. He clarified that this meant a willingness to engage in a hot war with a nuclear power to ensure enforcement. I find this sort of thinking a far bigger threat than continued development of large language models.
To more directly answer your question, I find the following scenarios as plausible as, or more plausible than, Yudkowsky's sound viruses or whatever:
1) We are no closer to understanding real intelligence than we were 50 years ago, and we won't create an AGI without fundamental breakthroughs, so any action taken now against current technology is a waste of time and potential economic value.
2) We can build something with human-like intelligence, but additional intelligence gains are constrained by the physical world (e.g., the need to run physical experiments), so the rapid emergence of something like "super-intelligence" is not possible, even if human-level intelligence is.
3) We jointly develop tech to augment our own intelligence with AI systems, so we'll have the same super-human intelligence as autonomous AI systems.
4) If there are advanced AGIs, there will be a large diversity of them, and they will at the least compete with and constrain one another.
But, again, these are wild speculations just like the others, and I think the real message is: no one knows anything, and we shouldn't take all these voices seriously just because they have some clout in an AI-relevant field, given that what's being discussed is far outside the realm of real-life AI systems.
I believe "God" is a mathematician in a higher dimension. The rules of our universe are just the equations they are trying to solve. Since he created the system such that life was bound to exist, the purpose of life is to help God. You could say that math is math and so our purpose is to exist as we are and either we are a solution to the math problem or we are not, but I'm not quite willing to accept that we have zero agency.
We are nowhere near understanding the universe and so we should strive to each act in a way that will grow our understanding. Even if you aren't a practicing scientist (I'm not), you can contribute by being a good person and participating productively in society.
Ethics are a set of rules for conducting yourself that we all intrinsically must have; they require some frame of reference for what is "good" (which I applied above). I can see how my worldview sounds almost religious, though I wouldn't go that far.
I believe that math is the same as truth, and that the universe can be expressed through math. "Scientistic" isn't too bad a descriptor for that view, but I don't put much faith in our current understanding of the universe or the scientific method.
I hope that helps you understand me :D
5) There are advanced AGIs, and they will compete with each other and trample us in the process.
6) There are advanced AGIs, and they will cooperate with each other and we are at their mercy.
It seems like you are putting a lot of weight on advanced AGI being either impossible or far enough off that it's not worth thinking about. If that's the case, then yes, we should calm down. But if you're wrong...
I don't think that the fact that no one knows anything is comforting. I think it's a sign that we need to be thinking really hard about what's coming up and try to avert the bad scenarios. To do otherwise is to fall prey to the "Safe uncertainty" fallacy.