This is a very doomer take. The threats are real, and I'm certain some people feel this way, but eliminating large swaths of humanity is something dictatorships have tried in the past.
Waking up every morning means believing there are others who will cooperate with you.
Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
Tried, and succeeded at, in times when people held more power than they do today. I'm not sure what point you're trying to make here.
> Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
I agree that most of humanity has empathy for others — but it's been shown that the prevalence of psychopaths increases as you climb the leadership ladder.
Fear and hope are the responses of the passive. There are other routes to take.
If the many have access to the latest AI, there is less chance the masses are blindsided by some rogue tech.
Technology changes things, though. Things aren't "the same as it ever was". The Napoleonic Wars killed about 6.5 million people with muskets and cannons. The total warfare of WWII killed 70 to 85 million people with tanks, strategic bombers, aircraft carriers, and roughly 36 kilotons of TNT from the two atomic bombs, among other weaponry.
Total war today includes modern thermonuclear weapons. In 60 seconds, a single Ohio-class submarine can launch 80 independently targetable warheads, totaling over 36 megatons of TNT. That is over 20 times the total of all explosives used by all sides across all of WWII, including both atomic bombs.
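For anyone who wants to check that arithmetic, here is a minimal back-of-envelope sketch. The loadout and per-warhead yield are my assumptions, not published figures: 20 Trident II missiles with 4 warheads each, at a W88-class yield of roughly 455 kt per warhead. Actual loadouts vary and are not public.

    # Rough yield arithmetic for the claim above. Loadout numbers are
    # assumptions: 20 missiles x 4 warheads, ~455 kt per warhead.
    warheads = 20 * 4              # 80 warheads per boat (assumed loadout)
    yield_kt = 455                 # assumed kilotons per warhead (W88-class)
    wwii_atomic_kt = 15 + 21       # Hiroshima (~15 kt) + Nagasaki (~21 kt)

    sub_total_mt = warheads * yield_kt / 1000
    print(f"One submarine: ~{sub_total_mt:.1f} Mt")                 # ~36.4 Mt
    print(f"Both WWII atomic bombs combined: {wwii_atomic_kt} kt")  # 36 kt
    print(f"Ratio vs. the atomic bombs: ~{warheads * yield_kt / wwii_atomic_kt:.0f}x")  # ~1000x

So the ~36 megaton figure holds up under those assumptions; the "20 times all of WWII" multiplier depends on which estimate of total WWII munitions expenditure you use.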
AGI is a leap in power comparable to what thermonuclear weapons were for warfare. Humans have been trying to destroy each other for all of history, but we can only have one nuclear war, and it is likely we can only have one AGI revolt.
Like, if you're truly afraid of this, what are you doing here on HN? Go organize and try to do something about it.
It is the same with Gen AI. We will either find a way to control an entity that rapidly becomes orders of magnitude more intelligent than us, or we won’t. We will either find a way to prevent the rich and powerful from controlling a Gen AI that can build and operate anything they need, including an army to protect them from everyone without a powerful Gen AI, or we won’t.
I hope for a future of abundance for all, brought to us by technology. But I understand that some existential threats only need to turn the wrong way once, and there will be no second chance ever.
>It is the same with Gen AI. We will either find a way to control an entity that rapidly becomes orders of magnitude more intelligent than us, or we won’t. We will either find a way to prevent the rich and powerful from controlling a Gen AI that can build and operate anything they need, including an army to protect them from everyone without a powerful Gen AI, or we won’t
Okay, you've laid out two paths here. What are *you* doing to influence the course we take? That's my point. Enumerating all the possible ways humanity faces extinction is nothing more than doomerism if you aren't taking any meaningful steps to lessen the likelihood that any of them occur.