I theorise that revolution would be near-impossible in a post-AGI world. If you consider where power comes from, it's relatively obvious that people will likely suffer and die en masse if we ever create AGI.
Historically the general public has held the vast majority of power in society. 100+ years ago this was physical power – the state had to keep you happy or the public would come for it with pitchforks. But in an age of modern weaponry, the public today would pose little physical threat to the state.
Instead, in today's democracy, power comes from the public's collective labour and purchasing power. A government can't risk upsetting people too much because its power today is not a product of its standing army but of its economic strength. A government needs workers to create businesses and produce goods, and therefore the goals of government generally align with the goals of the public.
But in a post-AGI world, neither businesses nor the state needs workers or consumers. In this world, if you want something, you wouldn't pay anyone for it or hire workers to produce it; you would just ask your fleet of AGIs to get you the resource.
In this world people become more like pests. They offer no economic value, yet demand that the owners of AGI (whether public or private) share resources with them. If people revolted, any AGI owner would be far better off deploying a bioweapon to humanely kill the protestors than sharing resources with them.
Of course, this is assuming the AGI doesn't have its own goals and just sees the whole of humanity as a nuisance to be stepped over, in the same way humans will happily step over animals if they interfere with our goals.
Imo humanity has 10-20 years left max if we continue on this path. There can be no good outcome of AGI, because it wouldn't even make sense for the AGI, or those who control it, to be aligned with the goals of humanity.
I agree, but for a different reason. It's very hard to outsmart an entity with an IQ in the thousands and pervasive information gathering. For a revolution you need to coordinate. The Chinese government knows this very well, which is why it controls communication so closely (and why it had Apple restrict AirDrop). But its security agencies are still staffed by people with average IQs and hampered by the inefficient communication between them.
An entity that can collect all this information on its own, has a huge IQ to spot patterns, and doesn't have to convince other people in its organisation to take action will crush any fledgling rebellion before it can ever reach critical mass. We'll just be ants in an anthill, and it will be the boot that crushes us when it feels like it.
This is a very doomer take. The threats are real, and I'm certain some people feel this way, but eliminating large swaths of humanity is something dictatorships have tried in the past.
Waking up every morning means believing there are others who will cooperate with you.
Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
Tried, and succeeded at, in times when people held more power than they do today. Not sure what point you're trying to make here.
> Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
I agree that most of humanity has empathy for others — but it's been shown that the prevalence of psychopaths increases as you climb the leadership ladder.
Fear and hope are the responses of the passive. There are other routes to take.
Just as we can satisfy the hunting and retrieval instincts of dogs by throwing a stick, surely an AI that is 10,000 times more intelligent could devise a stick-retrieval task for humans that feels, from our perspective, like satisfying achievement and meaningful work.
(Leaving aside the question of whether any of that is a likely or desirable outcome.)
If the many have access to the latest AI then there is less chance the masses are blindsided by some rogue tech.
I feel the limitations of humans are quite a feature when you think about what the experience of life would be like if you couldn't forget, or could never experience things for the first time. If you already knew everything and could achieve almost anything with zero effort. It actually sounds…insufferable.
That will be quite a hard thing to pull off, even for some evil person with an AGI. Let's say Putin gets AGI and is actually evil and crazy enough to try to wipe people out. If he just targets Russians and starts killing millions of people daily with some engineered virus or similar, he'll have to fear a strike from the West, which would fear it's next (and rightfully so). If he instead tries to wipe out all of humanity at once to escape a second strike, he'll have to devise such a good plan that there won't be any second strike - meaning his "AGI" will have to be way better than all other competing AGIs (how, exactly?).
It would have made sense if all the "owners of AGI" somehow conspired to do this together, but there's not really such a thing as owners of AGI, and even if there were, the Chinese, Russian, and American owners don't trust each other at all and are also bound to their governments.
Technology changes things though. Things aren't "the same as it ever was". The Napoleonic wars killed 6.5 million people with muskets and cannons. The total warfare of WWII killed 70 to 85 million people with tanks, heavy bombers, aircraft carriers, and atomic bombs totaling 36 kilotons of TNT, among other weaponry.
Total war today includes modern thermonuclear weapons. In 60 seconds, just one Ohio-class submarine can launch 80 independent warheads, totaling over 36 megatons of TNT. That is over 20 times more than all the explosives used by all sides in all of WWII, including both atomic bombs.
AGI is a leap forward in power equivalent to what thermonuclear bombs were for warfare. Humans have been trying to destroy each other for all of history, but we can only have one nuclear war, and it is likely we can only have one AGI revolt.
Like if you're truly afraid of this, what are you doing here on HN? Go organize and try to do something about this.
It is the same with Gen AI. We will either find a way to control an entity that rapidly becomes orders of magnitude more intelligent than us, or we won’t. We will either find a way to prevent the rich and powerful from controlling a Gen AI that can build and operate anything they need, including an army to protect them from everyone without a powerful Gen AI, or we won’t.
I hope for a future of abundance for all, brought to us by technology. But I understand that some existential threats only need to turn the wrong way once, and there will be no second chance ever.
>It is the same with Gen AI. We will either find a way to control an entity that rapidly becomes orders of magnitude more intelligent than us, or we won’t. We will either find a way to prevent the rich and powerful from controlling a Gen AI that can build and operate anything they need, including an army to protect them from everyone without a powerful Gen AI, or we won’t
Okay, you've laid out two paths here. What are *you* doing to influence the course we take? That's my point. Enumerating all the possible ways humanity faces extinction is nothing more than doomerism if you aren't taking any meaningful steps to lessen the likelihood that any of them occur.