There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.
If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer, just as they refuse to relinquish it to more competent people. For the vast majority of people, AI poses no risk; if anything, it will only help reveal the incompetence of the ruling class.
I'm not sure this is true. If all the things people are doing are done so much more cheaply that they're almost free, that would be good for us, since we're the buyers as well as the workers.
However, I also doubt the premise.
> If this were true, intelligent people would have taken over society by now
The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as merely "smarter".
You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.
The whole justification for keeping consumers happy or healthy goes right out the window.
Same for human workers.
All that matters is that your robots and AIs aren't getting smashed by their robots and AIs.
This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from further increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of there being a higher "class" of intelligence is magical thinking, as it implies there could be a superlinear increase in the ability to solve NP-complete problems from a merely linear increase in computational power, which goes against the entirety of complexity theory.
It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.
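To make that scaling argument concrete, here's a toy back-of-the-envelope sketch (my own illustration, not from anyone upthread, and it assumes brute-force enumeration as the baseline): exhaustively checking an NP-complete instance costs exponential time in instance size, so even a 1000x jump in compute barely moves the size of problem you can solve.

    # Toy sketch: checking all assignments of n boolean variables costs
    # 2^n evaluations, so linearly more compute buys only logarithmically
    # larger instances.

    def brute_force_cost(n_vars: int) -> int:
        """Evaluations needed to exhaustively check an n-variable instance."""
        return 2 ** n_vars

    budget = brute_force_cost(30)  # baseline compute budget
    for multiplier in (1, 2, 4, 1000):
        n = 0
        while brute_force_cost(n + 1) <= budget * multiplier:
            n += 1
        print(f"{multiplier:>5}x compute -> {n} variables solvable")

    # Prints: 1x -> 30, 2x -> 31, 4x -> 32, 1000x -> 39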
Doesn't this tend to become "they're almost free to produce" while prices for end consumers never actually come down? From the sellers' point of view, it's just an opportunity to expand their margins instead.
Over the last ~50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1], and real personal (not household) income is up 150%[2].
It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!
Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.
[0] https://fred.stlouisfed.org/series/OPHNFB
[1] https://dqydj.com/sp-500-profit-margin/
[2] https://fred.stlouisfed.org/series/MEPAINUSA672N
This does not make sense to me. While a higher profit margin is a signal to others that they can earn money by selling equivalent goods and services at lower prices, it is not inevitable that they will be able to. And even if they are, it behooves a seller to take advantage of the higher margins while they can.
Earning less money now in the hopes of competitors being dissuaded from entering the market seems like a poor strategy.
It was the expectation of many people in the field in the 1980s, too.
So who is right?
Moreover, given such a magically powerful AI as you've described, the number one thing some rich, controlling asshole with more AI than you would do is create an army and take what they want, because AI does nothing to solve human greed.
Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).
What's more, human intelligence is tied to the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can 'win' by stabbing you, and your intelligence ceases to exist.
Since we don't have the level of AGI we're discussing here yet, it's hard to say what it will look like in its implementation, but I find it hard to believe it would mimic the human model of its intelligence being tied to one body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of super intelligent bees.
For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.
Pessimism isn't insight. There is no substitute for the hard work of "try and see."
They think this because it serves their interests: it attracts an enormous amount of attention and money to an industry that they personally seek to make millions of dollars from.
My money is squarely on environmental/climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like an AGI possibly could.
I don’t know if you’re conflating capability with consciousness, but frankly it doesn’t matter if the thing knows it’s alive if it still makes everyone obsolete.
We overestimate short-term progress but underestimate medium- and long-term progress.
And it _could_ be just one clever breakthrough away, and that could happen tomorrow, or it could be centuries away. There's no way to know.
Back then, I said that the future of self-driving is likely to be the growth in capability of "driver assistance" features to an asymptotic point that we will re-define as "level 5" in the distant future (or perhaps: the "levels" will be memory-holed altogether, only to reappear in retrospective, "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.
In lots of real-world problems you don't necessarily run into worst cases, and it often doesn't matter if the solution is the absolute optimal one.
That's not to discredit computational complexity theory at all. It's interesting and I think proofs about the limits of information processing required for solving computational problems do have philosophical value, and the theory might be relevant to the limits of intelligence. But just because some problems are intractable in terms of provably always finding correct or optimal answers doesn't mean we're near the limits of intelligence or problem-solving ability in that fuzzy area of finding practically useful solutions to lots of real-world cases.
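As a concrete (toy, my own) illustration of that last point: exact minimum vertex cover is NP-hard, yet a classic greedy matching heuristic runs in linear time and is provably within a factor of 2 of optimal, which is often all a real-world user needs.

    # Toy example of "good enough beats provably optimal" (my own sketch):
    # exact minimum vertex cover is NP-hard, but this greedy heuristic is
    # linear-time and never worse than 2x the optimum.

    def greedy_vertex_cover(edges):
        """Take both endpoints of every edge not yet covered."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # Path 0-1-2-3 plus chord 1-3; the optimal cover {1, 2} has size 2.
    edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
    print(greedy_vertex_cover(edges))  # {0, 1, 2, 3}: 2x optimal, found instantly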
UK productivity growth, 1990-2007: 2% per year
UK productivity growth, 2010-2019: 0.5% per year
So they're both right. US 50 year productivity growth looks great, UK 10 year productivity growth looks pretty awful.
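A rough compounding check on those figures (my own arithmetic, using only the rates quoted above):

    print((1.02 ** 17 - 1) * 100)       # UK, 2%/yr over 1990-2007  -> ~40% cumulative
    print((1.005 ** 9 - 1) * 100)       # UK, 0.5%/yr over 2010-2019 -> ~4.6% cumulative
    print((3.5 ** (1 / 50) - 1) * 100)  # US, +250% over ~50yr -> ~2.5%/yr implied

So the same half-century that compounds into "great" for the US would, at the recent UK rate, barely register.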
Perhaps some kind of guaranteed minimum income would be implemented, but we would probably see a shrinkage or complete destruction of the middle class, and massive increases in wealth inequality.
LLMs are a super impressive advancement, like calculators for text, but if you want to force the discussion into a grandiose context then they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but at the end of the day their fundamental nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding or just unlucky RNG. The only threat LLMs present is the risk that people will introduce their outputs into safety-critical systems.
Only in very simplistic theory. :(
In practical terms, businesses with high margins seem able to afford government protection (aka "buy some politicians").
So they lock out competition, and with their market captured, price gouging (or close to it) is the order of the day.
Not really sure why anyone thinks the playbook would be any different just because "AI" is used on the production side. It's still the same people making the calls, just with extra tools available to them.
Of course the problem is whether or not it could be controlled, and in that case, the best hope is simply 'it' being benevolent and naturally incentivized to create such a utopia.
Weird that the field of economics just keeps on existing.