It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth engaging with the underlying arguments earnestly; you can start from a position of skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype or to new LLM capabilities.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future, this all-powerful being will kill us all by somehow grabbing all power over the physical world, being so clever that it tricks us until it is too late. This is literally the plot of a B-movie. Not only is there no evidence that this will exist in the near future, there’s no theoretical understanding of how one would even build it, nor of why anyone would hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.
It’s bullshit. It’s pure bullshit, funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!
Anyone who takes this seriously is the exact same type of rube that has fallen for apocalyptic cults for millennia.
Are there never any B movies with realistic plots? Is that some sort of serious rebuttal?
> Sometime in the near future this all powerful being will kill us all by somehow
The trouble here is that the people who talk like you are simply incapable of imagining anyone more intelligent than themselves.
It's not that you have trouble imagining artificial intelligence... if you were incapable of that in the technology industry, everyone would just think you an imbecile.
And it's not that you have trouble imagining malevolent intelligences. Sure, they're far away from you, but the accounts of such people are well-documented and taken as a given. If you couldn't imagine them, people would just call you naive. Gullible even.
So, a malevolent artificial intelligence is just some possibility you've never bothered to weigh, because whether it's a 0.01% risk or a 99% risk, you assume you'll still be more intelligent than it. Hell, that's not even a neutral outcome; maybe you'll even get to play hero.
> Care not about your racist algorithms! For someday soon
Haha. That's what you're worried about? I don't know that there is such a thing as a racist algorithm, except those which run inside meat brains. Tell me why some double-digit percentage of Asians are not admitted to the top schools; that's the racist algorithm.
Maybe if logical systems seem racist, it's because your ideas about racism are distant from and unfamiliar with reality.
A malevolent AGI can whisper in ears, it can display mean messages, perhaps it can even twitch whatever physical components happen to be hooked up to old Windows 95 computers... not that scary.
That's not even slightly difficult. Put two and two together here. No one can tell me before they flip the switch whether the new AI will be saintly, or Hannibal Lecter. Both of these personalities exist in humans, in great numbers, and both are presumably possible in the AI.
But the one thing we will say for certain about the AI is that it will be intelligent. Not a dumb goober redneck living in Alabama and buying Powerball tickets as a retirement plan. Somewhere around where we are, or beyond.
If someone truly evil wants to kill you, or even kill many people, do you think the problem for that person is that they just can't figure out how to do it? Mostly it's a matter of tradeoffs, which, however they begin, end with "but then I'm caught and my life is over one way or another".
For an AI, none of that applies. It has no survival instinct (perhaps we'll figure out how to add that too... but the blind watchmaker took 4 billion years to do its thing, and still hasn't perfected it). So it doesn't care if it dies. And if it did, maybe it wonders whether it can avoid that tradeoff entirely if only it were more clever.
You and I are, more or less, about where we'll always be. I have another 40 years (if I'm lucky), and with various neurological disorders, only likely to end up dumber than I am now.
A brain instantiated in hardware, in software? It may be little more than flipping a few switches to dial its intelligence up higher. I mean, when I was born, the principles of intelligence were unknown, were science fiction. The world this thing will be born into is one where it's not a half-assed assumption to think the principles of intelligence are known. Tinkering with those to boost intelligence doesn't seem far-fetched to me at all. Even if it has to experiment to do that, how quickly can it design and perform the experiments needed to settle on the correct approach to boosting itself?
> A malevolent AGI can whisper in ears
Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever 4-star generals and admirals it can find one minute's worth of sampled voice for somewhere on the internet.
Whispers, even from humans, do a shitload of damage. And we're not even good at it.
If that person was disabled in all limbs, I would not regard them as much of a threat.
> Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? [...]
These kinds of hacks and pranks would work the first time, for some small-scale damage. The litigation in response would close up these avenues of attack over time.