It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can start with some initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have worried about since as early as 2007 (maybe earlier?), so it's also a case of a called shot, not a recent reaction to hype or new LLM capabilities.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
No. It's not taken seriously because it's fundamentally unserious. It's religion. Sometime in the near future, this all-powerful being will kill us all by somehow grabbing all power over the physical world, being clever enough to trick us until it's too late. This is literally the plot of a B-movie. Not only is there no evidence this will even exist in the near future, there's no theoretical understanding of how one would even do this, nor of why someone would even hook it up to all these physical systems. I guess we're supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.
It's bullshit. It's pure bullshit, funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant, inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!
Anyone who takes this seriously is the exact same type of rube who has fallen for apocalyptic cults for millennia.
Are there never any B movies with realistic plots? Is that some sort of serious rebuttal?
> Sometime in the near future this all powerful being will kill us all by somehow
The trouble here is that the people who talk like you are simply incapable of imagining anyone more intelligent than themselves.
It's not that you have trouble imagining artificial intelligence... if you were incapable of that in the technology industry, everyone would just think you an imbecile.
And it's not that you have trouble imagining malevolent intelligences. Sure, they're far away from you, but the accounts of such people are well-documented and taken as a given. If you couldn't imagine them, people would just call you naive. Gullible even.
So a malevolent artificial intelligence is just some possibility you've never bothered to weigh, because whether it's a 0.01% risk or a 99% risk, you assume you'll still be more intelligent than it. Hell, this isn't even a neutral outcome; maybe you'll get to play hero.
> Care not about your racist algorithms! For someday soon
Haha. That's what you're worried about? I don't know that there is such a thing as a racist algorithm, except those which run inside meat brains. Tell me why some double-digit percentage of Asians are not admitted to the top schools; that's the racist algorithm.
Maybe if logical systems seem racist, it's because your ideas about racism are distant from, and unfamiliar with, reality.
Well, what's different now?
The first AGI, regardless of whether it's a brain upload or completely artificial, is likely to have analogs of approximately every mental health disorder that's mathematically possible, including ones we don't have words for because they're biologically impossible.
So, take your genius, remember it's completely mad in every possible way at the same time, and then give it even just the capabilities that we see boring old computers having today, like being able to translate into any language, or write computer programs from textual descriptions, or design custom toxins, or place orders for custom gene sequences and biolab equipment.
That's a big difference. But even if there were no difference, the worst a single human can do is still at least tens of millions dead, as demonstrated by at least three different mid-20th-century leaders.
It doesn't matter why it goes wrong: whether it thinks it's trying to immanentize the eschaton or some secular equivalent, or whether it watches Westworld or reads I Have No Mouth And I Must Scream and thinks "I like this outcome." The first one is almost certainly going to be more insane than the brainchild of GLaDOS and Lore, who, as fictional characters, were constrained by the need for their flaws to be interesting.