It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can come in with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype/new LLM capability.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future, this all-powerful being will kill us all by somehow grabbing all power over the physical world, being clever enough to trick us until it is too late. This is literally the plot of a B-movie. Not only is there no evidence that this will even exist in the near future, there’s no theoretical understanding of how one would even do this, nor of why someone would even hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.
It’s bullshit. It’s pure bullshit, funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant, inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!
Anyone who takes this seriously is the exact same type of rube that has fallen for apocalyptic cults for millennia.
A) We are developing AI right now, and it is getting better.
B) We do not know exactly how these things work, because most of them are black boxes.
C) If something goes wrong, we do not know how to stop it.
The above three things are factually true.
Now your only argument here could be that there is zero risk whatsoever. That claim is totally unscientific, because you are predicting zero risk in an unknown system that is evolving.
It's religious, yes, but vice versa: the cult of the benevolent AI god is the religious one, not the other way around. There is some mysterious inner working in people like you and Marc Andreessen that popularized these ideas, but pmarca is clearly money-biased here.
I hadn’t encountered Pascal’s mugging (https://en.wikipedia.org/wiki/Pascal%27s_mugging) before, and the premise is indeed pretty apt. I think I’m on the side that it’s not one, though, assuming the idea is that it’s a Very Low Chance of a Very Bad Thing -- the “muggee” hands over their wallet because the magnitude of the VBT swamps its tiny probability. It seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.
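To make that structure concrete, here's a minimal sketch of the naive expected-value arithmetic behind the mugging (every number below is invented purely for illustration):

    # Toy expected-value arithmetic behind a Pascal's mugging.
    # All numbers are made up for illustration.
    wallet = 100                    # certain cost of complying
    catastrophe = 10**15            # magnitude of the threatened Very Bad Thing
    p_threat = 1e-12                # the "very low chance" the threat is real

    # Naive decision rule: comply if the expected loss from refusing
    # exceeds the certain loss of the wallet.
    expected_loss_if_refuse = p_threat * catastrophe   # 1e-12 * 1e15 = 1000
    print(expected_loss_if_refuse > wallet)            # True: huge stakes swamp
                                                       # the tiny probability

The mugging objection is that this arithmetic lets arbitrarily tiny probabilities be overridden by arbitrarily huge stakes; my point above is that if the probability isn't actually tiny, the mugging framing no longer applies.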
But maybe some Mass Effect nonsense will happen if we develop AGI and we’ll be approached by The Intergalactic Community and have our technology advanced millennia overnight. (Sorry, that’s tongue-in-cheek but it does kinda read like Pascal’s mugging in the opposite direction; however, that’s not really what most researchers are arguing.)
The value of looking at AI safety as a Pascal's mugging, as posited by the video, is that it shows these philosophical arguments are too malleable to be strictly useful. As you note, just find an "expert" who agrees.
The most useful frame for examination is the evidence (which to me means benchmarks). We'll be hard-pressed to derive anything authoritative from the philosophical approach. And I say that as someone who does his best to examine the evidence for and against the capabilities of these things... from Phi-1 to Llama to Orca to Gemini to Bard...
To my understanding, we struggle to strictly define intelligence and consciousness in humans at all, let alone in other "species". Granted, I'm no David Chalmers. Benchmarks seem inadequate for any number of reasons, and philosophical arguments seem too flexible; I don't know how one can speak definitively about these LLMs other than to tout benchmarks and capabilities/shortcomings.
>It seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.
Agree, and I tend towards it not exactly being a Pascal's mugging either, but I loved that video and it's always stuck with me. I've been watching that guy since GPT-2 and OpenAI's initial trepidation about releasing it for fear of misuse. He has lent me a lot of credibility in my small political circles: I'd been touting these things as coming for years, after watching the graphs of capability vs. parameter count/training time never plateau.
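For anyone who hasn't seen those graphs, here's a rough sketch of the shape they trace; the power-law form and constants loosely follow Kaplan et al.'s scaling-laws paper, but treat the numbers as illustrative assumptions, not fitted values:

    # Toy illustration of the "never plateau" scaling-law shape.
    # Power-law form per Kaplan et al. (2020); constants are illustrative.
    def loss(n_params, n_c=8.8e13, alpha=0.076):
        # Test loss as a power law in parameter count: L(N) = (N_c / N)^alpha
        return (n_c / n_params) ** alpha

    for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
        print(f"{n:.0e} params -> loss {loss(n):.3f}")
    # Loss keeps falling smoothly as N grows: a power law never flattens
    # on a log-log plot, which is the "no plateau" pattern in those graphs.

That's just the shape of the curve; whether loss improvements keep translating into new capabilities is of course the contested part.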
AI has also made me reevaluate my thoughts on open-sourcing things. Do we really think it wise to have GPT-6 or GPT-7 in the hands of every 4channer?
Re Mass Effect: that's so awesome. I have to play those games; that sounds like such a dope premise. I like the idea of turning the mugging around like that.