It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can start with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's also a case of a called shot, not a recent reaction to hype or new LLM capability.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future this all powerful being will kill us all by somehow grabbing all power over the physical world by being so clever to trick us until it is too late. This is literally the plot to a B-movie. Not only is there no evidence for this even existing in the near future, there’s no theoretical understanding how one would even do this, nor why someone would even hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to just spontaneously hack its way into every system without anyone noticing.
It’s bullshit. It’s pure bullshit funded and spread by the very people that do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!
Anyone that takes this seriously is the exact same type of rube that has fallen for apocalyptic cults for millennia.
A) We are developing AI right now and it is getting better.
B) We do not know exactly how these things work, because most of them are black boxes.
C) If something goes wrong, we do not know how to stop it.
The above three things are factual truths.
Now your only argument here could be that there is 0 risk whatsoever. This claim is totally unscientific because you are predicting 0 risk in an unknown system that is evolving.
It's religious, yes. But vice versa: the cult of the benevolent AI god is the religious one, not the other way around. There is some kind of mysterious inner working in people like you and Marc Andreessen that popularized these ideas, but pmarca is clearly money-biased here.
50 years from now, corporations may be run entirely by AI entities, if they're cheaper, smarter and more efficient at almost any role in the company. At that point, they may be impossible to turn off, and we may not even notice if one group of such entities starts to plan to take over control of the physical world from humans.
Have we literally forgotten how physical possession of the device is the ultimate trump card?
Get thee to a 13th century monastery!
I guess we could shoot it, and you're gonna be like boooooooo that's Terminator or iRobot, but what if we make millions and then they decide they no longer like humans?
They could very well be much smarter than us by then.
But the main point is that AGIs don't have to wipe us out as soon as they reach superintelligence, even if they're poorly aligned. Instead, they will do more and more of the work currently being done by humans. Non-embodied AIs can do all mental work, including engineering. Sooner or later, robots will become competitive at manual labor, such as construction, agriculture and eventually anything you can think of.
For a time, humanity may find itself in a post-scarcity utopia, or we may find ourselves in a Cyberpunk dystopia, with only the rich actually benefiting.
In each case, but especially the latter, there may still be some (or more than some) "luddites" who want to tear down the system. The best way for those in power to protect against that is to use robots first for private security and eventually the police and military.
By that point, the violence monopoly is completely in the hands of the AIs. And if the AIs are not aligned with our values at that point, we have as little of a shot at regaining control as a group of chimps in a zoo has of toppling the US government.
Now, I don't think this will happen by 2030, and probably not even 2050. But some time between 2050 and 2500 is quite possible, if we develop AI that is not properly aligned (or even if it is aligned, though in that case it may gain the power, but not misuse it).
An H100 could fit in a Tesla, and a large Tesla car battery could run an H100 for a working day before it needs recharging.
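A rough sanity check on that claim, assuming an H100 board power of about 700 W and a large Tesla pack of about 100 kWh (both figures are approximations, not official specs):

```python
# Back-of-envelope: how long could a ~100 kWh Tesla pack run one H100?
# Both numbers below are rough assumptions, not exact specs.
h100_draw_w = 700          # approximate H100 SXM board power, in watts
pack_capacity_kwh = 100.0  # approximate capacity of a large Tesla pack

runtime_hours = pack_capacity_kwh * 1000 / h100_draw_w
print(f"{runtime_hours:.0f} hours")  # -> 143 hours, far more than an 8-hour day
```

Even allowing for inverter losses and cooling overhead, the pack comfortably covers a working day of compute.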