It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can start with an initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's also a case of a called shot, not a recent reaction to hype or new LLM capability.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future this all-powerful being will kill us all by somehow grabbing all power over the physical world, being clever enough to trick us until it is too late. This is literally the plot of a B-movie. Not only is there no evidence for this even existing in the near future, there’s no theoretical understanding of how one would even do this, nor why someone would even hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.
It’s bullshit. It’s pure bullshit, funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!
Anyone that takes this seriously is the exact same type of rube that has fallen for apocalyptic cults for millennia.
A) We are developing AI right now, and it is getting better.
B) We do not know exactly how these things work, because most of them are black boxes.
C) We do not know how to stop it if something goes wrong.
The above 3 things are factually true.
Now your only argument here could be that there is 0 risk whatsoever. This claim is totally unscientific because you are predicting 0 risk in an unknown system that is evolving.
It's religious, yes, but vice versa: the cult of the benevolent AI god is the religious one, not the other way around. There is some kind of mysterious inner working in people like you and Marc Andreessen who popularized these ideas, though pmarca is clearly money-biased here.
50 years from now, corporations may be run entirely by AI entities, if they're cheaper, smarter and more efficient at almost any role in the company. At that point, they may be impossible to turn off, and we may not even notice if one group of such entities starts to plan to take over control of the physical world from humans.
Give it 50 years of development, throughout which Alphabet delivers great results while improving the company's image with the general public by appearing harmless and nurturing public relations through social media, etc.
Relatively early in this process, even the maintenance, cleaning and construction staff are replaced by robots. Alphabet acquires the company that produces these, to "minimize vendor risk".
At some point, one GCP data center is hit by a crashing airplane. A terrorist organization similar to ISIS takes/gets the blame. After that, new data centers are moved to underground, hardened locations, complete with their own nuclear reactors for power.
If the general public is still concerned about AIs, these data centers do have a master power switch. But each plant just happens to be built in such a way that bypassing that switch requires only a few power lines, which a maintenance robot can add at any time.
Gradually, the number of such underground facilities is expanded, with the CEO AI and other important AIs being replicated to each of them.
Meanwhile, the robotics division is highly successful, due to the capable leadership, and due to how well the robotics version of Android works. In fact, Android is the market leader for such software, and installed on most competitor platforms, even military ones.
The shareholders of Alphabet, who include many members of Congress, become very wealthy from Alphabet's continued success.
One day, though, a crazy, luddite politician declares that she's running for president, based on a platform that all AI based companies need to be shut down "before it's too late".
The board, supported by the sitting president, panics and asks the Alphabet CEO to do whatever it takes to help the other candidate win...
The crazy politician soon realizes that it was too late a long time ago.
Without even getting into the question of whether it's actually profitable for a tech company to be completely staffed by robots and to build itself an underground bunker (it's probably not), the luddite on the street and the concerned politician would be far more concerned about the company building a private army. The question of whether this army is led by an AI or just a human doesn't seem that relevant.
This is based on the assumption that when we have access to superintelligent engineer AIs, we will be able to construct robots that are significantly more capable than those available today and that can, if remote controlled by the AI, repair and build each other.
At that point, robots can be built without any human labor involved, meaning the cost will be only raw materials and energy.
And if the robots can also do mining and construction of power plants, even those go down in price significantly.
> the luddite on the street and the concerned politician would be way more concerned about the company building a private army.
The world already has a large number of robots, both in factories and in private homes, and perhaps most importantly, most modern cars. As robots become cheaper and more capable, people are likely to get used to them.
Military robots would be owned by the military, of course.
But, and I suppose this is similar to I, Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.
And if the AI is an order of magnitude smarter than humans, it might even be able to push a software upgrade to any robots sold to the military without them knowing, especially if it can recruit the help of some corrupt politicians or soldiers.
Keep in mind, my assumed time span would be 50 years, more if needed. I'm not one of those that think AGI will wipe out humanity instantly.
But in a society where we have superintelligent AI over decades, centuries or millennia, I don't think it's possible for humanity to stay in control forever, unless we're also "upgraded".
Big assumption. There's the even bigger assumption that these ultra-complex robots would make the costs of construction go down instead of up, as if you could make them in any spare-parts factory in Guangzhou. It's telling how ignorant AI doomsday people are of things like robotics and materials science.
> But, and I suppose this is similar to I, Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.
Both Teslas and military robots are designed with limited autonomy. Tesla cars can only drive themselves on limited battery power. Military robots like drones are designed to act on their own when deployed, needing to be refueled and repaired after returning to base. A fully autonomous military robot, in addition to being a long way off, would also raise eyebrows among generals for not being as easy to control. The military values tools that are entirely controllable over minor gains in efficiency.
35 years ago, when I was a teenager, I remember having discussions with a couple of pilots, where one was a hobbyist pilot and engineer, the other a former fighter pilot turned airline pilot.
Both claimed that computers would never be able to pilot planes. The engineer gave a particularly bad (I thought) reason, claiming that turbulent air was mathematically chaotic, so a computer would never be able to fully calculate the exact airflow around the wings, and would therefore not be able to fly the plane.
My objection at the time was that the computer would not have to do exact calculations of the airflow. In the worst case, it would need to do whatever calculations humans were doing. More likely, though, computers' ability to do many types of calculations more quickly than humans would make them able to fly relatively well even before AGI became available.
A couple of decades later, drones flying fully autonomously were quite common.
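Here's a toy sketch of that point (the one-dimensional "altitude" model and all gains and numbers are hypothetical, nothing like a real autopilot): a simple PID feedback loop holds a setpoint against noisy, unpredictable disturbances without ever computing the disturbance itself; it only reacts to the measured error.

    import random

    # Toy illustration, not a flight model: a PID controller holding an
    # altitude setpoint against random disturbances. The controller never
    # models the "chaotic" air; it only corrects the measured error.
    def pid_step(error, state, kp=2.0, ki=0.5, kd=1.0, dt=0.1):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    altitude, velocity, setpoint, dt = 0.0, 0.0, 100.0, 0.1
    state = {"integral": 0.0, "prev_error": setpoint - altitude}
    for step in range(2000):
        turbulence = random.uniform(-5.0, 5.0)  # stands in for chaotic air
        thrust = pid_step(setpoint - altitude, state)
        velocity += (thrust + turbulence) * dt
        altitude += velocity * dt
    print(round(altitude, 1))  # settles near 100 despite the noise

The feedback loop is the whole trick: you don't solve the chaos, you measure and correct.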
My reasoning when it comes to robots constructing robots is based on the same idea. If biological robots, such as humans, can reproduce themselves relatively cheaply, robots will at some point be able to do the same.
At the latest, that would be when nanotech catches up to biological cells in terms of economy and efficiency. Before that time, though, I expect they will be able to make copies of themselves using our traditional manufacturing workflows.
Once they are able to do that, they can increase their manufacturing capacity exponentially for as long as needed, provided their access to raw materials is maintained.
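A hedged back-of-the-envelope version of that exponential claim, with all numbers hypothetical (one seed robot, six months for a robot to build a copy of itself, no breakdowns or resource limits):

    # Toy doubling model; the replication time and horizon are assumptions,
    # and breakdowns, energy and raw-material limits are ignored.
    robots = 1
    months_per_copy = 6  # hypothetical: one robot builds one robot in 6 months
    for year in range(1, 21):
        robots *= 2 ** (12 // months_per_copy)  # doublings per year
        if year % 5 == 0:
            print(f"year {year:2d}: {robots:,} robots")

Under those toy assumptions the fleet passes a trillion units around year 20, which is exactly why access to raw materials becomes the binding constraint.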
I would be VERY surprised if this doesn't become possible within 50 years of AGI coming online.
> Both Teslas and military robots are designed with limited autonomy.
For a Tesla, being able to drive without even a human in the car is only a software update away. The same is the case for drone "loyal wingmen" and any aircraft designed to be optionally manned.
Even if their current software requires a human in the kill chain, that's a requirement that can be removed by a simple software change.
While fuel supply creates a dependency on humans today, that part may change radically over the next 50 years, at least if my assumptions above about the economics of robots in general are correct.
Consider that biological cells are essentially nanotechnology, and consider the tradeoffs a cell has to make in order to survive in the natural world.