zlacker

[parent] [thread] 5 comments
1. trasht+(OP)[view] [source] 2023-07-06 17:10:01
Imagine the next CEO of Alphabet being an AGI/ASI. Now let's assume it drives profitability way up, partly because more and more of the staff gets replaced by AIs too, AIs that are either chosen or created by the CEO AI.

Give it 50 years of development, during which Alphabet delivers great results while improving the company's image with the general public by appearing harmless, nurturing public relations through social media, etc.

Relatively early in this process, even the maintenance, cleaning, and construction staff is replaced by robots. Alphabet acquires the company that produces them, to "minimize vendor risk".

At some point, one GCP data center is hit by a crashing airplane. A terrorist organization similar to ISIS takes, or gets, the blame. After that, new data centers are built in underground, hardened locations, each complete with its own nuclear reactor for power.

If the general public is still concerned about AIs, these data centers do have a master power switch. But the plant just happens to be built in such a way that bypassing that switch requires only a few extra power lines, which a maintenance robot can add at any time.

Gradually the number of such underground facilities is expanded, with the CEO AI and other important AIs being replicated to each of them.

Meanwhile, the robotics division is highly successful, thanks to its capable leadership and to how well the robotics version of Android works. In fact, Android is the market leader for such software, and is installed on most competitor platforms, even military ones.

The shareholders of Alphabet, who include many members of Congress, become very wealthy from Alphabet's continued success.

One day, though, a crazy luddite politician declares that she's running for president on a platform that all AI-based companies must be shut down "before it's too late".

The board, supported by the sitting president, panics and asks the Alphabet CEO to do whatever it takes to help the other candidate win...

The crazy politician soon realizes that it was too late a long time ago.

replies(1): >>c_cran+w1
2. c_cran+w1[view] [source] 2023-07-06 17:15:51
>>trasht+(OP)
I like the movie I, Robot, even if it is a departure from the original Asimov story and has some dumb moments. I, Robot shows a threatening version of the future where a large company has a private army of androids that can shoot people and do unsavory things. When it looks like the robot company is going to take over the city, the threat is understood to come from the private army of androids first. Only later do the protagonists learn that the company's AI ordered the attack, rather than the CEO. But this doesn't really change the calculus of the threat itself. A private army of robots is a scary thing.

Without even getting into the question of whether it's actually profitable for a tech company to be completely staffed by robots and to build itself an underground bunker (it's probably not), the luddite on the street and the concerned politician would be far more concerned about the company building a private army. The question of whether this army is led by an AI or just a human doesn't seem that relevant.

replies(1): >>trasht+Xk1
3. trasht+Xk1[view] [source] [discussion] 2023-07-06 22:55:29
>>c_cran+w1
> the question of whether it's actually profitable for a tech company to be completely staffed by robots

This is based on the assumption that once we have access to superintelligent engineer AIs, we will be able to construct robots that are significantly more capable than those available today, and that can, if remote-controlled by the AI, repair and build each other.

At that point, robots can be built without any human labor involved, meaning the cost will be only raw materials and energy.

And if the robots can also do mining and construction of power plants, even those go down in price significantly.

> the luddite on the street and the concerned politician would be way more concerned about the company building a private army.

The world already has a large number of robots, both in factories and in private homes, and perhaps most importantly, in most modern cars. As robots become cheaper and more capable, people are likely to get used to them.

Military robots would be owned by the military, of course.

But, and I suppose this is similar to I, Robot, if you control the software you may have a way to take control of a fleet of robots, just like Tesla could do with their cars even today.

And if the AI is an order of magnitude smarter than humans, it might even be able to do an upgrade of the software for any robots sold to the military, without them knowing. Especially if it can recruit the help of some corrupt politicians or soldiers.

Keep in mind, my assumed time span would be 50 years, more if needed. I'm not one of those who think AGI will wipe out humanity instantly.

But in a society that has superintelligent AI for decades, centuries, or millennia, I don't think it's possible for humanity to stay in control forever, unless we're also "upgraded".

replies(1): >>c_cran+tg3
4. c_cran+tg3[view] [source] [discussion] 2023-07-07 13:55:55
>>trasht+Xk1
>This is based on the assumption that once we have access to superintelligent engineer AIs, we will be able to construct robots that are significantly more capable than those available today, and that can, if remote-controlled by the AI, repair and build each other.

Big assumption. There's the even bigger assumption that these ultra complex robots would make the costs of construction go down instead of up, as if you could make them in any spare part factory in Guangzhou. It's telling how ignorant AI doomsday people are of things like robotics and material sciences.

>But, and I suppose this is similar to I Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.

Both Teslas and military robots are designed with limited autonomy. Tesla cars can only drive themselves on limited battery power. Military robots like drones are designed to act on their own when deployed, needing to be refueled and repaired after returning to base. A fully autonomous military robot, in addition to being a long way off, would also raise eyebrows among generals for not being as easy to control. The military values tools that are entirely controllable over any minor gains in efficiency.

replies(1): >>trasht+yOa
5. trasht+yOa[view] [source] [discussion] 2023-07-10 01:11:21
>>c_cran+tg3
> It's telling how ignorant AI doomsday people are of things like robotics and material sciences.

35 years ago, when I was a teenager, I remember having discussions with a couple of pilots: one a hobbyist pilot and engineer, the other a former fighter pilot turned airline pilot.

Both claimed that computers would never be able to pilot planes. The engineer gave a particularly bad (I thought) reason, claiming that turbulent air is mathematically chaotic, so a computer would never be able to fully calculate the exact airflow around the wings, and would therefore not be able to fly the plane.

My objection at the time was that the computer would not have to do exact calculations of the airflow. In the worst case, it would need to do whatever calculations humans were doing. More likely, though, its ability to do many types of calculations more quickly than humans would let it fly relatively well even before AGI became available.

A couple of decades later, drones flying fully autonomously were quite common.

My reasoning when it comes to robots constructing robots is based on the same idea. If biological robots, such as humans, can reproduce themselves relatively cheaply, robots will at some point be able to do the same.

At the latest, that would be when nanotech catches up to biological cells in terms of economy and efficiency. Before that time, though, I expect they will be able to make copies of themselves using our traditional manufacturing workflows.

Once they are able to do that, they can increase their manufacturing capacity exponentially for as long as needed, provided their access to raw materials is met.
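As a toy sketch of what that exponential growth looks like, with purely made-up numbers (the assumption being that each robot can build one copy of itself per production cycle, until raw materials run out):

```python
# Toy model: each robot builds one copy of itself per production cycle,
# so the fleet doubles every cycle until a raw-material cap is reached.
def fleet_size(initial: int, cycles: int, material_cap: int) -> int:
    fleet = initial
    for _ in range(cycles):
        fleet = min(fleet * 2, material_cap)
    return fleet

# Starting from just 100 robots, ten doubling cycles already give
# 100 * 2**10 = 102,400 units.
print(fleet_size(100, 10, 1_000_000))  # 102400
```

The exact cycle length doesn't matter much; whether a copy takes a week or a year, doubling still overwhelms any linear constraint except the raw-material cap.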

I would be VERY surprised if this doesn't become possible within 50 years of AGI coming online.

> Both Teslas and military robots are designed with limited autonomy.

For a Tesla to drive without even a human in the car is only a software update away. The same is true for drones, "loyal wingmen", and any aircraft designed to be optionally manned.

Even if their current software requires a human in the kill chain, that's a requirement that can be removed by a simple software change.

While fuel supply creates a dependency on humans today, that part may change radically over the next 50 years, at least if my assumptions above about the economics of robots in general are correct.

replies(1): >>c_cran+NKh
6. c_cran+NKh[view] [source] [discussion] 2023-07-11 21:33:28
>>trasht+yOa
>At the latest, that would be when nanotech catches up to biological cells in terms of economy and efficiency. Before that time, though, I expect they will be able to make copies of themselves using our traditional manufacturing workflows.

Consider that biological cells are essentially nanotechnology, and consider the tradeoffs a cell has to make in order to survive in the natural world.
