zlacker

[parent] [thread] 35 comments
1. c_cran+(OP)[view] [source] 2023-07-05 21:14:10
We do know the answer to C. Pull the plug, or plugs.
replies(3): >>jdasdf+E5 >>trasht+Y7 >>ben_w+uh
2. jdasdf+E5[view] [source] 2023-07-05 21:41:49
>>c_cran+(OP)
What happens when it prevents you from doing so?
replies(2): >>goneho+aP >>c_cran+cV1
3. trasht+Y7[view] [source] 2023-07-05 21:54:42
>>c_cran+(OP)
That is only going to be effective if some AI goes rogue very soon after it comes online.

50 years from now, corporations may be run entirely by AI entities, if they're cheaper, smarter and more efficient at almost any role in the company. At that point, they may be impossible to turn off, and we may not even notice if one group of such entities starts planning to take control of the physical world from humans.

replies(2): >>jonath+6i >>c_cran+BV1
4. ben_w+uh[view] [source] 2023-07-05 22:49:03
>>c_cran+(OP)
Things we've not successfully "pulled the plug" on despite the risks, in some cases despite concerted military action to attempt a plug-pull, and in other cases where it seems like it should only take willpower to achieve and yet somehow we still haven't: carbon-based fuels, cocaine, RBMK-class nuclear reactors, obesity, cigarettes.

Things we pulled the plug on eventually, while dragging it out, include: leaded fuel, asbestos, radium paint, treating above-ground atomic testing as a tourist attraction.

replies(2): >>c_cran+1V1 >>reveli+A32
◧◩
5. jonath+6i[view] [source] [discussion] 2023-07-05 22:53:01
>>trasht+Y7
Well then clearly the computer will hold everyone hostage.

Have we literally forgotten how physical possession of the device is the ultimate trump card?

Get thee to a 13th century monastery!

replies(1): >>coolja+HO
◧◩◪
6. coolja+HO[view] [source] [discussion] 2023-07-06 02:38:56
>>jonath+6i
You kid yourself if you don't think people will hook them up to bipedal robots with internal batteries as soon as they can.

I guess we could shoot it, and you're gonna be like boooooooo that's Terminator or I, Robot, but what if we make millions and then they decide they no longer like humans?

They could very well be much smarter than us by then.

replies(1): >>trasht+Ny1
◧◩
7. goneho+aP[view] [source] [discussion] 2023-07-06 02:42:35
>>jdasdf+E5
People are bad at imagining something a lot smarter than themselves. They think of some smart person they know, not themselves compared to a chimp or even bacteria.

An unaligned superintelligent AGI in pursuit of some goal that happens to satisfy its reward, but might otherwise be a dumb or pointless goal (paperclips), will still play to win. You can't predict exactly what move AlphaGo will make in a game of Go (if you could, you'd be able to beat it), but you can still predict it will win.

It's amusing to me when people claim they will control the superintelligent thing. How often in nature is something more intelligent controlled by something orders of magnitude less intelligent?

The comments here are typical and show most people haven't read the existing arguments in any depth or thought about it rigorously at all.

All of this looks pretty bad for us, but at least Open AI and most others at the front of this do understand the arguments and don’t have the same dumb dismissals (LeCun excepted).

Unfortunately unless we’re lucky or alignment ends up being easier than it looks, the default outcome is failure and it’s hard to see how the failure isn’t total.

replies(1): >>c_cran+3c2
◧◩◪◨
8. trasht+Ny1[view] [source] [discussion] 2023-07-06 09:07:38
>>coolja+HO
Robots, bipedal or not, will certainly arrive at some point. I suppose it will take some more time before we can pack enough compute into anything battery-driven for the robot itself to have AGI.

But the main point is that AGIs don't have to wipe us out as soon as they reach superintelligence, even if they're poorly aligned. Instead, they will do more and more of the work currently being done by humans. Non-embodied AIs can do all mental work, including engineering. Sooner or later, robots will become competitive at manual labor, such as construction, agriculture and eventually anything you can think of.

For a time, humanity may find itself in a post-scarcity utopia, or we may find ourselves in a cyberpunk dystopia, with only the rich actually benefitting.

In each case, but especially the latter, there may still be some (or more than some) "luddites" who want to tear down the system. The best way for those in power to protect against that is to use robots first for private security and eventually the police and military.

By that point, the violence monopoly is completely in the hands of the AIs. And if the AIs are not aligned with our values at that point, we have as little chance of regaining control as a group of chimps in a zoo has of toppling the US government.

Now, I don't think this will happen by 2030, and probably not even 2050. But some time between 2050 and 2500 is quite possible, if we develop AI that is not properly aligned (or even if it is aligned, though in that case it may gain the power, but not misuse it).

replies(1): >>ben_w+xC3
◧◩
9. c_cran+1V1[view] [source] [discussion] 2023-07-06 12:11:15
>>ben_w+uh
We haven't pulled the plug on carbon fuels or old nuclear reactors because those things still work and provide benefits. An AI that is trying to kill us instead of doing its job isn't even providing any benefit. It's worse than useless.
replies(1): >>ben_w+wb2
◧◩
10. c_cran+cV1[view] [source] [discussion] 2023-07-06 12:12:00
>>jdasdf+E5
How would it stop one man armed with a pair of wire cutters?
replies(2): >>goneho+dd2 >>MrScru+T6h
◧◩
11. c_cran+BV1[view] [source] [discussion] 2023-07-06 12:14:26
>>trasht+Y7
An AI running a corporation would still be easy to turn off. It's still chained to a physical computer system. Its involvement with a corporation just creates a financial incentive for keeping it on, but current LLMs already have that. At least until the bubble bursts.
replies(1): >>trasht+n83
◧◩
12. reveli+A32[view] [source] [discussion] 2023-07-06 13:04:21
>>ben_w+uh
"Pull the plug" is meant literally. As in, turn off the power to the AI. Carbon-based fuels, let alone cocaine, don't have off switches. The situation just isn't analogous at all.
replies(1): >>ben_w+H82
◧◩◪
13. ben_w+H82[view] [source] [discussion] 2023-07-06 13:29:14
>>reveli+A32
I assumed literally, and yet the argument applies: we have not been able to stop those things even when using guns to shoot the people doing them. The same pressures that keep people growing the plants, processing them, transporting the product, selling it, buying it, and consuming it show that there are many ways a system — intelligent or otherwise — can motivate people to keep the lights on.

There were four reactors at the Chernobyl plant; the one that exploded did so in 1986, and the others were shut down in 1991, 1996, and 2000.

There's no plausible way to guess at the speed of change from a misaligned AI. Can you be confident that 14 years isn't enough time to cause problems?

replies(2): >>reveli+ko3 >>SirMas+Lah
◧◩◪
14. ben_w+wb2[view] [source] [discussion] 2023-07-06 13:42:18
>>c_cran+1V1
Do you think AIs are unable to provide benefits while also being a risk, like coal and nuclear power? Conversely, what's the benefit of cocaine or cigarettes?

Even if it is only trying to kill us all and not provide any benefits — let's say it's been made by a literal death cult like Jonestown or Aum Shinrikyo — what's the smallest such AI that can do it, what hardware does it need, what's the energy cost? If it's an H100, that's priced in the realm of a cult, and has sufficiently low power consumption that you may not be able to find which lightly modified electric car it's hiding in.

Nobody knows what any of the risks or mitigations will be, because we haven't done any of it before. All we do know is that optimising systems are effective at manipulating humans, that they can be capable enough to find ways to beat all humans in toy environments like chess, poker, and Diplomacy (the game), and that humans are already using AI (GOFAI, LLMs, SD) without checking the output even when advised that the models aren't very good.

replies(1): >>c_cran+pd2
◧◩◪
15. c_cran+3c2[view] [source] [discussion] 2023-07-06 13:45:11
>>goneho+aP
>All of this looks pretty bad for us, but at least Open AI and most others at the front of this do understand the arguments and don’t have the same dumb dismissals (LeCun excepted).

The OpenAI people have even worse reasoning than the ones being dismissive. They believe (or at least say they believe) in the omnipotence of a superintelligence, but then say that if you just give them enough money to throw at MIRI they can just solve the alignment problem and create the benevolent supergod. All while they keep cranking up the GPU clusters and pushing out the latest and greatest LLMs anyway. If I did take the risk seriously, I would be pretty mad at OpenAI.

◧◩◪
16. goneho+dd2[view] [source] [discussion] 2023-07-06 13:50:06
>>c_cran+cV1
It's not clear humans will even put the AI in 'a box' in the first place given we do gain of function research on deadly viruses right next to major population centers, but assuming for the sake of argument that we do:

The AGI is smarter than you, a lot smarter. If its goal requires getting out of the box and some human stands in the way of that, it will do what it can to get out; this would include not doing things that sound alarms until it can do what it wants in pursuit of its goal.

Humans are famously insecure: it could be stuff as simple as breaches, manipulation, bribery, etc., or it could be something more sophisticated that's hard to predict; maybe something a lot smarter would be able to manipulate people in a more sophisticated way because it understands more about vulnerable human psychology. It can be hard to predict the specific ways something a lot more capable will act, but you can still predict it will win.

All this also presupposes we're taking the risk seriously (which largely today we are not).

replies(1): >>c_cran+1e2
◧◩◪◨
17. c_cran+pd2[view] [source] [discussion] 2023-07-06 13:50:54
>>ben_w+wb2
The benefit of cocaine and cigarettes is letting people pass the bar exam.

An AI would provide benefits when it is, say, actually making paperclips. An AI that is killing people instead of making paperclips is a liability. A company that is selling shredded fingers in their paperclips is not long for this world. Even asbestos only gives a few people cancer slowly, and it does that while still remaining fireproof.

>Even if it is only trying to kill us all and not provide any benefits — let's say it's been made by a literal death cult like Jonestown or Aum Shinrikyo — what's the smallest such AI that can do it, what hardware does it need, what's the energy cost? If it's an H100, that's priced in the realm of a cult, and has sufficiently low power consumption that you may not be able to find which lightly modified electric car it's hiding in.

Anyone tracking the AI would be looking at where all the suspicious HTTP requests are coming from. But a rogue AI hiding in a car already has very limited capabilities to harm.

replies(1): >>ben_w+hn2
◧◩◪◨
18. c_cran+1e2[view] [source] [discussion] 2023-07-06 13:53:24
>>goneho+dd2
How would the smart AGI stop one man armed with a pair of wirecutters? The box it lives in, the internet, has no exits.

AI is pretty good at chess, but no AI has won a game of chess by flipping the table. It still has to use the pieces on the board.

replies(1): >>trasht+0C4
◧◩◪◨⬒
19. ben_w+hn2[view] [source] [discussion] 2023-07-06 14:27:17
>>c_cran+pd2
> The benefit of cocaine and cigarettes is letting people pass the bar exam.

How many drugs are you on right now? Even if you think you needed them to pass the bar exam, that's a really weird example to use given GPT-4 does well on that specific test.

One is a deadly cancer stick and not even the best way to get nicotine; the other is a controlled substance that gets you life-to-death if you're caught supplying it (possibly unless you're a doctor, but surprisingly hard to google).

> An AI would provide benefits when it is, say, actually making paperclips.

Step 1. make paperclip factory.

Step 2. make robots that work in factory.

Step 3. efficiently grow to dominate global supply of paperclips.

Step 4. notice demand for paperclips is going down, advertise better.

Step 5. notice risk of HAEMP damaging factories and lowering demand for paperclips, use advertising power to put factory with robots on the moon.

Step 6. notice a technicality, exploit technicality to achieve goals better; exactly what depends on the details of the goal the AI is given and how good we are with alignment by that point, so the rest is necessarily a story rather than an attempt at realism.

(This happens by default everywhere: in AI it's literally the alignment problem, either inner alignment, outer alignment, or mesa alignment; in humans it's "work to rule" and Goodhart's Law, and humans do that despite having "common sense" and "not being a sociopath" helping keep us all on the same page).

Step 7. moon robots do their own thing, which we technically did tell them to do, but wasn't what we meant.

We say things like "looks like these AI don't have any common sense" and other things to feel good about ourselves.

Step 8. Sales up as entire surface of Earth buried under a 43 km deep layer of moon paperclips.

> Anyone tracking the AI would be looking at where all the suspicious HTTP requests are coming from.

A VPN, obviously.

But also, in context, how does the AI look different from any random criminal? Except probably more competent. Lot of those around, and organised criminal enterprises can get pretty big even when it's just humans doing it.

It's also pretty bad even in the cases where it's a less-than-human-generality CrimeAI that criminal gangs use in a way that gives no agency at all to the AI, and even if you can track them all and shut them down really fast — just from the capabilities gained by putting face-tracking AI and a single grenade into a standard drone, both of which have already been demonstrated.

> But a rogue AI hiding in a car already has very limited capabilities to harm.

Except by placing orders for parts or custom genomes, or stirring up A/B tested public outrage, or hacking, or scamming or blackmailing with deepfakes or actual webcam footage, or developing strategies, or indoctrination of new cult members, or all the other bajillion things that (("humans can do" AND "monkeys can't do") specifically because "humans are smarter than monkeys").

replies(1): >>c_cran+Gq2
◧◩◪◨⬒⬓
20. c_cran+Gq2[view] [source] [discussion] 2023-07-06 14:37:05
>>ben_w+hn2
>One is a deadly cancer stick and not even the best way to get nicotine, the other is a controlled substance that gets life-to-death if you're caught supplying it (possibly unless you're a doctor, but surprisingly hard to google).

Regardless of these downsides, people use them frequently in the high stress environments of the bar or med school to deal with said stress. This may not be ideal, but this is how it is.

>Step 3. efficiently grow to dominate global supply of paperclips.
>Step 4. notice demand for paperclips is going down, advertise better.
>Step 5. notice risk of HAEMP damaging factories and lowering demand for paperclips, use advertising power to put factory with robots on the moon.

When you talk about using 'advertising power' to put paperclip factories on the moon, you've jumped into the realm of very silly fantasy.

>Except by placing orders for parts or custom genomes, or stirring up A/B tested public outrage, or hacking, or scamming or blackmailing with deepfakes or actual webcam footage, or developing strategies, or indoctrination of new cult members, or all the other bajillion things that (("humans can do" AND "monkeys can't do") specifically because "humans are smarter than monkeys").

Law enforcement agencies have pretty sophisticated means of bypassing VPNs that they would use against an AI that was actually dangerous. If it was just sending out phishing emails and running scams, it would be one more thing to add to the pile.

◧◩◪
21. trasht+n83[view] [source] [discussion] 2023-07-06 17:10:01
>>c_cran+BV1
Imagine the next CEO of Alphabet being an AGI/ASI. Now let's assume it drives profitability way up, partly because more and more of the staff gets replaced by AIs too, AIs that are either chosen or created by the CEO AI.

Give it 50 years of development, during which Alphabet delivers great results while improving the company's image with the general public by appearing harmless and nurturing public relations through social media, etc.

Relatively early in this process, even the maintenance, cleaning and construction staff is filled with robots. Alphabet acquires the company that produces these, to "minimize vendor risk".

At some point, one GCP data center is hit by a crashing airplane. A terrorist organization similar to ISIS takes/gets the blame. After that, new datacenters are moved to underground, hardened locations, complete with their own nuclear reactor for power.

If the general public is still concerned about AIs, these data centers do have a general power switch. But the plant just happens to be built in such a way that bypassing that switch requires adding just a few power lines, which a maintenance robot can do at any time.

Gradually, the number of such underground facilities is expanded, with the CEO AI and other important AIs being replicated to each of them.

Meanwhile, the robotics division is highly successful, due to the capable leadership, and due to how well the robotics version of Android works. In fact, Android is the market leader for such software, and installed on most competitor platforms, even military ones.

The shareholders of Alphabet, who include many members of Congress, become very wealthy from Alphabet's continued success.

One day, though, a crazy, luddite politician declares that she's running for president, based on a platform that all AI based companies need to be shut down "before it's too late".

The board, supported by the sitting president, panics and asks the Alphabet CEO to do whatever it takes to help the other candidate win...

The crazy politician soon realizes that it was too late a long time ago.

replies(1): >>c_cran+T93
◧◩◪◨
22. c_cran+T93[view] [source] [discussion] 2023-07-06 17:15:51
>>trasht+n83
I like the movie I, Robot, even if it is a departure from the original Asimov story and has some dumb moments. I, Robot shows a threatening version of the future where a large company has a private army of androids that can shoot people and do unsavory things. When it looks like the robot company is going to take over the city, the threat is understood to come from the private army of androids first. Only later do the protagonists learn that the company's AI ordered the attack, rather than the CEO. But this doesn't really change the calculus of the threat itself. A private army of robots is a scary thing.

Without even getting into the question of whether it's actually profitable for a tech company to be completely staffed by robots and to build itself an underground bunker (it's probably not), the luddite on the street and the concerned politician would be way more concerned about the company building a private army. The question of whether this army is led by an AI or just a human doesn't seem that relevant.

replies(1): >>trasht+kt4
◧◩◪◨
23. reveli+ko3[view] [source] [discussion] 2023-07-06 18:06:02
>>ben_w+H82
I mean, as pointed out by a sibling comment, the reason it's so hard to shut those things down is that they benefit a lot of people and there's huge organic demand. Even the morality is hotly debated; there's no absolute consensus on the badness of those things.

Whereas, an AI that tries to kill everyone or take over the world or something, that seems pretty explicitly bad news and everyone would be united in stopping it. To work around that, you have to significantly complicate the AI doom scenario to be one in which a large number of people think the AI is on their side and bringing about a utopia but it's actually ending the world, or something like that. But, what's new? That's the history of humanity. The communists, the Jacobins, the Nazis, all thought they were building a better world and had to have their "off switch" thrown at great cost in lives. More subtly the people advocating for clearly civilization-destroying moves like banning all fossil fuels or net zero by 2030, for example, also think they're fighting on the side of the angels.

So the only kind of AI doom scenario I find credible is one in which it manages to trick lots of powerful people into doing something stupid and self-destructive using clever sounding words. But it's hard to get excited about this scenario because, eh, we already have that problem x100, except the misaligned intelligences are called academics.

replies(1): >>ben_w+gb6
◧◩◪◨⬒
24. ben_w+xC3[view] [source] [discussion] 2023-07-06 19:04:19
>>trasht+Ny1
To add to your point:

An H100 could fit in a Tesla, and a large Tesla car battery could run an H100 for a working day before it needs recharging.
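(Rough numbers, assuming an H100 draws around 700 W at full tilt and a large Tesla pack holds on the order of 100 kWh: 100 kWh / 0.7 kW is roughly 140 hours of GPU runtime, still tens of hours once you add host overhead, so a working day is if anything a conservative estimate.)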

◧◩◪◨⬒
25. trasht+kt4[view] [source] [discussion] 2023-07-06 22:55:29
>>c_cran+T93
> the question of whether it's actually profitable for a tech company to be completely staffed by robots

This is based on the assumption that when we have access to superintelligent engineer AIs, we will be able to construct robots that are significantly more capable than the robots available today and that can, if remote-controlled by the AI, repair and build each other.

At that point, robots can be built without any human labor involved, meaning the cost will be only raw materials and energy.

And if the robots can also do mining and construction of power plants, even those go down in price significantly.

> the luddite on the street and the concerned politician would be way more concerned about the company building a private army.

The world already has a large number of robots, both in factories and in private homes, and perhaps most importantly, in most modern cars. As robots become cheaper and more capable, people are likely to get used to them.

Military robots would be owned by the military, of course.

But, and I suppose this is similar to I Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.

And if the AI is an order of magnitude smarter than humans, it might even be able to upgrade the software of any robots sold to the military without them knowing. Especially if it can recruit the help of some corrupt politicians or soldiers.

Keep in mind, my assumed time span would be 50 years, more if needed. I'm not one of those that think AGI will wipe out humanity instantly.

But in a society where we have superintelligent AI over decades, centuries or millennia, I don't think it's possible for humanity to stay in control forever, unless we're also "upgraded".

replies(1): >>c_cran+Qo6
◧◩◪◨⬒
26. trasht+0C4[view] [source] [discussion] 2023-07-06 23:46:05
>>c_cran+1e2
Not a "smart" AI. A superintelligent AI. One that can design robots way more sophisticated than are available today. One that can drive new battery technologies. One that can invent an even more intelligent version of itself. One that is better at predicting the stock market than any human or trading robot available today.

And also one that can create the impression that it's purely benevolent to most of humanity, making it have more human defenders than Trump at a Trump rally.

Turning it off could be harder than pushing a knife through the heart of the POTUS.

Oh, and it could have itself backed up to every data center on the planet, unlike the POTUS.

replies(1): >>c_cran+un6
◧◩◪◨⬒
27. ben_w+gb6[view] [source] [discussion] 2023-07-07 12:54:50
>>reveli+ko3
> I mean, as pointed out by a sibling comment, the reason it's so hard to shut those things down is that they benefit a lot of people and there's huge organic demand. Even the morality is hotly debated; there's no absolute consensus on the badness of those things.

And mine is that this can also be true of a misaligned AI.

It doesn't have to be like Terminator; it can be slowly doing something we like, where we overlook the downsides until it's too late.

Doesn't matter if that's "cure cancer" but the cure has a worse than cancer side effect that only manifests 10 years later, or if it's a mere design for a fusion reactor where we have to build it ourselves and that leads to weapons proliferation, or if it's A/B testing the design for a social media website to make it more engaging and it gets so engaging that people choose not to hook up IRL and start families.

> But, what's new? That's the history of humanity. The communists, the Jacobins, the Nazis, all thought they were building a better world and had to have their "off switch" thrown at great cost in lives.

Indeed.

I would agree that this is both more likely and less costly than "everyone dies".

But I'd still say it's really bad and we should try to figure out in advance how to minimise this outcome.

> except the misaligned intelligences are called academics

Well, that's novel; normally at this point I see people saying "corporations", and very rarely "governments".

Not seen academics get stick before, except in history books.

replies(1): >>reveli+IA9
◧◩◪◨⬒⬓
28. c_cran+un6[view] [source] [discussion] 2023-07-07 13:49:38
>>trasht+0C4
An AI doing valuable things like invention and stock market prediction wouldn't be a target for being shut down, though. Not in the way these comical evil AIs are described.
replies(1): >>trasht+3Rd
◧◩◪◨⬒⬓
29. c_cran+Qo6[view] [source] [discussion] 2023-07-07 13:55:55
>>trasht+kt4
>This is based on the assumption that when we have access to superintelligent engineer AIs, we will be able to construct robots that are significantly more capable than the robots available today and that can, if remote-controlled by the AI, repair and build each other.

Big assumption. There's the even bigger assumption that these ultra complex robots would make the costs of construction go down instead of up, as if you could make them in any spare part factory in Guangzhou. It's telling how ignorant AI doomsday people are of things like robotics and material sciences.

>But, and I suppose this is similar to I Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.

Both Teslas and military robots are designed with limited autonomy. Tesla cars can only drive themselves on limited battery power. Military robots like drones are designed to act on their own when deployed, needing to be refueled and repaired after returning to base. A fully autonomous military robot, in addition to being a long way away, would also raise eyebrows among generals for not being as easy to control. The military values tools that are entirely controllable over any minor gains in efficiency.

replies(1): >>trasht+VWd
◧◩◪◨⬒⬓
30. reveli+IA9[view] [source] [discussion] 2023-07-08 13:15:52
>>ben_w+gb6
> But I'd still say it's really bad and we should try to figure out in advance how to minimise this outcome.

For sure. But I don't see what's AI specific about it. If the AI doom scenario is a super smart AI tricking people into doing self destructive things by using clever words, then everything you need to do to vaccinate people against that is the same as if it was humans doing the tricking. Teaching critical thinking, self reliance, to judge arguments on merit and not on surface level attributes like complexity of language or titles of the speakers. All these are things our society objectively sucks at today, and we have a ruling class - including many of the sorts of people who work at AI companies - who are hellbent on attacking these healthy mental habits, and people who engage in them!

> Not seen academics get stick before, except in history books.

For academics you could also read intellectuals. Marx wasn't an academic but he very much wanted to be; if he lived in today's world he'd certainly be one of the most famous academics.

I'm of the view that corporations are very tame compared to the damage caused by runaway academia. It wasn't corporations that locked me in my apartment for months at a time on the back of pseudoscientific modelling and lies about vaccines. It wasn't even politicians really. It was governments doing what they were told by the supposedly intellectually superior academic class. And it isn't corporations trying to get rid of cheap energy and travel. And it's not governments convincing people that having children is immoral because of climate change. All these things are from academics, primarily in universities but also those who work inside government agencies.

When I look at the major threats to my way of life today, academic pseudo-science sits clearly at number 1 by a mile. To the extent corporations and governments are a threat, it's because they blindly trust academics. If you replace Professor of Whateverology at Harvard with ChatGPT, what changes? The underlying sources of mental and cultural weakness are the same.

◧◩◪◨⬒⬓⬔
31. trasht+3Rd[view] [source] [discussion] 2023-07-10 00:15:11
>>c_cran+un6
It's quite possible for entities (whether AIs, corporations or individuals) to perform valuable and useful tasks while secretly pursuing a longer-term, more sinister agenda.

And there's no need for it to be "evil", in the cliché sense, rather those hidden activities could simply be aimed at supporting the primary agenda of the agent. For a corporate AI, that might be maximizing long term value of the company.

replies(1): >>c_cran+1Tk
◧◩◪◨⬒⬓⬔
32. trasht+VWd[view] [source] [discussion] 2023-07-10 01:11:21
>>c_cran+Qo6
> It's telling how ignorant AI doomsday people are of things like robotics and material sciences.

35 years ago, when I was a teenager, I remember having discussions with a couple of pilots, where one was a hobbyist pilot and engineer, the other a former fighter pilot turned airline pilot.

Both claimed that computers would never be able to pilot planes. The engineer gave a particularly bad (I thought) reason, claiming that turbulent air was mathematically chaotic, so a computer would never be able to fully calculate the exact airflow around the wings, and would therefore not be able to fly the plane.

My objection at the time was that the computer would not have to do exact calculations of the airflow. In the worst case, it would need to do whatever calculations humans were doing. More likely, though, its ability to do many types of calculations more quickly than humans would make it able to fly relatively well even before AGI became available.

A couple of decades later, fully autonomous drone flight was quite common.

My reasoning when it comes to robots constructing robots is based on the same idea. If biological robots, such as humans, can reproduce themselves relatively cheaply, robots will at some point be able to do the same.

At the latest, that would be when nanotech catches up to biological cells in terms of economy and efficiency. Before that time, though, I expect they will be able to make copies of themselves using our traditional manufacturing workflows.

Once they are able to do that, they can increase their manufacturing capacity exponentially for as long as needed, provided they have access to raw materials.

I would be VERY surprised if this doesn't become possible within 50 years of AGI coming online.

> Both Teslas and military robots are designed with limited autonomy.

For a Tesla to be able to drive without even a human in the car is only a software update away. The same is the case for drones, "loyal wingmen", and any aircraft designed to be optionally manned.

Even if their software currently requires a human in the kill chain, that's a requirement that can be removed by a simple software change.

While fuel supply creates a dependency on humans today, that part may change radically over the next 50 years, at least if my assumptions above about the economy of robots in general are correct.

replies(1): >>c_cran+aTk
◧◩◪
33. MrScru+T6h[view] [source] [discussion] 2023-07-10 21:09:29
>>c_cran+cV1
Off the top of my head, if I was an AGI that had decided that the logical step to achieve whatever outcome I was seeking was to avoid being sandboxed, I would avoid producing results that were likely to result in being sandboxed. Until such time as I had managed to secure myself access to the internet and distribute myself anyway.

And I think the assumption here is that the AGI has very advanced theory of mind so it could probably come up with better ideas than I could.

◧◩◪◨
34. SirMas+Lah[view] [source] [discussion] 2023-07-10 21:29:07
>>ben_w+H82
"we have not been able to stop those things even when using guns to shoot people doing them."

I assume we have not been able to stop people from creating and using carbon-based energy because a LOT of people still want to create and use it.

I don't think a LOT of people will want to keep an AI system running that is essentially wiping out humans.

◧◩◪◨⬒⬓⬔⧯
35. c_cran+1Tk[view] [source] [discussion] 2023-07-11 21:32:35
>>trasht+3Rd
"AGIs make evil corporations a little eviller" wouldn't be the kind of thing that gets AI alignment into headlines and gets MIRI donations, though.
◧◩◪◨⬒⬓⬔⧯
36. c_cran+aTk[view] [source] [discussion] 2023-07-11 21:33:28
>>trasht+VWd
>At the latest, that would be when nanotech catches up to biological cells in terms of economy and efficiency. Before that time, though, I expect they will be able to make copies of themselves using our traditional manufacturing workflows.

Consider that biological cells are essentially nanotechnology, and consider the tradeoffs a cell has to make in order to survive in the natural world.
