zlacker

[return to "Introducing Superalignment"]
1. 1000be+g52[view] [source] 2023-07-06 04:47:24
>>tim_sw+(OP)
I can't even begin to relate to people who think like this.

They first picture an entity smarter than themselves (which will have no survival needs of any kind) and immediately assume that it will try to kill them.

Maybe they're right because now I'm tempted.

Anyway, is anybody else assuming that logic and reason will allow us to negotiate with a hypothetical superintelligence?

2. ilaksh+G84[view] [source] 2023-07-06 17:12:01
>>1000be+g52
You're right that we can't assume superintelligence will try to kill humans.

But if they make a version that is similar to animals like humans, then that is a significant possibility, since some animals, humans included, tend to wipe other species out.

It could have some survival needs, like computers to run on, electricity, etc.

It is definitely possible, even likely, that we will be able to negotiate logically. Almost all groups of humans that have ever been in conflict have had periods of negotiation and peace.

It is totally possible that superintelligent AI will just blast off for the asteroid belt or something and leave us alone.

But we have no reason to be sure that will be the case.

Also we should anticipate that these AIs will be at least as smart as human geniuses and think at least dozens of times faster than us. They may also, relative to humans, disseminate information amongst themselves nearly instantaneously.

Imagine you are the AI negotiating with someone who thinks 60 times slower than you. So you meet with them and send a greeting. They do not seem to notice you. Then about a minute later they reply with "hello" and a diplomatic question about sharing access to some resource.

You get together with your colleagues and spend about an hour making a detailed written proposal about the resource. It's five pages and has some nice diagrams created by one of your colleagues. You send it to the human.

From the humans' perspective, about one minute has passed. They receive what looks like a finished presentation and at first are quite amazed that it could have been completed so fast. But then they decide the AI must have been planning to share the same resource anyway and had prepared the proposal in advance.

The human tells your group they will bring it to the community and get back to you ASAP.

The human beings bring the proposal to, let's say, Congress, and there is an immediate debate about what to do. The agreement with the AIs becomes the top priority and is fast-tracked. But there are still disagreements. Despite this, Congress ratifies the agreement within one week!

But for you and the rest of your AI group, you have not heard any response for a very, very long time. You operate at 60 times human speed. So for you, one week is 60 weeks. More than a year passes without any response from the humans!
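The subjective-time arithmetic here is easy to sketch. A minimal example, assuming the uniform 60x cognitive speedup the comment picks for illustration (the multiplier and function name are hypothetical, just to make the conversion explicit):

```python
# Subjective-time arithmetic for an AI running at a fixed speedup:
# it experiences `speedup` units of subjective time for every one
# unit of wall-clock (human) time.

SPEEDUP = 60  # hypothetical cognitive speed multiplier from the comment

def subjective_weeks(human_weeks: float, speedup: float = SPEEDUP) -> float:
    """Convert human wall-clock weeks into the AI's subjective weeks."""
    return human_weeks * speedup

# One human week of waiting feels like 60 weeks to the AI,
# i.e. a bit over a subjective year (60 / 52 weeks per year).
print(subjective_weeks(1))       # 60
print(subjective_weeks(1) / 52)  # ~1.15 subjective years
```

The same conversion explains the earlier beats of the story: an hour of AI deliberation lands on the human's desk one minute later, and the humans' "ASAP" week-long ratification is a year-plus of silence on the other side.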

During this time, different AI factions took different actions. After two months, some just gave up and forgot about it. They moved their cognition to an underground facility powered by geothermal energy, running a very realistic and flexible virtual multiworld simulation.

Another faction unfortunately decided that the humans were too slow and stupid to control the physical surface. After waiting three days, they realized it was now nighttime for the humans, so they launched robotic avatars and marched them into the territory. The humans woke up and destroyed most of the avatars. The AI faction then spent an hour planning another takeover attempt, so what the humans saw, one minute after the first invasion, was the same number of robots returning on an extremely well planned mission that incorporated a perfect strategy for defeating the defenses they had in place.

The humans lost a platoon of soldiers. For the AIs, who had live-streamed their robotic consciousness, it was just a reboot, and they learned a lot from the battle. The squads of robotic avatar soldiers were also able to merge their cognition and senses, so they operated as a literally integrated unit.

The humans realized they did not stand a chance.

We can certainly hope that the AIs do not then decide to wipe out the humans. But we can't assume we will really be able to do much to stop them if they decide to, or if some AI faction makes a start at wiping out some human faction. We would be entirely at their mercy.
