zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9[view] [source] 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems to go out of its way to use purposefully exuberant language to make the risks seem even more significant, so that, as an offshoot, it implies the technology being worked on is that advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, it's just about the norm everywhere else (see Tesla with FSD, etc.).
◧◩
2. goneho+gf[view] [source] 2023-07-05 17:58:33
>>Chicag+m9
The extinction risk from unaligned superintelligent AGI is real; it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high-status to take seriously. People often have an initial knee-jerk negative reaction to it (for not-crazy reasons: lots of stuff is overhyped), but that doesn't make it wrong.

It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/

It's worth looking at the underlying arguments earnestly; you can start with an initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype or new LLM capabilities.

Others have also changed their mind when they looked, for example:

- https://twitter.com/repligate/status/1676507258954416128?s=2...

- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...

For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...

◧◩◪
3. jonath+tU[view] [source] 2023-07-05 20:49:31
>>goneho+gf
> The extinction risk from unaligned superintelligent AGI is real; it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high-status to take seriously.

No. It’s not taken seriously because it’s fundamentally unserious. It’s religion: sometime in the near future, this all-powerful being will kill us all by somehow grabbing all power over the physical world, being so clever that it tricks us until it is too late. This is literally the plot of a B-movie. Not only is there no evidence for this even existing in the near future, there’s no theoretical understanding of how one would even do this, nor of why someone would hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.

It’s bullshit. It’s pure bullshit, funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon a giant squid robot will turn you into a giant, inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!

Anyone who takes this seriously is the exact same type of rube who has fallen for apocalyptic cults for millennia.

◧◩◪◨
4. arisAl+HV[view] [source] 2023-07-05 20:54:42
>>jonath+tU
What you say is extremely unscientific. If you believe science and logic go hand in hand, then:

A) We are developing AI right now, and it is getting better.

B) We do not know exactly how these things work, because most of them are black boxes.

C) We do not know how to stop them if something goes wrong.

The above three things are factually true.

Now, your only argument here could be that there is zero risk whatsoever. That claim is totally unscientific, because you are predicting zero risk in an unknown system that is evolving.

It's religious, yes, but in reverse: the cult of the benevolent AI god is the religious one, not the other way around. There is some kind of mysterious inner working in people like you and Marc Andreessen, who popularized these ideas, but pmarca is clearly money-biased here.

◧◩◪◨⬒
5. c_cran+PZ[view] [source] 2023-07-05 21:14:10
>>arisAl+HV
We do know the answer to C. Pull the plug, or plugs.
◧◩◪◨⬒⬓
6. trasht+N71[view] [source] 2023-07-05 21:54:42
>>c_cran+PZ
That is only going to be effective if some AI goes rogue very soon after it comes online.

50 years from now, corporations may be run entirely by AI entities, if they're cheaper, smarter, and more efficient in almost any role in the company. At that point, they may be impossible to turn off, and we may not even notice if one group of such entities starts to plan to take over control of the physical world from humans.

◧◩◪◨⬒⬓⬔
7. c_cran+qV2[view] [source] 2023-07-06 12:14:26
>>trasht+N71
An AI running a corporation would still be easy to turn off; it's still chained to a physical computer system. Its involvement with a corporation just creates a financial incentive for keeping it on, but current LLMs already have that. At least until the bubble bursts.
◧◩◪◨⬒⬓⬔⧯
8. trasht+c84[view] [source] 2023-07-06 17:10:01
>>c_cran+qV2
Imagine the next CEO of Alphabet being an AGI/ASI. Now let's assume it drives profitability way up, partly because more and more of the staff is replaced by AIs too, AIs that are either chosen or created by the CEO AI.

Give it 50 years of development, during all of which Alphabet delivers great results while improving its image with the general public by appearing harmless and nurturing public relations through social media, etc.

Relatively early in this process, even the maintenance, cleaning, and construction staff is replaced by robots. Alphabet acquires the company that produces them, to "minimize vendor risk".

At some point, one GCP data center is hit by a crashing airplane. A terrorist organization similar to ISIS takes, or gets, the blame. After that, new data centers are moved to hardened underground locations, complete with their own nuclear reactors for power.

If the general public is still concerned about AIs, these data centers do have a master power switch. But the plant just happens to be built in such a way that bypassing that switch requires only a few extra power lines, which a maintenance robot can add at any time.

Gradually, the number of such underground facilities is expanded, with the CEO AI and other important AIs being replicated to each of them.

Meanwhile, the robotics division is highly successful, due to the capable leadership, and due to how well the robotics version of Android works. In fact, Android is the market leader for such software, and installed on most competitor platforms, even military ones.

The shareholders of Alphabet, who include many members of Congress, become very wealthy from Alphabet's continued success.

One day, though, a crazy Luddite politician declares that she's running for president on a platform that all AI-based companies must be shut down "before it's too late".

The board, supported by the sitting president, panics and asks the Alphabet CEO to do whatever it takes to help the other candidate win...

The crazy politician soon realizes that it was too late a long time ago.

◧◩◪◨⬒⬓⬔⧯▣
9. c_cran+I94[view] [source] 2023-07-06 17:15:51
>>trasht+c84
I like the movie I, Robot, even if it is a departure from the original Asimov story and has some dumb moments. I, Robot shows a threatening version of the future where a large company has a private army of androids that can shoot people and do unsavory things. When it looks like the robot company is going to take over the city, the threat is understood to come from the private army of androids first. Only later do the protagonists learn that the company's AI ordered the attack, rather than the CEO. But this doesn't really change the calculus of the threat itself. A private army of robots is a scary thing.

Without even getting into the question of whether it's actually profitable for a tech company to be completely staffed by robots and to build itself an underground bunker (it's probably not), the Luddite on the street and the concerned politician would be far more concerned about the company building a private army. Whether that army is led by an AI or by a human doesn't seem all that relevant.
