zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9[view] [source] 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective when it comes to cutting-edge AI, I can't help but be a bit turned off by some of the copy. It seems it goes out of its way to use purposefully exuberant language to make the risks seem even more significant, so that as an offshoot it implies the technology being worked on is extremely advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, it is just about the norm everywhere else (see Tesla with FSD, etc.)
◧◩
2. goneho+gf[view] [source] 2023-07-05 17:58:33
>>Chicag+m9
The extinction risk from unaligned superintelligent AGI is real, it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high status to take seriously. People often have an initial knee-jerk negative reaction to it (for not-crazy reasons, lots of stuff is overhyped), but that doesn't make it wrong.

It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/

It's worth looking at the underlying arguments earnestly; you can come in with an initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's also a case of a called shot, not a recent reaction to hype/new LLM capability.

Others have also changed their mind when they looked, for example:

- https://twitter.com/repligate/status/1676507258954416128?s=2...

- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...

For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...

◧◩◪
3. jonath+tU[view] [source] 2023-07-05 20:49:31
>>goneho+gf
> The extinction risk from unaligned superintelligent AGI is real, it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high status to take seriously.

No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future this all-powerful being will kill us all by somehow grabbing all power over the physical world, being clever enough to trick us until it is too late. This is literally the plot of a B-movie. Not only is there no evidence for this even existing in the near future, there’s no theoretical understanding of how one would even do this, nor why someone would even hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.

It’s bullshit. It’s pure bullshit funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!

Anyone who takes this seriously is the exact same type of rube that has fallen for apocalyptic cults for millennia.

◧◩◪◨
4. arisAl+HV[view] [source] 2023-07-05 20:54:42
>>jonath+tU
What you say is extremely unscientific. If you believe science and logic go hand in hand then:

A) We are developing AI right now and it is getting better

B) we do not know exactly how these things work, because most of them are black boxes

C) if something goes wrong, we do not know how to stop it.

The above 3 things are factual truths.

Now your only argument here could be that there is 0 risk whatsoever. That claim is totally unscientific, because you are predicting 0 risk in an unknown system that is evolving.

It's religious, yes, but vice versa: the cult of the benevolent AI god is the religion, not the other way around. There is some kind of mysterious inner working in people like you and Marc Andreessen that popularized these ideas, but pmarca is clearly money-biased here.

◧◩◪◨⬒
5. mptest+P01[view] [source] 2023-07-05 21:17:48
>>arisAl+HV
All of this discussion really makes me think of Robert Miles' "Is AI safety a Pascal's mugging?" from 4 years(!) ago [0]. All of this has been debated by AI safety researchers for years, in my layman understanding... Maybe we can look to them for insight into these questions?

[0] https://youtu.be/JRuNA2eK7w0

◧◩◪◨⬒⬓
6. lcnPyl+7v1[view] [source] 2023-07-06 00:17:57
>>mptest+P01
At this point, with so many of them disagreeing and with so many varying details, one will choose the expert insight which most closely matches their current beliefs.

I hadn’t encountered Pascal’s mugging (https://en.wikipedia.org/wiki/Pascal%27s_mugging) before, and the premise is indeed pretty apt. I think I’m on the side that it’s not one, assuming the idea is that it’s a Very Low Chance of a Very Bad Thing -- the “muggee” hands over their wallet on the off chance of the VBT because of the magnitude of its effect. Here, it seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.

But maybe some Mass Effect nonsense will happen if we develop AGI and we’ll be approached by The Intergalactic Community and have our technology advanced millennia overnight. (Sorry, that’s tongue-in-cheek but it does kinda read like Pascal’s mugging in the opposite direction; however, that’s not really what most researchers are arguing.)

◧◩◪◨⬒⬓⬔
7. arisAl+Mo2[view] [source] 2023-07-06 07:47:26
>>lcnPyl+7v1
Moreover, the most important thing for people who deny the risk is the following:

It doesn't matter at all if experts disagree. Even a 30% chance that we all die is enough to treat it as 100%. We should not care at all if 51% think it's a non-issue.

◧◩◪◨⬒⬓⬔⧯
8. dns_sn+jw2[view] [source] 2023-07-06 08:50:01
>>arisAl+Mo2
This is such a ridiculous take. Make up a hand-waving doomsday scenario, assign an arbitrarily large probability to it happening and demand that people take it seriously because we're talking about human extinction, after all. If it looks like a cult and quacks like a cult, it's probably a cult.

If nothing else, it's a great distraction from the very real societal issues that AI is going to create in the medium to long term, for example inscrutable black box decision-making and displacement of jobs.

◧◩◪◨⬒⬓⬔⧯▣
9. arisAl+Cg3[view] [source] 2023-07-06 14:04:12
>>dns_sn+jw2
What are you on about? The technology we are talking about is created by 3 labs, and all 3 assign a large probability to the risk. How can you refute that, and with what credentials or science?