zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know what exactly happened, and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. They never had a chance, though. I think only a minority of the general public truly cares about AI safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to consider the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before. Maybe it isn't as lacking in substance as I thought. Hopefully, there won't come a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

◧◩
2. swatco+69[view] [source] 2023-11-22 07:00:22
>>shubha+B7
> If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

FWIW, that's called zealotry, and people do a lot of dramatic, disruptive things in its name. It may be rightly aimed and save the world (or whatever you care about), but more often it's a signal to reflect hard on whether you, individually, have truly found yourself at the make-or-break nexus of human existence. The answer seems to be "no" most of the time.

◧◩◪
3. mlyle+La[view] [source] 2023-11-22 07:11:27
>>swatco+69
Your comment perfectly justifies never worrying at all about the potential for existential or major risks; after all, one would be wrong most of the time and just engaging in zealotry.
◧◩◪◨
4. Random+Wb[view] [source] 2023-11-22 07:18:05
>>mlyle+La
Probably not a bad heuristic: unless proven, don't assume existential risk.
◧◩◪◨⬒
5. altpad+kc[view] [source] 2023-11-22 07:21:52
>>Random+Wb
Dude, just think about that for a moment. By definition, if an existential risk has been proven, it's already too late.
◧◩◪◨⬒⬓
6. Random+fd[view] [source] 2023-11-22 07:28:50
>>altpad+kc
Totally not true: take nuclear weapons, for example, or a large meteorite impact.
◧◩◪◨⬒⬓⬔
7. ludwik+ai[view] [source] 2023-11-22 08:06:15
>>Random+fd
So what do you mean when you say that the "risk is proven"?

If by "the risk is proven" you mean there's more than a 0% chance of an event happening, then there are almost an infinite number of such risks. There is certainly more than a 0% risk of humanity facing severe problems with an unaligned AGI in the future.

If it means the event is certain to happen (100%), then neither a meteorite impact (of a magnitude harmful to humanity) nor the actual use of nuclear weapons falls into this category.

If you're referring only to risks of events that have occurred at least once in the past (as your examples suggest), then we would be unprepared for any new risk.

In my opinion, it's much more complicated. There is no clear-cut category of "proven risks" that allows us to disregard other dangers and justifiably see those concerned about them as crazy radicals.

We must assess each potential risk individually, estimating both the probability of the event (which in almost all cases will be neither 100% nor 0%) and the potential harm it could cause. Different people naturally arrive at different estimates, which leads to different priorities about which risks to prevent.
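To make that concrete, here's a toy expected-harm comparison; every probability and damage figure below is an invented placeholder purely for illustration, not an actual estimate:

    # Toy sketch of "probability x harm" risk ranking.
    # All numbers are made-up assumptions for illustration only.
    risks = {
        "asteroid impact": {"p": 1e-7, "harm": 8e9},  # annual probability, lives at stake
        "nuclear war":     {"p": 1e-3, "harm": 1e9},
        "unaligned AGI":   {"p": 1e-2, "harm": 8e9},
    }

    for name, r in risks.items():
        expected = r["p"] * r["harm"]  # expected harm per year under these assumptions
        print(f"{name}: expected harm ~ {expected:,.0f} lives/year")

Change the inputs and the ordering changes, which is exactly why people end up prioritising different risks.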

◧◩◪◨⬒⬓⬔⧯
8. Random+Dj[view] [source] 2023-11-22 08:17:32
>>ludwik+ai
No, I mean that there is a proven way for the risk to materialise, not just some tall tale. Tall tales might(!) justify some caution, but they are a very different class of issue. Biological risks are perhaps in the latter category.

Also, since we don't know the probabilities, I don't think they are a useful metric. Made-up numbers don't help there.

Edit: I would encourage people to study some classic Cold War thinking, because it relied little on probabilities and instead focused on avoiding situations where stability is lost, leading to nuclear war (a known existential risk).

◧◩◪◨⬒⬓⬔⧯▣
9. ludwik+Vq[view] [source] 2023-11-22 09:17:53
>>Random+Dj
"there is a proven way for the risk to materialise" - I still don't know what this means. "Proven" how?

Wouldn't your edit apply to any not-impossible risk (i.e., > 0% probability)? For example, "trying to avoid situations where control over AGI is lost, leading to unaligned AGI (a known existential risk)"?

You cannot avoid estimating how likely the risk is to materialise, in addition to whether it is "known".

[go to top]