zlacker

OpenAI's Long-Term AI Risk Team Has Disbanded

submitted by robbie+(OP) on 2024-05-17 15:16:38 | 100 points 57 comments

1. netsec+54 2024-05-17 15:41:23
>>robbie+(OP)
https://archive.is/gEjjA
8. joaogu+C5 2024-05-17 15:50:35
>>398968+F4
The ads are definitely coming given their pitch deck for the data partnerships https://www.adweek.com/media/openai-preferred-publisher-prog...
15. square+68 2024-05-17 16:04:56
>>Hasu+U5
> but when the company says, "We're getting rid of the team that makes sure we don't kill everyone", there is a message being sent

Hard not to imagine a pattern if one considers what they did a few months ago:

https://www.cnbc.com/2024/01/16/openai-quietly-removes-ban-o...

21. reduce+ua 2024-05-17 16:18:53
>>mgdev+B4
Did you read Jan Leike's resignation? https://x.com/janleike/status/1791498174659715494

I hope others see that there are two extremely intelligent sides here: one has mega $$ to earn, and the other is pleading that there are dangers ahead and that we shouldn't follow the money and fame.

This is climate change and the oil companies all over again, and just like then, the oil companies are winning.

Fundamentally, many people are at the first stage: denial. Staring down our current trajectory toward AGI is one of the darkest realities to imagine, and that is not pleasant to grapple with.

22. reduce+Ka 2024-05-17 16:20:35
>>goeied+n5
https://scholar.google.co.uk/citations?user=beiWcokAAAAJ&hl=...
24. reduce+Zb 2024-05-17 16:27:35
>>nprate+J9
Ilya and Jan Leike[0] resigned (or were fired) because they believed their jobs were a temporary marketing expense? Or maybe you think you understand the risks of AGI better than they do, the creators of the frontier models?

Do you think that is a coherent worldview, compared to the other one staring you in the face? I'll leave it to the reader whether they want to believe this conspiratorial take, conveniently in line with the profit motive, instead of the scientists saying:

“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

[0] https://scholar.google.co.uk/citations?user=beiWcokAAAAJ&hl=...

26. mgdev+wd 2024-05-17 16:34:54
>>reduce+ua
I hadn't until now, but it's the other failure mode I mentioned in another fork [1] of this thread:

> A small, central R&D team may work with management to set the bar, but they can't be responsible for mitigating the risk on the ground - and they shouldn't be led to believe that that is their job. It never works, and creates bad team dynamics. Either the central team goes too far, or they feel ignored. (See: security, compliance.)

[1]: >>40391283

31. encode+wg 2024-05-17 16:53:09
>>mgdev+B4
Here's the former team lead's take. He says they couldn't get the compute resources to do their job.

https://x.com/janleike/status/1791498174659715494

40. dole+7t 2024-05-17 18:08:45
>>sklarg+D4
"When you shoot at the king and miss, things tend to get awkward."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai...

41. ChrisA+lt 2024-05-17 18:10:04
>>robbie+(OP)
Related:

Jan Leike's OpenAI departure statement

>>40391412

49. reduce+3a1 2024-05-17 23:55:10
>>tim333+GV
> We could always stop paying for the servers, or their electricity.

This is satire, right? No one saying this, or proposing an "off button", has thought this difficult problem through for longer than 30 minutes.

https://youtu.be/_8q9bjNHeSo?si=a7PAHtiuDIAL2uQD&t=4817

"Can we just turn it off?"

"It has thought of that. It will not give you a sign that makes you want to turn it off before it is too late to do that."
