zlacker

[parent] [thread] 2 comments
1. adamsm+(OP)[view] [source] 2023-03-01 17:10:39
They are releasing powerful AI tools at an alarming rate, before safety researchers — and I mean real safety researchers — have a chance to understand their implications. They are generating an enormous amount of buzz and hype, which is fueling a coming AI arms race that is extremely dangerous. The Control Problem is very real and becoming more pressing as things accelerate. Sam has recently given lip service to caring about the problem, but OpenAI's actions suggest it's not a major priority. There was a hint that they cared when they thought GPT-2 was too dangerous to release publicly, but at this point, if they were serious about safety, no model past ChatGPT and Bing would be released to the public at all, full stop.

https://openai.com/blog/planning-for-agi-and-beyond/

Based on Sam's statement, they seem to be making a bet that accelerating progress on AI now will help solve the Control Problem faster in the future. This strikes me as an extremely dangerous bet, because if they are wrong, they are substantially reducing the time the rest of the world has to solve the problem — potentially closing that window enough that it won't be solved in time at all, and then foom.

replies(1): >>Fillig+Ff
2. Fillig+Ff[view] [source] 2023-03-01 18:03:23
>>adamsm+(OP)
> Based on Sam's statement they seem to be making a bet that accelerating progress on AI now will help to solve the Control problem faster in the future but this strikes me as an extremely dangerous bet to make because if they are wrong they are reducing the time the rest of the world has to solve the problem substantially and potentially closing that opening enough that it won't be solved in time at all and then foom.

Moreover, now that they've started the arms race, they can't stop. There are too many other companies joining in, and I don't think it's plausible they'll all hold to a truce even if OpenAI wants to.

I assume you've read Scott Alexander's take on this? https://astralcodexten.substack.com/p/openais-planning-for-a...

replies(1): >>adamsm+pO
3. adamsm+pO[view] [source] [discussion] 2023-03-01 20:43:05
>>Fillig+Ff
Yeah, a significant problem is that many (most?) people, including those in the field, until very recently thought AIs with these capabilities were many decades if not centuries away. Now that people can see the light at the end of the tunnel, there is a massive geopolitical and economic incentive to be the first to create one. We think OpenAI vs. DeepMind vs. Anthropic vs. etc. is bad, but wait until it's US vs. China and we stop talking about billion-dollar investments in AI research and get into the trillions.

Scott's Exxon analogy is almost too bleak to really believe. I hope OpenAI is just ignorant and not intentionally evil.
