zlacker

[parent] [thread] 3 comments
1. mlinse+(OP)[view] [source] 2023-05-22 18:12:21
In the scenario where the current AI boom takes us all the way to AGI in the next decade, IMO there is little downside. The risks are very large, OpenAI/Sam have the expertise, and their novel corporate structure, while far from completely removing self-centered motives, sounds better than a typical VC-funded startup that has to turn a huge profit in X years.

In the scenario where the current wave fizzles out and we have another AI winter, one risk is that we'll be left with a big regulatory apparatus that makes the next wave of innovations, the one that might actually get us all the way to an aligned-AGI utopia, near-impossible. And that regulatory apparatus will have been shaped by an org with ties to the current AI wave (imagine if the Department of AI Safety were currently staffed by people trained/invested in Expert Systems or some other old-school paradigm).

replies(1): >>__loam+Bb
2. __loam+Bb[view] [source] 2023-05-22 19:10:20
>>mlinse+(OP)
When we have 50% of AI engineers saying there's at least a 10% chance this technology can cause our extinction, it's completely laughable to think this technology can continue without a regulatory framework. I don't think OpenAI should get to decide what that framework is, but if this stuff is even 20% as dangerous as a lot of people in the field are saying it is, it obviously needs to be regulated.
replies(2): >>strbea+xp >>skille+2r5
3. strbea+xp[view] [source] [discussion] 2023-05-22 20:26:34
>>__loam+Bb
What are the scenarios in which this would cause our extinction, and how would regulation prevent those scenarios?
4. skille+2r5[view] [source] [discussion] 2023-05-24 08:56:02
>>__loam+Bb
You do realise it is possible to unplug something, right?