zlacker

1. immich+(OP)[view] [source] 2023-05-22 17:55:52
Is there any downside whatsoever for OpenAI/Sam to be the one proposing/leading the calls for regulation? Cynics will say they are trying to pull the ladder up from underneath them, so this is massively beneficial for them. What's the downside (if any)? Getting a small subset of the community mad doesn't seem like a lot of downside.
replies(4): >>A4ET8a+a1 >>adastr+T1 >>mhb+52 >>mlinse+S2
2. A4ET8a+a1[view] [source] 2023-05-22 18:02:55
>>immich+(OP)
> What's the downside (if any)?

Compliance is still a burden. Even if you write the law, the bureaucracy that was willed into existence will eventually take on a life of its own, imposing new restrictions and basically making your life miserable. Still, the profits from keeping the field restricted to a small circle of largely non-competitors help to offset that.

3. adastr+T1[view] [source] 2023-05-22 18:06:23
>>immich+(OP)
Then they’d be engineering their own eventual demise. Any regulatory capture regime ends up stalling progress and bloating incumbents until eventually a nimble competitor is able to circumvent regulators and steal their lunch money.
replies(1): >>namari+f9
4. mhb+52[view] [source] 2023-05-22 18:07:44
>>immich+(OP)
Whether there's a downside is moot since no one knows how to do any sort of regulation effectively.

That being said, I don't know why you think that only a small community will see this as self-serving. It's not subtle even though it may be unavoidable.

replies(1): >>immich+Lb
5. mlinse+S2[view] [source] 2023-05-22 18:12:21
>>immich+(OP)
In the scenario where the current AI boom takes us all the way to AGI in the next decade, IMO there is little downside. The risks are very large, OpenAI/Sam have expertise, and their novel corporate structure, while far from removing self-interested motives entirely, sounds better than a typical VC-funded startup that has to turn a huge profit in X years.

In the scenario where the current wave fizzles out and we have another AI winter, one risk is that we'll be left with a big regulatory apparatus that makes the next wave of innovations, the one that might actually get us all the way to an aligned-AGI utopia, near-impossible. And that regulatory apparatus will have been shaped by an org with ties to the current AI wave (imagine if the Department of AI Safety were currently staffed by people trained/invested in Expert Systems or some other old-school paradigm).

replies(1): >>__loam+te
6. namari+f9[view] [source] [discussion] 2023-05-22 18:39:51
>>adastr+T1
Humans are temporary beings; we only need temporary wealth.
replies(1): >>adastr+Zm
7. immich+Lb[view] [source] [discussion] 2023-05-22 18:52:07
>>mhb+52
To clarify, everyone will see this as self-serving? But I don't think most people will do anything concrete about it. At most, even the hardcore haters will just complain loudly on Twitter. How many people would purposely choose not to use an OpenAI product? Very few IMO.
replies(1): >>mhb+Xe
8. __loam+te[view] [source] [discussion] 2023-05-22 19:10:20
>>mlinse+S2
When we have 50% of AI engineers saying there's at least a 10% chance this technology can cause our extinction, it's completely laughable to think this technology can continue without a regulatory framework. I don't think OpenAI should get to decide what that framework is, but if this stuff is even 20% as dangerous as a lot of people in the field are saying it is, it obviously needs to be regulated.
replies(2): >>strbea+ps >>skille+Ut5
9. mhb+Xe[view] [source] [discussion] 2023-05-22 19:13:31
>>immich+Lb
Ah yes. I agree no one will boycott OpenAI or anything like that, and a boycott wouldn't stop its competitors anyway. That's why any optimism about the outcome seems unwarranted: all the incentives are aligned toward developing better AI as quickly as possible. It's almost certainly being developed clandestinely as well, so, arguably, it may be good for OpenAI to get there first.
10. adastr+Zm[view] [source] [discussion] 2023-05-22 19:57:23
>>namari+f9
If that were true, Sam would never have founded OpenAI. He already had generational wealth.
11. strbea+ps[view] [source] [discussion] 2023-05-22 20:26:34
>>__loam+te
What are the scenarios in which this would cause our extinction, and how would regulation prevent those scenarios?
12. skille+Ut5[view] [source] [discussion] 2023-05-24 08:56:02
>>__loam+te
You do realise it is possible to unplug something, right?