zlacker

[parent] [thread] 6 comments
1. sva_+(OP)[view] [source] 2023-05-16 15:02:08
It seems pretty clear at this point that OpenAI etc. will lobby to make it more difficult for new companies/entities to join the AI space, all in the name of 'safety'. They're trying to make the case that everyone should use AI through their APIs so that they can keep things in check.

Conveniently, this also helps them build a monopoly. It is pretty aggravating that they're bastardizing and abusing terms like 'safety' and 'democratization' while doing this. I hope they'll fail in their attempts, or that the competition rolls over them sooner rather than later.

I personally think that the greatest threat in these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide poor and rich, and thus threaten the order of our society.

replies(3): >>macrol+xO >>rurp+L81 >>sgu999+sw1
2. macrol+xO[view] [source] 2023-05-16 18:44:48
>>sva_+(OP)
Nb. Altman wants lenient regulations for companies that might leverage OpenAI's foundational models.
3. rurp+L81[view] [source] 2023-05-16 20:19:04
>>sva_+(OP)
My biggest concern with AI is that it could be controlled by a group of oligarchs who care about nothing more than enriching themselves. A "Linux" version of AI that anyone can use, experiment with, and build off of freely would be incredible. A heavily restricted, policed and surveilled system controlled by ruthlessly greedy companies like OpenAI and Microsoft sounds dystopian.
replies(2): >>sva_+re1 >>boppo1+Ea3
4. sva_+re1[view] [source] [discussion] 2023-05-16 20:48:46
>>rurp+L81
> A "Linux" version of AI that anyone can use, experiment with, and build off of freely would be incredible.

That should be the goal.

5. sgu999+sw1[view] [source] 2023-05-16 22:33:20
>>sva_+(OP)
> I personally think that the greatest threat in these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide poor and rich, and thus threaten the order of our society.

Me too; in comparison, all the other potential threats discussed here feel mostly secondary to me. I also suspect that at the point where these AIs reach a more AGI level, the big players who have them will just not provide any kind of access altogether, and instead use them to churn out an infinite amount of money-making applications.

6. boppo1+Ea3[view] [source] [discussion] 2023-05-17 13:33:10
>>rurp+L81
>A "Linux" version of AI

The issue here is that a 'Linux' of AI would be happy to use the N-word and stuff like that. It's politically untenable.

replies(1): >>rurp+3G3
7. rurp+3G3[view] [source] [discussion] 2023-05-17 15:44:12
>>boppo1+Ea3
Yep, just like a keyboard will let someone type whatever bad words they want to, which I think is much better than the alternative. Imagine if early PCs had been locked down as to which words could be typed, with a million restrictions on which lines of code could be executed. It would have been a dismal invention.

I do think you're probably right about AI though. Too many influential groups are going to get too mad about the words an open model will output. Only allowing locked-down models is going to severely limit their usefulness for all sorts of novel creative and productive use cases.
