zlacker

[parent] [thread] 2 comments
1. mindsl+(OP)[view] [source] 2023-05-22 18:43:24
Giving Altman the benefit of the doubt would be a lot more plausible if OpenAI actually released their products rather than presenting them as locked-down web services [0], and if they didn't continually use the alarmist word "safety" to describe things like preventing an LLM from writing anything that could cause political controversy. They're so obviously neglecting the larger picture in favor of their own business interests that it's impossible to read these grandiose calls for regulation as anything but a play for regulatory capture.

[0] I can't even play with ChatGPT any more, even though I had acquiesced to giving them my phone number. Now they've seemingly added IP-based discrimination as well, in line with the common lust for ever more control.

replies(1): >>hammyh+47
2. hammyh+47[view] [source] 2023-05-22 19:24:18
>>mindsl+(OP)
But surely, if safety is an issue, releasing them in the capacity that you describe would be a far greater problem?
replies(1): >>mindsl+6e
3. mindsl+6e[view] [source] [discussion] 2023-05-22 20:01:55
>>hammyh+47
Releasing their models for direct use would make any actual problems present themselves sooner, before more advanced models are created that intensify those problems. Right now the stance is basically full speed ahead on creating the thing that might be a problem, while they "solve" it with bespoke content filters and user bans. That is the setup for green-lighting problematic-but-profitable uses - i.e., bog-standard corporate behavior.