zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe 2023-03-01 10:46:59
>>isaacf+(OP)
This seems an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of years-old HN comments and into the public light and the mainstream.

They've achieved marvellous things at OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially given the enormous ethical implications of holding the advantage in the field they are leading.

2. IAmNot+Po1 2023-03-01 17:35:21
>>mellos+pe
It is nice to see normies noticing and caring, but the article leaves out some details that obscure comments still stubbornly bring up: like that Musk founded it as a 501(c)(3) and put Altman in charge, and only once Musk had to leave over conflicts of interest did Altman found "OpenAI LP," the for-profit workaround so they didn't have to obey those pesky charity rules. That's when they stopped releasing models and weights, and started making their transparent claims that "the most ethical way to give people access is to charge them fucktons of money and rip the API away whenever we feel like it."
3. JPKab+RV1 2023-03-01 19:54:48
>>IAmNot+Po1
When they went full-bore on "AI ethics," it was so obviously a case of legitimate concerns combined with a convenient excuse for massive corporations to claim the mantle of responsibility while keeping their models closed-source.

My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever did about democratizing the immense power of these models.

4. varenc+l53 2023-03-02 04:00:41
>>JPKab+RV1
> ... than they ever did about democratizing the immense power of these models.

I thought they were pretty explicit about the ethical argument for limiting full public release? They'd say these models are too powerful to unleash on an unsuspecting world. Google results are already SEO-spammed to death, and GPT'd SEO spam would make it far worse. Or Nigerian-prince scammers and catfishers could use ChatGPT to hold long trust-building conversations with infinitely many would-be victims, instead of being limited by the number of English-speaking human scammers they can hire. The nefarious use cases go on and on.

So I think OpenAI's ethical argument is that this approach reduces potential harm. By keeping the models private but still available behind an API, they can more gradually prepare the world for the eventual AI onslaught: think of the investment in ChatGPT detectors we've been seeing, and the general awareness that this capability now exists. Eventually models this powerful will be democratized and open-sourced, no doubt, but by keeping them locked down in the early days we'll be better prepared for all the eventual nefarious uses.

Of course, it's a bit convenient that keeping the models private and offering them as an API also grants them a huge revenue opportunity, and I'm sure that's part of the equation. But I think there's merit to the ethical rationale for limiting these models beyond pure profit-seeking.
