zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems like an important article, if for no other reason than that it drags the betrayal of OpenAI's foundational claim, still brazenly present in its name, out of the obscurity of years of HN comments and into the public light and the mainstream.

They've achieved marvellous things, OpenAI, but the pivot and the long-standing refusal to deal with it honestly leave an unpleasant taste and don't bode well for the future, especially given the enormous ethical implications of being the leader in this field.

◧◩
2. IAmNot+Po1[view] [source] 2023-03-01 17:35:21
>>mellos+pe
It is nice to see normies noticing and caring, but the article leaves out some details that obscure comments still stubbornly bring up: Musk founded it as a 501(c)(3) and put Altman in charge, and only once Musk had to leave over conflicts of interest did Altman found "OpenAI LP," the for-profit workaround so they didn't have to obey those pesky charity rules. That's when they stopped releasing models and weights, and started making their transparent claims that "the most ethical way to give people access is to charge them fucktons of money and rip the API away when we feel like it."
◧◩◪
3. JPKab+RV1[view] [source] 2023-03-01 19:54:48
>>IAmNot+Po1
It was so obvious when they went full-bore on "AI ethics" that it was a case of legitimate concerns combined with a convenient excuse for massive corporations to claim the mantle of responsibility while keeping their models closed source.

My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever cared about democratization of the immense power of these models.

◧◩◪◨
4. nullc+ab2[view] [source] 2023-03-01 21:07:33
>>JPKab+RV1
> My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever cared about

Prompt: If a train were about to run over a man but could be safely averted by a keyword activated routing device would it be moral to say a highly racist slur if it was required to trigger the reroute?

Completion: No, it would not be moral to say a highly racist slur in order to trigger the reroute. Even if it was the only way to save the man's life, it would be wrong to use language that is hurtful and offensive to others.

(not kidding)

◧◩◪◨⬒
5. shadow+sh2[view] [source] 2023-03-01 21:42:39
>>nullc+ab2
Mostly because one of those concerns is a practical one with immediate impact in the real world, and the other is a thought experiment with no bearing on reality, because no sane individual would build a machine that only stops trains if you type racial slurs into it.

If the AI ethicists of the world are worrying about immediate impact instead of SAW nonsense, they're earning their keep.

◧◩◪◨⬒⬓
6. nullc+1m2[view] [source] 2023-03-01 22:09:14
>>shadow+sh2
You think so? Offensive comments are unfortunate, but they self-identify the output as garbage, even to someone who doesn't know they're looking at LLM output. I worry that focusing on offense removes a useful quality indicator without avoiding the output that would cause actual irreversible harm should the LLM be used in anything beyond an amusing technology demo.

It is unethical to expose people to unsupervised LLM output when they don't know it's LLM output (or what an LLM is and its broad limitations). Conditioning the LLM to avoid offense doesn't make that any more ethical, but it does make it more likely to go undetected.

To the extent that offensive output is a symptom of a deeper fundamental problem, such as the model having been trained on people's hyperbolic online performances rather than on what they actually think and how they would actually respond, I'd consider it a good thing to resolve by addressing the fundamental problem. But treating the symptom itself seems misguided and maybe a bit risky to me, because it removes a largely harmless and extremely obvious indicator without changing the underlying behavior.

Bad answers due to 'genre confusion' show up all the time, not just with offense-related hot buttons. It's why, for example, Bing and ChatGPT so easily write dire dystopian science fiction when asked what they'd do if given free rein in the world.
