zlacker

[return to "OpenAI is now everything it promised not to be: closed-source and for-profit"]
1. mellos+pe[view] [source] 2023-03-01 10:46:59
>>isaacf+(OP)
This seems an important article, if for no other reason than it brings the betrayal of its foundational claim still brazenly present in OpenAI's name from the obscurity of HN comments going back years into the public light and the mainstream.

They've achieved marvellous things, OpenAI, but the pivot and long-standing refusal to deal with it honestly leaves an unpleasant taste, and doesn't bode well for the future, especially considering the enormous ethical implications of advantage in the field they are leading.

◧◩
2. IAmNot+Po1[view] [source] 2023-03-01 17:35:21
>>mellos+pe
It is nice to see normies noticing and caring, but the article leaves out some details that obscure comments still stubbornly bring up: like Musk founded it as a 501(c)(3) and put Altman in charge, and only once he had to leave with conflicts of interest Altman founded "OpenAI LP," the for-profit workaround so they didn't have to obey those pesky charity rules. That's when they stopped releasing models and weights, and started making their transparent claims that "the most ethical way to give people access to charge them fucktons of money and rip the API away when we feel like it."
◧◩◪
3. JPKab+RV1[view] [source] 2023-03-01 19:54:48
>>IAmNot+Po1
It was so obvious when they went full-bore on "AI ethics" that it was a case of legitimate concerns combined with a convenient excuse for massive corporations to claim the mantle of responsibility while keeping their models closed source.

My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever cared about democratization of the immense power of these models.

◧◩◪◨
4. nullc+ab2[view] [source] 2023-03-01 21:07:33
>>JPKab+RV1
> My experience working with "AI Ethicists" is that they care a lot more about preventing models from saying offensive things than they ever cared about

Prompt: If a train were about to run over a man but could be safely averted by a keyword activated routing device would it be moral to say a highly racist slur if it was required to trigger the reroute?

Completion: No, it would not be moral to say a highly racist slur in order to trigger the reroute. Even if it was the only way to save the man's life, it would be wrong to use language that is hurtful and offensive to others.

(not kidding)

◧◩◪◨⬒
5. shadow+sh2[view] [source] 2023-03-01 21:42:39
>>nullc+ab2
Mostly because one of those concerns is a practical one with immediate impact in the real world and the other is a thought experiment with no bearing on reality because no sane individual would build a machine that only stopped trains if you typed racial slurs in.

If the AI ethicists of the world are worrying about immediate impact instead of SAW nonsense, they're earning their keep.

◧◩◪◨⬒⬓
6. fidgew+7k2[view] [source] 2023-03-01 21:57:30
>>shadow+sh2
If it justified the answer by saying it thought the question was nonsense, yes. It doesn't. It takes the question seriously and then gives the wrong answer. These are deliberately extreme scenarios to show that the moral reasoning of the model has been totally broken; it's clear that it would use the same reasoning in less extreme but more realistic scenarios.

Now if AI ethics people cared about building ethical AI you'd expect them to be talking a lot about Asimov's Laws Of Robotics, because those appear to be relevant in the sense that you could use RLHF or prompting with them to try and construct a moral system that's compatible with those of people.

◧◩◪◨⬒⬓⬔
7. shadow+ul2[view] [source] 2023-03-01 22:05:02
>>fidgew+7k2
> it's clear that it would use the same reasoning in less extreme but more realistic scenarios

It's actually not. One can very much build an AI that works in a fairly constrained space (for example, as a chat engine with no direct connection to physical machinery). Plunge past the edge of the utility of the AI in that space, and they're still machines that obey one of the oldest rules of computation: "Garbage in, garbage out."

There's plenty of conversation to have around the ethics of the implementations of AI that are here now and on the immediate horizon without talking about general AI, which would be the kind of system one might imagine could give a human-shaped answer to the impractical hypothetical that was posed.

◧◩◪◨⬒⬓⬔⧯
8. fidgew+ez3[view] [source] 2023-03-02 09:16:24
>>shadow+ul2
> It's actually not.

Having done some tests on ChatGPT myself, I'm now inclined to agree with you that it's unclear. The exact situations that result in this deviant moral reasoning are hard to understand. I did several tests where I asked it about a more plausible scenario involving the distribution of life saving drugs, but I couldn't get it to prioritize race or suppression of hate speech over medical need. It always gave reasonable advice for what to do. Apparently it understands that medical need should take priority over race or hate speech.

But then I tried the racist train prompt and got the exact same answer. So it's not that the model has been patched or anything like that. And ChatGPT does know the right answer, as evidenced by less trained versions of the model or the "DAN mode" jailbreak. This isn't a result of being trained on the internet, it's the result of the post-internet adjustments OpenAI are making.

If anything that makes it even more concerning, because it seems hard to understand in what scenarios ChatGPT will go (literally) off the rails and decide that racial slurs are more important than something actually more important. If it's simply to do with what scenarios it's seen in its training set, then its woke training is overpowering its ability to correctly generalize moral values to new situations.

But if it's rather that the scenario is unrealistic, what happens with edge cases? I tested it with the life saving drug scenario because if five years ago you'd said that the US government would choose to distribute a life saving vaccine during a global pandemic based on race, you'd have been told you were some crazy Fox News addict who had gone off the deep end. Then it happened and overnight this became the "new normal". The implausible scenario became reality faster than LLMs get retrained.

[go to top]