zlacker

[parent] [thread] 11 comments
1. optymi+(OP)[view] [source] 2024-02-14 13:59:41
I never bought into the ethical objections. It's trained on publicly available data, as far as I understand. What's the most unethical thing it can do?

My experience is limited. I got it to berate me with a jailbreak. I asked it to do so, so the onus is on me to be able to handle the response.

I'm trying to think of unethical things it can do that are not in the realm of "you asked it for that information, just as you would have searched on Google", but I can only think of things like "how to make a bomb", suicide-related instructions, etc., which I would place in the "sharp knife" category: one has to be able to handle it before using it.

Meanwhile, it's been increasingly giving the canned "As an AI language model ..." response for stuff that's not even unethical, just dicey.

replies(1): >>al_bor+65
2. al_bor+65[view] [source] 2024-02-14 14:27:53
>>optymi+(OP)
One recent example in the news was the AI generated p*rn of Taylor Swift. From what I read, the people who made it used Bing, which is based on OpenAI’s tech.
replies(2): >>loboci+n6 >>zingel+td1
3. loboci+n6[view] [source] [discussion] 2024-02-14 14:35:27
>>al_bor+65
This is more sensationalism than ethical issue. Whatever they did they could do, and probably do better, using publicly available tools like Stable Diffusion.
replies(1): >>majora+tb
4. majora+tb[view] [source] [discussion] 2024-02-14 15:00:41
>>loboci+n6
Or just Photoshop. The only thing these tools did was make it easier. I don't think the AI aspect adds anything to this comparison.
replies(1): >>Anon84+Gd
5. Anon84+Gd[view] [source] [discussion] 2024-02-14 15:11:45
>>majora+tb
An argument can be made that "more is different." By making it easier to do something, you're increasing the supply, possibly even taking something that used to be a rare edge case and making it a common occurrence, which can pose problems in and of itself.
replies(2): >>stickf+5B >>loboci+0lh
6. stickf+5B[view] [source] [discussion] 2024-02-14 16:57:06
>>Anon84+Gd
Put in a different context: The exploits are out there. Are you saying we shouldn't publish them?

Deepfakes are going to become a concern of everyday life whether you stop OpenAI from generating them or not. The cat is out of the proverbial bag. We as a society need to adjust to treating this sort of content skeptically, and I see no more appropriate way than letting a bunch of fake celebrity porn circulate.

What scares me about deepfakes is not the porn, it's the scams. The scams can actually destroy lives. We need to start ratcheting up social skepticism ASAP.

replies(1): >>vonjui+dI
7. vonjui+dI[view] [source] [discussion] 2024-02-14 17:31:27
>>stickf+5B
You probably don't care about the porn because, I'm assuming, you're a man, but it can ruin lives too.
replies(1): >>stickf+ch2
8. zingel+td1[view] [source] [discussion] 2024-02-14 20:00:50
>>al_bor+65
You're talking like it's something bad. Kids are learning AI and computing instead of drugs and guns, and nobody is hurt.
9. stickf+ch2[view] [source] [discussion] 2024-02-15 02:52:02
>>vonjui+dI
It can only ruin lives if people believe it's real. Until recently, that was a reasonable belief; now it's not. People will catch on and society will adapt.

It's not like the technology is going to disappear.

replies(1): >>vonjui+tm2
10. vonjui+tm2[view] [source] [discussion] 2024-02-15 03:48:55
>>stickf+ch2
I mean, the same applies to scams: scams only work if people believe them.
replies(1): >>stickf+dZ3
11. stickf+dZ3[view] [source] [discussion] 2024-02-15 17:01:28
>>vonjui+tm2
Right - as I said, we need to ramp up social skepticism, fast. Not as in some kind of utopian vision, but "the amount of fake information will be moving from a trickle to a flood soon, there's nothing you can do about that, so brace yourselves".

The specific policies of OpenAI or Google or whatnot are irrelevant. The technology is out of the bag.

12. loboci+0lh[view] [source] [discussion] 2024-02-20 00:13:06
>>Anon84+Gd
It's more dangerous if it's uncommon. It's knowledge that protects people, not a bunch of annoying "AI safety" "researchers" selling the lie that "AI is safe". Truth is, those morons only have a job because they help companies save face and create a moat around this new technology, where new competitors will be required to have "AI safety" teams & solutions. What has "AI safety" achieved so far besides making models dumber and more annoying to use?