zlacker

[parent] [thread] 10 comments
1. devind+(OP)[view] [source] 2022-05-23 22:04:15
Good lord. Withheld? They've published their research; they just aren't making the model available immediately, waiting until they can re-implement it so that you don't get racial slurs popping up when you ask for a cup of "black coffee."

>While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes

Tossing that stuff when it comes up in a research environment is one thing, but Google clearly wants to turn this into a product, used all over the world by a huge range of people. If the dataset has problems (and why wouldn't it?), it is perfectly rational to want to wait and re-implement it with a better one. DALL-E 2 was trained on a curated dataset so it couldn't generate sex or gore. Others sanitize their inputs too, and have for a long time. It is the only thing that makes sense for a company looking to commercialize a research project.
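
To make "sanitizing their inputs" concrete, here is a minimal sketch of that filtering step in Python. Everything in it is a placeholder of my own invention (the blocklist, caption_is_toxic, image_is_nsfw); a real pipeline would use trained classifiers, not a keyword list, and nothing here reflects what Google or OpenAI actually run:

    # Drop (image, caption) pairs that trip an NSFW or toxicity check
    # before training. Both checks below are trivial stand-ins.

    CAPTION_BLOCKLIST = {"example_slur"}  # placeholder; real lists are far larger

    def caption_is_toxic(caption: str) -> bool:
        # Stand-in for a learned toxicity classifier over text.
        return bool(set(caption.lower().split()) & CAPTION_BLOCKLIST)

    def image_is_nsfw(image_bytes: bytes) -> bool:
        # Stand-in for a learned NSFW classifier over pixels.
        return False

    def keep(sample: dict) -> bool:
        # A pair survives only if both the caption and the image pass.
        return not caption_is_toxic(sample["caption"]) and not image_is_nsfw(sample["image"])

    raw_dataset = [{"caption": "a cup of black coffee", "image": b""}]
    clean_dataset = [s for s in raw_dataset if keep(s)]

The plumbing is the easy part; the difficulty lives in the classifiers and where you set the thresholds, which is presumably why a re-implementation takes time.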

This has nothing to do with "inability to cope" and the implied woke mob yelling about some minor flaw. It's about building a tool that doesn't bake in serious and avoidable problems.

replies(1): >>concor+R1
2. concor+R1[view] [source] 2022-05-23 22:13:39
>>devind+(OP)
I wonder why they don't like the idea of autogenerated porn... They're already putting most artists out of a job, why not put porn stars out of a job too?
replies(3): >>notaha+x5 >>renewi+y5 >>colinm+Ba
3. notaha+x5[view] [source] [discussion] 2022-05-23 22:37:32
>>concor+R1
There's definitely a market for autogenerated porn. But porn showing up in a Google-branded model for general use, in response to prompts that aren't necessarily intended to be pornographic, is another matter...
replies(1): >>astran+C9
4. renewi+y5[view] [source] [discussion] 2022-05-23 22:37:53
>>concor+R1
Copenhagen ethics (used by most people) require that all negative outcomes of a thing X become yours if you interact with X. It is not sensible to interact with high-negativity things unless you are single-issue. It is logical for Google not to interact with porn where possible.
replies(1): >>dragon+I6
5. dragon+I6[view] [source] [discussion] 2022-05-23 22:45:24
>>renewi+y5
> Copenhagen ethics (used by most people)

The idea that most people use any coherent ethical framework (even something as high-level and nearly content-free as Copenhagen), much less a particular coherent ethical framework, is, well, not well supported by the evidence.

> require that all negative outcomes of a thing X become yours if you interact with X. It is not sensible to interact with high negativity things unless you are single-issue.

The conclusion in the final sentence only makes sense if you use “interact” in a way that misdescribes the Copenhagen interpretation of ethics, because the original description is only correct if observation counts as an interaction. By the time you have noted a thing is “high-negativity”, you have already observed it and acquired responsibility for its continuation under the Copenhagen interpretation; you cannot avoid that by choosing not to interact once you have observed it.

replies(2): >>renewi+q7 >>Poigna+A81
6. renewi+q7[view] [source] [discussion] 2022-05-23 22:49:30
>>dragon+I6
I'm sure you are capable of steelmanning the argument.
replies(2): >>dragon+Ln >>quickt+OV
7. astran+C9[view] [source] [discussion] 2022-05-23 23:08:40
>>notaha+x5
That’s a difficult product because porn is very personalized, and if the output is just a little off in latent space it’s going to turn you off.

Also, people have been commenting on the assumption that Google doesn’t want to offend its users or non-users, but it also doesn’t want to offend its own staff. If you run a porn company, you need to hire people who are okay with that from the start.

8. colinm+Ba[view] [source] [discussion] 2022-05-23 23:14:38
>>concor+R1
Same reason Pornhub is a top-ten most-visited website but barely makes any money: being associated with porn is not good for business.
9. dragon+Ln[view] [source] [discussion] 2022-05-24 01:06:37
>>renewi+q7
The problem is that, were I inclined to do that, anything I would adjust to make it more true also makes it less relevant.

“There exists an ethical framework (not the Copenhagen interpretation) to which some minority of the population adheres, in which trying and failing to correct a problem incurs retroactive blame for the existence of the problem, but seeing it and just saying ‘sucks, but not my problem’ does not,” is probably true, but not very relevant.

It's logical for Google to avoid involvement with porn, and to be seen doing so, because even though porn is popular, involvement with it is politically unpopular, and Google’s business interest is in not making itself more attractive as a political punching bag. The popularity of Copenhagen ethics (or their distorted cousins) doesn't really play into it, just self-interest.

10. quickt+OV[view] [source] [discussion] 2022-05-24 07:09:56
>>renewi+q7
Maybe: most people's morals require that all negative outcomes of a thing X become yours if you interact with X.

I am not sure what the evidence is, but that seems almost right.

Except for, for example, a story I read where a couple lost their housing deposit due to a payment-timing issue. They used a lawyer and weren't doing anything “fancy” like buying via a holding company. They interacted with “buying a house”, so is this just tough shit because they interacted with X?

That sounds like the original Bitcoin “not your keys, not your coins” kind of morality.

I don’t think I can figure out the steel man.

11. Poigna+A81[view] [source] [discussion] 2022-05-24 09:15:10
>>dragon+I6
> The idea that most people use any coherent ethical framework (even something as high level and nearly content-free as Copenhagen) much less a particular coherent ethical framework is, well, not well supported by the evidence.

I don't have any evidence, but my personal experience is that it feels correct, at least on the internet.

People seem to have a "you touch it, you take responsibility for it" mindset regarding ethical issues. I think it's pretty reasonable to assume that Google execs are thinking "If anything bad happens because of AI, we'll be blamed for it".
