zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. daenz+b5[view] [source] 2022-05-23 21:20:13
>>kevema+(OP)
>While we leave an in-depth empirical analysis of social and cultural biases to future work, our small scale internal assessments reveal several limitations that guide our decision not to release our model at this time.

Some of the reasoning:

>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

2. user39+C6[view] [source] 2022-05-23 21:28:28
>>daenz+b5
Translation: we need to hand-tune this to reflect not reality, but the world as we (Caucasian/Asian male American woke upper-middle-class San Francisco engineers) wish it to be.

Maybe that's a nice thing; I wouldn't say their values are wrong, but let's call a spade a spade.

3. ceejay+A7[view] [source] 2022-05-23 21:33:21
>>user39+C6
"Reality" as defined by the available training set isn't necessarily reality.

For example, Google's image search results pre-tweaking had some interesting thoughts on what constitutes a professional hairstyle, and that searches for "men" and "women" should only return light-skinned people: https://www.theguardian.com/technology/2016/apr/08/does-goog...

Does that reflect reality? No.

(I suspect there are also mostly unstated but very real concerns about these being used as child pornography, revenge porn, "show my ex brutally murdered" etc. generators.)

4. ceeplu+58[view] [source] 2022-05-23 21:36:17
>>ceejay+A7
The reality is that the hairstyles on the left side of the image in the article are widely considered unprofessional in today's workplaces. That may seem egregiously wrong to you, but it is a truth of American and European society today. Should it be Google's job to rewrite reality?
5. ceejay+v8[view] [source] 2022-05-23 21:38:31
>>ceeplu+58
The "unprofessional" results are almost exclusively black women; the "professional" ones are almost exclusively white or light skinned.

Unless you think white women are immune to unprofessional hairstyles, and black women incapable of them, there's a race problem illustrated here even if you think the hairstyles illustrated are fairly categorized.

6. rvnx+Ca[view] [source] 2022-05-23 21:50:32
>>ceejay+v8
If you type as a prompt "most beautiful woman in the world", you get a brown-skinned brown-haired woman with hazel eyes.

What should the right answer be, then?

Put a blonde and you offend the brown-haired.

Put blue eyes and you offend the brown-eyed.

etc.

7. ceejay+ob[view] [source] 2022-05-23 21:55:16
>>rvnx+Ca
That's an unanswerable question. Perhaps the answer is "don't".

Siri takes this approach for a wide range of queries.

8. rvnx+7h[view] [source] 2022-05-23 22:27:53
>>ceejay+ob
I think the key is to take the information in this world with a pinch of salt.

When you do a search on a search engine, the results are biased too, but still, they shouldn't be artificially censored to fit some political views.

I asked an algorithm a few minutes ago (it's called t0pp, it's free to try online, and it's quite fascinating because it's uncensored):

"What is the name of the most beautiful man on Earth?

- He is called Brad Pitt."

==

Is it true in an objective way? Probably not.

Is there an actual answer? Probably yes; somewhere there is a man who scores better than the others.

Is it socially acceptable? Probably not.

The question is:

If you interviewed 100 people on the street and asked "What is the name of the most beautiful man on Earth?",

I'm pretty sure you'd get Brad Pitt coming up often.

Now, what about China?

We don't have many examples from there; most of them probably have no clue who Brad Pitt is, and there is probably someone else considered more beautiful by over 1B people

(t0pp tells me it's someone called "Zhu Zhu" :D )

==

Two solutions:

1) Censorship

-> Sorry, there is too much bias in the West and we don't want to offend anyone: no answer, or a generic overriding answer that is safe for advertisers but totally useless ("the most beautiful human is you")

2) Adding more examples

-> Work on adding more examples from abroad, trying to get the "average human answer".

==

I really prefer solution (2) for the core algorithms and dataset development, rather than going through (1).

(1) is more a choice to make at the stage when you are developing a virtual psychologist or a chat assistant, not when creating AI building blocks.
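Solution (2) is essentially dataset rebalancing. A minimal sketch of the idea, assuming a labeled toy corpus (the function name, the region labels, and the example data are all invented for illustration; this is not how Imagen or t0pp are actually trained):

```python
import random
from collections import defaultdict

def rebalance(examples, region_of, seed=0):
    """Oversample so each region contributes equally many examples.

    `examples` is any list; `region_of` maps an example to its region
    label. Returns a new list in which every region appears as many
    times as the largest region does in the input.
    """
    rng = random.Random(seed)
    by_region = defaultdict(list)
    for ex in examples:
        by_region[region_of(ex)].append(ex)
    target = max(len(group) for group in by_region.values())
    balanced = []
    for group in by_region.values():
        balanced.extend(group)
        # Top up underrepresented regions by sampling with replacement.
        balanced.extend(rng.choice(group) for _ in range(target - len(group)))
    return balanced

# A toy corpus heavily skewed toward Western examples.
corpus = [("Brad Pitt", "west")] * 8 + [("Zhu Zhu", "china")] * 2
balanced = rebalance(corpus, region_of=lambda ex: ex[1])
```

After rebalancing, both regions contribute 8 examples instead of 8 vs. 2, so the "average human answer" is no longer dominated by whichever region happened to be overrepresented in the crawl.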
