
[return to "Imagen, a text-to-image diffusion model"]
1. daenz+b5[view] [source] 2022-05-23 21:20:13
>>kevema+(OP)
>While we leave an in-depth empirical analysis of social and cultural biases to future work, our small scale internal assessments reveal several limitations that guide our decision not to release our model at this time.

Some of the reasoning:

>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

2. user39+C6[view] [source] 2022-05-23 21:28:28
>>daenz+b5
Translation: we need to hand-tune this to not reflect reality but instead the world as we (Caucasian/Asian male American woke upper-middle class San Francisco engineers) wish it to be.

Maybe that's a nice thing; I wouldn't say their values are wrong, but let's call a spade a spade.

3. ceejay+A7[view] [source] 2022-05-23 21:33:21
>>user39+C6
"Reality" as defined by the available training set isn't necessarily reality.

For example, before Google tweaked its image search, the results had some interesting ideas about what constitutes a professional hairstyle, and suggested that searches for "men" and "women" should only return light-skinned people: https://www.theguardian.com/technology/2016/apr/08/does-goog...

Does that reflect reality? No.

(I suspect there are also mostly unstated but very real concerns about these being used as child pornography, revenge porn, "show my ex brutally murdered" etc. generators.)

4. rvnx+n9[view] [source] 2022-05-23 21:43:41
>>ceejay+A7
If your query was about hairstyles, why would you even look at or care about the skin color?

Nowhere in the user's query is there any specification of a preferred skin color.

So it just sorts and returns the most representative examples from what was found on the internet.

Essentially answering the query "SELECT * FROM `non-professional hairstyles` ORDER BY score DESC LIMIT 10".

It's like searching Google for "best place for wedding night".

You may get 3 places out of 10 in Santorini, Greece.

Yes, you could have a human remove these biases because you feel that Sri Lanka is the best place for a wedding, but what if there is a consensus that Santorini really is the most praised in the forums and websites that Google crawled?
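
A rough sketch in Python of what that kind of "neutral" ranking amounts to (the places and counts are made up, and this is obviously not Google's actual pipeline, just the ORDER BY score idea from above):

    from collections import Counter

    # Hypothetical crawled mentions of "best place for wedding night" -- made-up data.
    crawled_mentions = [
        "Santorini", "Santorini", "Santorini", "Bali", "Paris",
        "Santorini", "Maldives", "Bali", "Paris", "Sri Lanka",
    ]

    # Neutral aggregation: no editorial preference, just frequency order,
    # roughly the "ORDER BY score DESC LIMIT 10" from the SQL above.
    def top_results(mentions, limit=10):
        return [place for place, _ in Counter(mentions).most_common(limit)]

    print(top_results(crawled_mentions, limit=3))
    # Santorini comes first simply because the crawl mentions it most.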

5. jayd16+tb[view] [source] 2022-05-23 21:55:29
>>rvnx+n9
The results are not inherently neutral, because the database is built from non-neutral input.

It's a simple case of sample bias.
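
If it helps, here's a tiny sketch of that point in Python (all numbers invented): the counting/ranking step is perfectly "neutral", but it can only reflect the corpus it's given, so a skewed crawl produces skewed top results no matter how fair the query handling is.

    import random
    from collections import Counter

    random.seed(0)

    # Made-up setup: the real world has equal numbers of group A and group B
    # examples, but the crawl keeps group A examples far more often (because of
    # which sites get photographed, tagged, linked, etc.).
    REAL_WORLD = ["A"] * 5000 + ["B"] * 5000
    KEEP_PROB = {"A": 0.9, "B": 0.2}

    corpus = [x for x in REAL_WORLD if random.random() < KEEP_PROB[x]]

    # The ranking step itself just counts, yet its output mirrors the skew of
    # the corpus rather than the real-world proportions.
    print("real world:", Counter(REAL_WORLD))
    print("corpus:    ", Counter(corpus))

In other words, "most common in the crawl" and "most common in reality" are different things whenever the crawl over-samples some groups.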
