zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. daenz+b5 2022-05-23 21:20:13
>>kevema+(OP)
>While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time.

Some of the reasoning:

>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

2. devind+2d 2022-05-23 22:04:15
>>daenz+b5
Good lord. Withheld? They've published their research; they just aren't making the model available immediately, waiting until they can retrain it so that you don't get racial slurs popping up when you ask for a cup of "black coffee."

>While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.

Tossing that stuff when it comes up in a research environment is one thing, but Google clearly wants to turn this into a product, used all over the world by a huge range of people. If the dataset has problems, and why wouldn't it, it is perfectly rational to wait and retrain on a better one. DALL-E 2 was trained on a curated dataset precisely so it couldn't generate sex or gore. Others sanitize their inputs too, and have for a long time. It is the only thing that makes sense for a company looking to commercialize a research project.
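
To make "sanitizing inputs" concrete, here is a minimal sketch of what caption-level filtering over an image-caption dump can look like. This is not Google's (or anyone's) actual pipeline: the blocklist contents, the precomputed nsfw_score field, the 0.3 cutoff, and the file names are all invented for illustration.

    import json

    # Placeholder terms; a real blocklist is far larger and curated.
    BLOCKLIST = {"exampleslur", "exampleterm"}

    def keep(record):
        """Return True if an image-caption record passes both filters."""
        words = record["caption"].lower().split()
        # Drop captions containing any blocklisted term.
        if any(w in BLOCKLIST for w in words):
            return False
        # Drop records an upstream classifier scored as likely NSFW
        # (the nsfw_score field and 0.3 cutoff are assumptions here).
        if record.get("nsfw_score", 0.0) > 0.3:
            return False
        return True

    with open("laion_subset.jsonl") as src, open("filtered.jsonl", "w") as dst:
        for line in src:
            if keep(json.loads(line)):
                dst.write(line)

In practice the classifier pass dominates the cost and the blocklist is just a cheap first cut, but even this toy version shows why rebuilding a dataset takes time.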

This has nothing to do with "inability to cope" and the implied woke mob yelling about some minor flaw. It's about building a tool that doesn't bake in serious and avoidable problems.

3. concor+Te 2022-05-23 22:13:39
>>devind+2d
I wonder why they don't like the idea of autogenerated porn... They're already putting most artists out of a job; why not put porn stars out of a job too?
4. renewi+Ai 2022-05-23 22:37:53
>>concor+Te
Copenhagen ethics (the framework most people implicitly use) says that all negative outcomes of a thing X become yours if you interact with X at all. Under that rule it is not sensible to interact with high-negativity things unless that issue is your single cause, so it is logical for Google to avoid interacting with porn wherever possible.