Some of the reasoning:
>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.
>While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes
Tossing that stuff when it comes up in a research environment is one thing, but Google clearly wants to implement this as a product, used all over the world by a huge range of people. If the dataset has problems (and why wouldn't it?), it is perfectly rational to want to wait and re-implement it with a better one. DALL-E 2 was trained on a curated dataset so it couldn't generate sex or gore. Others are sanitizing their inputs too, and have done so for a long time. It is the only thing that makes sense for a company looking to commercialize a research project.
This has nothing to do with "inability to cope" and the implied woke mob yelling about some minor flaw. It's about building a tool that doesn't bake in serious and avoidable problems.
The idea that most people use any coherent ethical framework (even something as high level and nearly content-free as Copenhagen) much less a particular coherent ethical framework is, well, not well supported by the evidence.
> require that all negative outcomes of a thing X become yours if you interact with X. It is not sensible to interact with high negativity things unless you are single-issue.
The conclusion in the final sentence only makes sense if you use “interact” in a way that misdescribes the Copenhagen interpretation of ethics, because the original description is only correct if you count observation as an interaction. By the time you have noted that a thing is “high-negativity”, you have observed it and acquired responsibility for its continuation under the Copenhagen interpretation; you cannot avoid that by choosing not to interact once you have observed it.
“There exists an ethical framework—not the Copenhagen interpretation—to which some minority of the population adheres, in which trying and failing to correct a problem incurs retroactive blame for the existence of the problem but seeing it and just saying ‘sucks, but not my problem’ does not,” is probably true, but not very relevant.
It's logical for Google to avoid involvement with porn, and to be seen doing so, because even though porn is popular, involvement with it is nevertheless politically unpopular, and Google’s business interest is in not making itself more attractive as a political punching bag. The popularity of Copenhagen ethics (or their distorted cousins) doesn't really play into it; it's just self-interest.