Some of the reasoning:
>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.
Maybe that's a nice thing; I wouldn't say their values are wrong, but let's call a spade a spade.
One example would be if Imagen draws a group of mostly white people when you say "draw a group of people". This doesn't reflect actual reality. Another would be if Imagen draws a group of men when you say "draw a group of doctors".
In these cases where iconographic reality differs from actual reality, hand-tuning could be used to bring the model's output closer to the real world, not just the world as we might wish it to be!
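As a rough illustration of what such hand-tuning might look like, here's a minimal Python sketch of one possible approach, prompt rebalancing: sample demographic attributes from real-world statistics before each generation, so that over many images the output distribution tracks reality rather than the model's learned defaults. Everything here is hypothetical: the `REAL_WORLD_GENDER_SHARE` numbers are illustrative stand-ins, not actual labor statistics, and `rebalance_prompt` is an invented helper, not part of Imagen or any real API.

```python
import random

# Hypothetical shares of women per profession. These numbers are
# illustrative stand-ins, NOT real labor statistics.
REAL_WORLD_GENDER_SHARE = {
    "doctors": 0.47,
    "nurses": 0.87,
    "engineers": 0.16,
}

def rebalance_prompt(prompt: str, rng: random.Random) -> str:
    """Inject a sampled demographic attribute into the prompt so that,
    over many generations, outputs track the assumed real-world share
    instead of the model's learned iconographic default."""
    for profession, female_share in REAL_WORLD_GENDER_SHARE.items():
        if profession in prompt:
            attribute = "female" if rng.random() < female_share else "male"
            return prompt.replace(profession, f"{attribute} {profession}")
    return prompt  # no statistics for this prompt; leave it untouched

if __name__ == "__main__":
    rng = random.Random(0)
    # Over many calls, roughly 47% of these should request female doctors.
    for _ in range(5):
        print(rebalance_prompt("draw a group of doctors", rng))
```

Per-prompt sampling like this only matches statistics in aggregate; a production system would need finer control (e.g., balancing attributes within a single group scene), which is presumably part of why Google describes this as an open challenge.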
I agree there's a problem here. But I'd state it more as "new technologies are being held to a vastly higher standard than existing ones." Imagine TV studios issuing a moratorium on any new shows that made being white (or rich) seem more normal than it actually is! The public might rightly expect studios to turn the dials away from the blatant biases of the past, but even if this would be beneficial, the progressive and activist public is generations away from expecting a TV studio to not release shows until they're confirmed to be bias-free.
That said, Google's decision not to release the model publicly is probably less about the inequities in AI's representation of reality and more about the AI sometimes spitting out drawings that are offensive in the US, like racist caricatures.