Some of the reasoning:
>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.
Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.
Maybe that's a nice thing; I wouldn't say their values are wrong, but let's call a spade a spade.
There’s no reason to believe the trained model even learns the same statistics as its input dataset. If that’s not an explicit training objective, then whatever happens happens. AI isn’t magic, and it isn’t automatically more correct than people.
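To make that concrete, here's a minimal sketch of the kind of check the quoted assessment implies: compare an attribute's frequency in the training data with its frequency in model samples. Nothing ties the two together unless the training objective does. All labels and counts below are hypothetical, purely for illustration.

```python
from collections import Counter

def frequencies(labels):
    """Normalized frequency of each attribute value."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical annotations: an attribute's distribution in the
# training set vs. in images sampled from the trained model.
dataset_labels = ["light"] * 60 + ["medium"] * 25 + ["dark"] * 15
sample_labels  = ["light"] * 82 + ["medium"] * 13 + ["dark"] * 5

p = frequencies(dataset_labels)  # dataset statistics
q = frequencies(sample_labels)   # model-output statistics

# Total variation distance: 0 means the marginals match exactly.
tv = 0.5 * sum(abs(p.get(v, 0) - q.get(v, 0)) for v in set(p) | set(q))

for v in sorted(p):
    print(f"{v:>6}: dataset {p[v]:.2f}  samples {q.get(v, 0):.2f}")
print(f"total variation distance: {tv:.2f}")
```

The point is that a typical generative training loss constrains per-example reconstruction, not these marginal frequencies, so a nonzero gap here is the default outcome rather than an anomaly.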