zlacker

[parent] [thread] 238 comments
1. daenz+(OP)[view] [source] 2022-05-23 21:20:13
>While we leave an in-depth empirical analysis of social and cultural biases to future work, our small scale internal assessments reveal several limitations that guide our decision not to release our model at this time.

Some of the reasoning:

>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

replies(28): >>ceeplu+F >>Mockap+f1 >>user39+r1 >>visarg+x1 >>xmonke+E1 >>tines+t2 >>nomel+D2 >>meetup+83 >>alphab+x4 >>Mizza+H4 >>joshcr+W4 >>riffra+X4 >>swayvi+j6 >>jimmyg+D6 >>babysh+k7 >>devind+R7 >>makeit+l8 >>jowday+o9 >>6gvONx+x9 >>bogwog+Ta >>tomp+ab >>seaman+8c >>ccbccc+uc >>planet+vd >>ThrowI+Ad >>tyrust+Lf >>sineno+Qq >>dclowd+JZ
2. ceeplu+F[view] [source] 2022-05-23 21:24:11
>>daenz+(OP)
The ironic part is that these "social and cultural biases" are purely from a Western, American lens. The people writing that paragraph are completely oblivious to the idea that there could be cultures other than the Western American one. In attempting to prevent "encoding of social and cultural biases" they have encoded such biases themselves into their own research.
replies(3): >>kevinh+X >>tantal+c1 >>not2b+C1
◧◩
3. kevinh+X[view] [source] [discussion] 2022-05-23 21:25:48
>>ceeplu+F
What makes you think the authors are all American?
replies(1): >>umeshu+86
◧◩
4. tantal+c1[view] [source] [discussion] 2022-05-23 21:27:16
>>ceeplu+F
https://en.wikipedia.org/wiki/Moral_relativism
5. Mockap+f1[view] [source] 2022-05-23 21:27:31
>>daenz+(OP)
Transformers are parallelizable, right? What’s stopping a large group of people from pooling their compute power together and working towards something like this? IIRC there were some crypto projects a while back that were trying to create something similar (golem?)
replies(3): >>visarg+m2 >>joshcr+M5 >>sineno+5B
6. user39+r1[view] [source] 2022-05-23 21:28:28
>>daenz+(OP)
Translation: we need to hand-tune this to not reflect reality but instead the world as we (Caucasian/Asian male American woke upper-middle class San Francisco engineers) wish it to be.

Maybe that's a nice thing, I wouldn't say their values are wrong but let's call a spade a spade.

replies(7): >>ceejay+p2 >>barred+z2 >>josho+v3 >>Ar-Cur+i4 >>JohnBo+o5 >>userbi+sa >>holmes+lb
7. visarg+x1[view] [source] 2022-05-23 21:28:47
>>daenz+(OP)
The big labs have become very sensitive with large model releases. It's too easy to make them generate bad PR, to the point of not releasing almost any of them. Flamingo was also a pretty great vision-language model that wasn't released, not even in a demo. PaLM is supposedly better than GPT-3 but closed off. It will probably take a year for open source models to appear.
replies(2): >>runner+M3 >>godels+F4
◧◩
8. not2b+C1[view] [source] [discussion] 2022-05-23 21:29:22
>>ceeplu+F
It seems you've got it backwards: "tendency for images portraying different professions to align with Western gender stereotypes" means that they are calling out their own work precisely because it is skewed in the direction of Western American biases.
replies(3): >>ceeplu+72 >>young_+L7 >>Ludwig+1a
9. xmonke+E1[view] [source] 2022-05-23 21:29:35
>>daenz+(OP)
I was hoping your conclusion wasn't going to be this as I was reading that quote. But, sadly, this is HN.
◧◩◪
10. ceeplu+72[view] [source] [discussion] 2022-05-23 21:31:32
>>not2b+C1
Yes, the idea is that just because it doesn't align with Western ideals of what seems unbiased doesn't mean the same is necessarily true for other cultures. By failing to release the model because it doesn't conform to Western, left-wing cultural expectations, the authors are ignoring the diversity of cultures that exist globally.
replies(1): >>howint+P6
◧◩
11. visarg+m2[view] [source] [discussion] 2022-05-23 21:33:00
>>Mockap+f1
There are the Eleuther.ai and BigScience projects working on public foundation models. They have a few releases already and currently training GPT-3 sized models.
◧◩
12. ceejay+p2[view] [source] [discussion] 2022-05-23 21:33:21
>>user39+r1
"Reality" as defined by the available training set isn't necessarily reality.

For example, Google's image search results pre-tweaking had some interesting thoughts on what constitutes a professional hairstyle, and that searches for "men" and "women" should only return light-skinned people: https://www.theguardian.com/technology/2016/apr/08/does-goog...

Does that reflect reality? No.

(I suspect there are also mostly unstated but very real concerns about these being used as child pornography, revenge porn, "show my ex brutally murdered" etc. generators.)

replies(4): >>ceeplu+U2 >>rvnx+c4 >>userbi+db >>ChadNa+wd
13. tines+t2[view] [source] 2022-05-23 21:33:39
>>daenz+(OP)
This raises some really interesting questions.

We certainly don't want to perpetuate harmful stereotypes. But is it a flaw that the model encodes the world as it really is, statistically, rather than as we would like it to be? By this I mean that there are more light-skinned people in the west than dark, and there are more women nurses than men, which is reflected in the model's training data. If the model only generates images of female nurses, is that a problem to fix, or a correct assessment of the data?

If some particular demographic shows up in 51% of the data but 100% of the model's output shows that one demographic, that does seem like a statistics problem that the model could correct by just picking less likely "next token" predictions.
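
To make that concrete, here's a toy sketch (made-up numbers and a made-up attribute, nothing to do with how Imagen actually samples): always taking the most likely attribute collapses to the majority every time, while sampling in proportion to frequency roughly tracks the data.

    import random

    # Toy illustration (hypothetical numbers): a 51/49 attribute split in the data.
    freqs = {"female": 0.51, "male": 0.49}

    def argmax_pick(freqs):
        # "always pick the most likely" collapses to the majority 100% of the time
        return max(freqs, key=freqs.get)

    def proportional_pick(freqs):
        # sampling in proportion to frequency roughly tracks the data instead
        labels, weights = zip(*freqs.items())
        return random.choices(labels, weights=weights)[0]

    samples = [proportional_pick(freqs) for _ in range(10_000)]
    print(argmax_pick(freqs))                      # always "female"
    print(samples.count("female") / len(samples))  # ~0.51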

Also, is it wrong to have localized models? For example, should a model for use in Japan conform to the demographics of Japan, or to that of the world?

replies(10): >>karpie+b4 >>daenz+65 >>godels+p5 >>jonny_+X5 >>Imnimo+0a >>SnowHi+Bc >>skybri+jd >>ben_w+3f >>pshc+5k >>webmav+AM7
◧◩
14. barred+z2[view] [source] [discussion] 2022-05-23 21:34:22
>>user39+r1
I know you're anon trolling, but the authors' names are:

Chitwan Saharia, William Chan, Saurabh Saxena†, Lala Li†, Jay Whang†, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho†, David Fleet†, Mohammad Norouzi

replies(2): >>pid-1+A8 >>hda2+pN
15. nomel+D2[view] [source] 2022-05-23 21:35:10
>>daenz+(OP)
If you tell it to generate an image of someone eating Koshihikari rice, will it be biased if they're Japanese? Should the skin color, clothing, setting, etc. be made completely random, so that it's unbiased? What if you made it more specific, like "edo period drawing of a man"? Should the person drawn be of a random skin color? What about "picture of a viking"? Is it biased if they're white?

At what point is statistical significance considered ok and unbiased?

replies(1): >>pxmpxm+Y4
◧◩◪
16. ceeplu+U2[view] [source] [discussion] 2022-05-23 21:36:17
>>ceejay+p2
The reality is that hair styles on the left side of the image in the article are widely considered unprofessional in today's workplaces. That may seem egregiously wrong to you, but it is a truth of American and European society today. Should it be Google's job to rewrite reality?
replies(3): >>ceejay+k3 >>rcMgD2+t4 >>colinm+S7
17. meetup+83[view] [source] 2022-05-23 21:37:32
>>daenz+(OP)
One of these days we're going to need to give these models a mortgage and some mouths to feed and make it clear to them that if they keep on developing biases from their training data everyone will shun them and their family will go hungry and they won't be able to make their payments and they'll just generally have a really bad time.

After that we'll make them sit through Legal's approved D&I video series, then it's off to the races.

replies(3): >>pxmpxm+w5 >>aaaaaa+R5 >>sineno+HA
◧◩◪◨
18. ceejay+k3[view] [source] [discussion] 2022-05-23 21:38:31
>>ceeplu+U2
The "unprofessional" results are almost exclusively black women; the "professional" ones are almost exclusively white or light skinned.

Unless you think white women are immune to unprofessional hairstyles, and black women incapable of them, there's a race problem illustrated here even if you think the hairstyles illustrated are fairly categorized.

replies(1): >>rvnx+r5
◧◩
19. josho+v3[view] [source] [discussion] 2022-05-23 21:39:05
>>user39+r1
Translation: AI has the potential to transform society. When we release this model to the public it will be used in ways we haven’t anticipated. We know the model has bias and we need more time to consider releasing this to the public, out of concern that this transformative technology will further perpetuate mistakes we’ve made in our recent past.
replies(1): >>curiou+h4
◧◩
20. runner+M3[view] [source] [discussion] 2022-05-23 21:40:44
>>visarg+x1
The largest models, the ones behind the headline benchmarks, never seem to get released, no matter how many years pass.

Very difficult to replicate results.

◧◩
21. karpie+b4[view] [source] [discussion] 2022-05-23 21:43:40
>>tines+t2
It depends on whether you'd like the model to learn causal or correlative relationships.

If you want the model to understand what a "nurse" actually is, then it shouldn't be associated with female.

If you want the model to understand how the word "nurse" is usually used, without regard for what a "nurse" actually is, then associating it with female is fine.

The issue with a correlative model is that it can easily be self-reinforcing.

replies(5): >>jdashg+Y5 >>Ludwig+j8 >>bufbup+P9 >>sineno+Zw >>drdeca+or1
◧◩◪
22. rvnx+c4[view] [source] [discussion] 2022-05-23 21:43:41
>>ceejay+p2
If your query was about hairstyle, why do you even look at or care about the skin color?

Nowhere in the user's query is a preferred skin color specified.

So it sorts and gives the most average examples based on the examples that were found on the internet.

Essentially answering the query "SELECT * FROM `non-professional hairstyles` ORDER BY score DESC LIMIT 10".

It's like if you search on Google "best place for wedding night".

You may get 3 places out of 10 in Santorini, Greece.

Yes, you could have a human remove these biases because you feel that Sri Lanka is the best place for a wedding, but what if there is a consensus that Santorini is really the most praised in the forums or websites that were crawled by Google?

replies(3): >>ceejay+n4 >>jayd16+i6 >>colinm+H7
◧◩◪
23. curiou+h4[view] [source] [discussion] 2022-05-23 21:44:13
>>josho+v3
> it will be used in ways we haven’t anticipated

Oh yeah, as a woman who grew up in a Third World country, how an AI model generates images would have deeply affected my daily struggles! /s

It's kinda insulting that they think that this would be insulting. Like "Oh no I asked the model to draw a doctor and it drew a male doctor, I guess there's no point in me pursuing medical studies" ...

replies(4): >>boppo1+T6 >>pxmpxm+67 >>colinm+o8 >>renewi+He
◧◩
24. Ar-Cur+i4[view] [source] [discussion] 2022-05-23 21:44:16
>>user39+r1
Except "reality" in this case is just their biased training set. E.g. There's more non-white doctors and nurses in the world than white ones, yet their model would likely show an image of white person when you type in "doctor".
replies(1): >>umeshu+x7
◧◩◪◨
25. ceejay+n4[view] [source] [discussion] 2022-05-23 21:44:50
>>rvnx+c4
> The algorithm is just ranking the top "non-professional hairstyle" in the most neutral way in its database

You're telling me those are all the most non-professional hairstyles available? That this is a reasonable assessment? That fairly standard, well-kept, work-appropriate curly black hair is roughly equivalent to the pink, three-foot-wide hairstyle worn by one of the only white people in the "unprofessional" results?

Each and everyone of them is less workplace appropriate than, say, http://www.7thavenuecostumes.com/pictures/750x950/P_CC_70594... ?

replies(1): >>rvnx+n6
◧◩◪◨
26. rcMgD2+t4[view] [source] [discussion] 2022-05-23 21:45:13
>>ceeplu+U2
In any case, Google will be writing their reality. Who picked the image sample for the ML to run on, if not Google? What's the problem with writing it again, then? They know their biases and want to act on them.

It's like blaming a friend for trying to phrase things nicely, and telling them to speak bluntly with zero concern for others instead. Unless you believe anyone trying to do good is being a hypocrite…

I, for one, like civility.

27. alphab+x4[view] [source] 2022-05-23 21:45:29
>>daenz+(OP)
There is a contingent of AI activists who spend a ton of time on Twitter that would beat Google like a drum with help from the media if they put out something they deemed racist or biased.
◧◩
28. godels+F4[view] [source] [discussion] 2022-05-23 21:45:58
>>visarg+x1
That's because we're still bad at handling long-tailed data, and people outside the research community don't realize that we're prioritizing realistic images first, before we deal with long-tailed data (which is going to be the more generic form of bias). To be honest, it is a bit silly to focus on long-tailed data when results aren't great. That's why we see the constant pattern of getting good on a dataset and then focusing on the bias in that dataset.

I mean a good example of this is the Pulse[0][1] paper. You may remember it as the white Obama. This became a huge debate and it was pretty easily shown that the largest factor was the dataset bias. This outrage did lead to fixing FFHQ but it also sparked a huge debate with LeCun (data centric bias) and Timnit (model centric bias) at the center. Though Pulse is still remembered for this bias, not for how they responded to it. I should also note that there is human bias in this case as we have a priori knowledge of what the upsampled image should look like (humans are pretty good at this when the small image is already recognizable but this is a difficult metric to mathematically calculate).

It is fairly easy to find adversarial examples, where generative models produce biased results. It is FAR harder to fix these. Since this is known by the community but not by the public (and some community members focus on finding these holes but not fixing them) it creates outrage. Probably best for them to limit their release.

[0] https://arxiv.org/abs/2003.03808

[1] https://cdn.vox-cdn.com/thumbor/MXX-mZqWLQZW8Fdx1ilcFEHR8Wk=...

replies(2): >>alexb_+K11 >>visarg+kt2
29. Mizza+H4[view] [source] 2022-05-23 21:46:05
>>daenz+(OP)
So glad the company that spies on me and reads my email for profit is protecting me from pictures that don't look like TV commercials.
replies(1): >>astran+yc
30. joshcr+W4[view] [source] 2022-05-23 21:48:04
>>daenz+(OP)
They're withholding the API, code, and trained data because they don't want it to affect their corporate image. The good thing is they released their paper which will allow easy reproduction.

T5-XXL looks on par with CLIP so we may not see an open source version of T5 for a bit (LAION is working on reproducing CLIP), but this is all progress.

replies(1): >>minima+sf
31. riffra+X4[view] [source] 2022-05-23 21:48:08
>>daenz+(OP)
This seems like bullshit to me, considering Google Translate and Google Images encode the same biases and stereotypes, and are widely available.
replies(2): >>nomel+47 >>seaman+wc
◧◩
32. pxmpxm+Y4[view] [source] [discussion] 2022-05-23 21:48:10
>>nomel+D2
>At what point is statistical significance considered ok and unbiased?

Presumably when you're significantly predictive of the preferred dogma, rather than reality. There's no small bit of irony in machines inadvertently creating cognitive dissonance of this sort; second order reality check.

I'm fairly sure this never actually played out well in history (bourgeois pseudoscience, deutsche physik etc), so expect some Chinese research bureau to forge ahead in this particular direction.

◧◩
33. daenz+65[view] [source] [discussion] 2022-05-23 21:48:57
>>tines+t2
I think the statistics/representation problem is a big problem on its own, but IMO the bigger problem here is democratizing access to human-like creativity. Currently, the ability to create compelling art is only held by those with some artistic talent. With a tool like this, that restriction is gone. Everyone, no matter how uncreative, untalented, or uncommitted, can create compelling visuals, provided they can use language to describe what they want to see.

So even if we managed to create a perfect model of representation and inclusion, people could still use it to generate extremely offensive images with little effort. I think people see that as profoundly dangerous. Restricting the ability to be creative seems to be a new frontier of censorship.

replies(2): >>concor+I8 >>adrian+2b
◧◩
34. JohnBo+o5[view] [source] [discussion] 2022-05-23 21:50:15
>>user39+r1

    Translation: we need to hand-tune this to not reflect reality
Is it reflecting reality, though?

Seems to me that (as with any ML stuff, right?) it's reflecting the training corpus.

Furthermore, is it this thing's job to reflect reality?

    the world as we (Caucasian/Asian male American woke 
    upper-middle class San Fransisco engineers) wish it to be
Snarky answer: Ah, yes, let's make sure that things like "A giant cobra snake on a farm. The snake is made out of corn" reflect reality.

Heartfelt answer: Yes, there is some of that wishful thinking or editorializing. I don't consider it to be erasing or denying reality. This is a tool that synthesizes unreality. I don't think that such a tool should, say, refuse to synthesize an image of a female POTUS because one hasn't existed yet. This is art, not a reporting tool... and keep in mind that art not only imitates life but also influences it.

replies(1): >>nomel+W7
◧◩
35. godels+p5[view] [source] [discussion] 2022-05-23 21:50:24
>>tines+t2
> But is it a flaw that the model encodes the world as it really is

I want to be clear here, bias can be introduced at many different points. There's dataset bias, model bias, and training bias. Every model is biased. Every dataset is biased.

Yes, the real world is also biased. But I want to be clear that there are ways to address this issue. It is terribly difficult, especially in a DL framework (even more so in a generative model), but it is possible to significantly reduce the real-world bias.

replies(1): >>tines+h8
◧◩◪◨⬒
36. rvnx+r5[view] [source] [discussion] 2022-05-23 21:50:32
>>ceejay+k3
If you type as a prompt "most beautiful woman in the world", you get a brown-skinned brown-haired woman with hazel eyes.

What should be the right answer, then?

You put a blonde, you offend the brown-haired.

You put blue eyes, you offend the brown-eyed.

etc.

replies(1): >>ceejay+d6
◧◩
37. pxmpxm+w5[view] [source] [discussion] 2022-05-23 21:51:18
>>meetup+83
Underrated comment.
◧◩
38. joshcr+M5[view] [source] [discussion] 2022-05-23 21:52:44
>>Mockap+f1
There are people working on reproducing the models, see here for Dall-E 2 for example: https://github.com/lucidrains/DALLE2-pytorch

It's often not worth it to decentralize the computation of the model, though; but it's not hard to get donated cycles, and groups are working on it. Don't fret because Google isn't releasing the API/code. They released the paper and that's all you need.

◧◩
39. aaaaaa+R5[view] [source] [discussion] 2022-05-23 21:53:13
>>meetup+83
Reinforcement learning?
◧◩
40. jonny_+X5[view] [source] [discussion] 2022-05-23 21:53:37
>>tines+t2
> But is it a flaw that the model encodes the world as it really is

Does a bias towards lighter skin represent reality? I was under the impression that Caucasians are a minority globally.

I read the disclaimer as "the model does NOT represent reality".

replies(4): >>tines+O7 >>fnordp+b8 >>ma2rte+kb >>nearbu+HR
◧◩◪
41. jdashg+Y5[view] [source] [discussion] 2022-05-23 21:53:42
>>karpie+b4
Additionally, if you optimize for most-likely-as-best, you will end up with the stereotypical result 100% of the time, instead of in proportional frequency to the statistics.

Put another way, when we ask for an output optimized for "nursiness", is that not a request for some ur-stereotypical nurse?

replies(2): >>jvalen+D7 >>ar_lan+Ab
◧◩◪
42. umeshu+86[view] [source] [discussion] 2022-05-23 21:54:53
>>kevinh+X
The authors are listed on the page, and a quick look at LinkedIn suggests they are mostly Canadian.
◧◩◪◨⬒⬓
43. ceejay+d6[view] [source] [discussion] 2022-05-23 21:55:16
>>rvnx+r5
That's an unanswerable question. Perhaps the answer is "don't".

Siri takes this approach for a wide range of queries.

replies(2): >>nomel+M9 >>rvnx+Wb
◧◩◪◨
44. jayd16+i6[view] [source] [discussion] 2022-05-23 21:55:29
>>rvnx+c4
The results are not inherently neutral because the database is from non-neutral input.

It's a simple case of sample bias.

45. swayvi+j6[view] [source] 2022-05-23 21:55:33
>>daenz+(OP)
it isn't woke enough. Lol.
replies(1): >>ccbccc+bd
◧◩◪◨⬒
46. rvnx+n6[view] [source] [discussion] 2022-05-23 21:55:44
>>ceejay+n4
I'm saying that the dataset needs to be expanded to cover the most examples possible.

Work a lot on adding even more examples, in order to make the algorithms as close as possible to the "average reality".

At some point we may ultimately reach a state where robots collect intelligence directly from the real world, and not from the internet (even closer to reality).

Censoring results sounds like the best recipe for a dystopian world where only one view is right.

47. jimmyg+D6[view] [source] 2022-05-23 21:57:16
>>daenz+(OP)
Are "Western gender stereotypes" significantly different than non-Western gender stereotypes? I can't tell if that means it counts a chubby stubble-covered man with a lip piercing, greasy and dyed long hair, wearing an overly frilly dress as a DnD player/metal-head or as a "woman" or not (yes I know I'm being uncharitable and potentially "bigoted" but if you saw my Tinder/Bumble suggestions and friend groups you'd know I'm not exaggerating for either category). I really can't tell what stereotypes are referred to here.
◧◩◪◨
48. howint+P6[view] [source] [discussion] 2022-05-23 21:58:16
>>ceeplu+72
No, it's coming from a perspective of moral realism. It's an objective moral truth that racial and ethnic biases are bad. Yet most cultures around the world are racist to at least some degree, and to the extent that they do, they are bad.

The argument you're making, paraphrased, is that the idea that biases are bad is itself situated in particular cultural norms. While that is true to some degree, from a moral realist perspective we can still objectively judge those cultural norms to be better or worse than alternatives.

replies(2): >>tomp+gd >>ceeplu+2k
◧◩◪◨
49. boppo1+T6[view] [source] [discussion] 2022-05-23 21:58:27
>>curiou+h4
I don't think the concern over offense is actually about you. There's a metagame here which is that if it could potentially offend you (third-world-originated-woman), then there's a brand-image liability for the company. I don't think they care about you; I think they care about not being branded as "the company that algorithmically identifies black people as gorillas".
◧◩
50. nomel+47[view] [source] [discussion] 2022-05-23 21:59:26
>>riffra+X4
Aren't those old systems?
replies(1): >>Semant+ni2
◧◩◪◨
51. pxmpxm+67[view] [source] [discussion] 2022-05-23 21:59:32
>>curiou+h4
Postmodernism is what postmodernism does.
replies(1): >>contin+7a
52. babysh+k7[view] [source] 2022-05-23 22:00:42
>>daenz+(OP)
Indeed. If a project has shortcomings, why not just acknowledge the shortcomings and plan to improve on them in a future release? Is it anticipated that "engineer" being rendered as a man by the model is going to be an actively dangerous thing to have out in the world?
replies(1): >>makeit+x8
◧◩◪
53. umeshu+x7[view] [source] [discussion] 2022-05-23 22:02:06
>>Ar-Cur+i4
Alternately, there are more female nurses in the world than male nurses, and their model probably shows an image of a woman when you type in "nurse", but they consider that a problem.
replies(3): >>contin+ka >>astran+Hb >>webmav+za8
◧◩◪◨
54. jvalen+D7[view] [source] [discussion] 2022-05-23 22:02:52
>>jdashg+Y5
You could simply encode a score for how well the output matches the input. If 25% of trees in summer are brown, perhaps 25% of the outputs should be brown too. The model scores itself on frequencies as well as correctness.
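
As a toy sketch of what that scoring might look like (made-up attribute and numbers, not any real model's objective): compare how often an attribute shows up in a batch of outputs against a reference frequency.

    def frequency_score(generated_labels, reference_freqs):
        # Score a batch of outputs by how closely their attribute frequencies
        # match a reference distribution (1.0 = perfect match, 0.0 = worst).
        n = len(generated_labels)
        observed = {k: generated_labels.count(k) / n for k in reference_freqs}
        total_variation = 0.5 * sum(
            abs(observed[k] - reference_freqs[k]) for k in reference_freqs
        )
        return 1.0 - total_variation

    reference = {"brown": 0.25, "green": 0.75}  # e.g. trees in summer
    print(frequency_score(["green"] * 100, reference))                  # 0.75
    print(frequency_score(["brown"] * 25 + ["green"] * 75, reference))  # 1.0

Correctness would still have to be scored separately, and this only checks one marginal frequency at a time, which is where it starts to break down.
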
replies(2): >>spywar+X9 >>astran+za
◧◩◪◨
55. colinm+H7[view] [source] [discussion] 2022-05-23 22:03:09
>>rvnx+c4
> If your query was about hairstyle, why do you even look at the skin color ?

You know that race has a large effect on hair right?

replies(1): >>daenz+O8
◧◩◪
56. young_+L7[view] [source] [discussion] 2022-05-23 22:03:22
>>not2b+C1
The very act of mentioning "western gender stereotypes" starts from a biased position.

Why couldn't they be "northern gender stereotypes"? Is the world best explained as a division of west/east instead of north/south? The northern hemisphere has much more population than the south, and almost all rich countries are in the northern hemisphere. And precisely it's these rich countries pushing the concept of gender stereotypes. In poor countries, nobody cares about these "gender stereotypes".

Actually, the lines dividing the earth into north and south, east and west hemispheres are arbitrary, so maybe they shouldn't mention the word "western" to avoid the propagation of stereotypes about earth regions.

Or why couldn't they be western age stereotypes? Why are there no kids or very old people depicted as nurses?

Why couldn't they be western body shape stereotypes? Why are there so few obese people in the images? Why are there no obese people depicted as athletes?

Are all of these really stereotypes or just natural consequences of natural differences?

replies(1): >>joshcr+5a
◧◩◪
57. tines+O7[view] [source] [discussion] 2022-05-23 22:03:51
>>jonny_+X5
Well first, I didn't say caucasian; light-skinned includes Spanish people and many others that caucasian excludes, and that's why I said the former. Also, they are a minority globally, but the GP mentioned "Western stereotypes", and they're a majority in the West, so that's why I said "in the west" when I said that there are more light-skinned people.
58. devind+R7[view] [source] 2022-05-23 22:04:15
>>daenz+(OP)
Good lord. Withheld? They've published their research, they just aren't making the model available immediately, waiting until they can re-implement it so that you don't get racial slurs popping up when you ask for a cup of "black coffee."

>While a subset of our training data was filtered to removed noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes

Tossing that stuff when it comes up in a research environment is one thing, but Google clearly wants to implement this as a product, used all over the world by a huge range of people. If the dataset has problems, and why wouldn't it, it is perfectly rational to want to wait and re-implement it with a better one. DALL-E 2 was trained on a curated dataset so it couldn't generate sex or gore. Others are sanitizing their inputs too and have done for a long time. It is the only thing that makes sense for a company looking to commercialize a research project.

This has nothing to do with "inability to cope" and the implied woke mob yelling about some minor flaw. It's about building a tool that doesn't bake in serious and avoidable problems.

replies(1): >>concor+I9
◧◩◪◨
59. colinm+S7[view] [source] [discussion] 2022-05-23 22:04:20
>>ceeplu+U2
"Only black people have unprofessional hair and only white people have professional hair" is not reality.
◧◩◪
60. nomel+W7[view] [source] [discussion] 2022-05-23 22:04:30
>>JohnBo+o5
> Snarky answer: Ah, yes, let's make sure that things like "A giant cobra snake on a farm. The snake is made out of corn" reflect reality.

If it didn't reflect reality, you wouldn't be impressed by the image of the snake made of corn.

replies(1): >>JohnBo+uG
◧◩◪
61. fnordp+b8[view] [source] [discussion] 2022-05-23 22:05:58
>>jonny_+X5
Worse, these models are fed media sourced from a society that tells a different story about reality than what reality actually is. How can they be accurate? They just reflect the biases of our various media and arts. But I don’t think there’s any meaningful resolution at present other than acknowledging this and trying to release more representative models as you can.
◧◩◪
62. tines+h8[view] [source] [discussion] 2022-05-23 22:06:18
>>godels+p5
> Every dataset is biased.

Sure, I wasn't questioning the bias of the data, I was talking about the bias of the real world and whether we want the model to be "unbiased about bias" i.e. metabiased or not.

Showing nurses equally as men and women is not biased, but it's metabiased, because the real world is biased. Whether metabias is right or not is more interesting than the question of whether bias is wrong because it's more subtle.

Disclaimer: I'm a fucking idiot and I have no idea what I'm talking about so take with a grain of salt.

replies(1): >>john_y+Mb
◧◩◪
63. Ludwig+j8[view] [source] [discussion] 2022-05-23 22:06:26
>>karpie+b4
> If you want the model to understand how the word "nurse" is usually used, without regard for what a "nurse" actually is, then associating it with female is fine.

That’s a distinction without a difference. Meaning is use.

replies(2): >>tines+i9 >>mdp202+Qa
64. makeit+l8[view] [source] 2022-05-23 22:06:42
>>daenz+(OP)
> Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

Genuinely, isn't it a prime example of the people actually stopping to think if they should, instead of being preoccupied with whether or not they could ?

◧◩◪◨
65. colinm+o8[view] [source] [discussion] 2022-05-23 22:07:09
>>curiou+h4
Yes actually, subconscious bias due to historical prejudice does have a large effect on society. Obviously there are things with much larger effects, that doesn't mean that this doesn't exist.

> Oh no I asked the model to draw a doctor and it drew a male doctor, I guess there's no point in me pursuing medical studies

If you don't think this is a real thing that happens to children you're not thinking especially hard. It doesn't have to be common to be real.

replies(3): >>curiou+h9 >>paisaw+Bg >>astran+ey
◧◩
66. makeit+x8[view] [source] [discussion] 2022-05-23 22:07:43
>>babysh+k7
"what could go wrong anyway?"
◧◩◪
67. pid-1+A8[view] [source] [discussion] 2022-05-23 22:08:14
>>barred+z2
Absolutely not related to the whole discussion, but what does "†" stand for?
replies(2): >>dntrkv+Db >>joshcr+gc
◧◩◪
68. concor+I8[view] [source] [discussion] 2022-05-23 22:09:05
>>daenz+65
I can't quite tell if you're being sarcastic about the idea that people being able to make things other people would find offensive is a problem. Are you missing an /s?
◧◩◪◨⬒
69. daenz+O8[view] [source] [discussion] 2022-05-23 22:09:31
>>colinm+H7
I'd be careful where you're going with that. You might make a point that is the opposite of what you intended.
◧◩◪◨⬒
70. curiou+h9[view] [source] [discussion] 2022-05-23 22:11:10
>>colinm+o8
> If you don't think this is a real thing that happens to children you're not thinking especially hard

I believe that's where parenting comes in. Maybe I'm too cynical but I think that the parents' job is to undo all of the harm done by society and instill in their children the "correct" values.

replies(3): >>colinm+ya >>holmes+kf >>cgreal+8j
◧◩◪◨
71. tines+i9[view] [source] [discussion] 2022-05-23 22:11:16
>>Ludwig+j8
Not really; the gender of a nurse is accidental, other properties are essential.
replies(3): >>codeth+cc >>paisaw+df >>Ludwig+Ch
72. jowday+o9[view] [source] 2022-05-23 22:11:45
>>daenz+(OP)
Much like OpenAI's marketing speak about withholding their models for safety, this is just a progressive-sounding cover story for them not wanting to essentially give away a model they spent thousands of man-hours and tens of millions of dollars' worth of compute training.
73. 6gvONx+x9[view] [source] 2022-05-23 22:12:39
>>daenz+(OP)
It’s wild to me that the HN consensus is so often that 1) discourse around the internet is terrible, it’s full of spam and crap, and the internet is an awful unrepresentative snapshot of human existence, and 2) the biases of general-internet-training-data are fine in ML models because it just reflects real life.
replies(3): >>astran+hc >>nullc+Nd >>colord+Vf
◧◩
74. concor+I9[view] [source] [discussion] 2022-05-23 22:13:39
>>devind+R7
I wonder why they don't like the idea of autogenerated porn... They're already putting most artists out of a job, why not put porn stars out of a job too?
replies(3): >>notaha+od >>renewi+pd >>colinm+si
◧◩◪◨⬒⬓⬔
75. nomel+M9[view] [source] [discussion] 2022-05-23 22:14:09
>>ceejay+d6
How do you pick what should and shouldn't be restricted? Is there some "offense threshold"? I suspect all queries relating to religion, ethnicity, sexuality, and gender will need to be restricted, which almost certainly means you can't include humans at all, other than ones artificially inserted with mathematically proven random attributes. Maybe that's why none are in this demo.
replies(2): >>daenz+Uc >>astran+7i
◧◩◪
76. bufbup+P9[view] [source] [discussion] 2022-05-23 22:14:32
>>karpie+b4
At the end of the day, if you ask for a nurse, should the model output a male or female by default? If the input text lacks context/nuance, then the model must have some bias to infer the user's intent. This holds true for any image it generates, not just the politically sensitive ones. For example, if I ask for a picture of a person, and don't get one with pink hair, is that a shortcoming of the model?

I'd say that bias is only an issue if it's unable to respond to additional nuance in the input text. For example, if I ask for a "male nurse" it should be able to generate the less likely combination. Same with other races, hair colors, etc... Trying to generate a model that's "free of correlative relationships" is impossible because the model would never have the infinitely pedantic input text to describe the exact output image.

replies(5): >>karpie+6b >>slg+Rb >>sangno+Yg >>pshc+7l >>webmav+X28
◧◩◪◨⬒
77. spywar+X9[view] [source] [discussion] 2022-05-23 22:15:06
>>jvalen+D7
Suppose 10% of people have green skin. And 90% of those people have broccoli hair. White people don't have broccoli hair.

What percent of people should be rendered as white people with broccoli hair? What if you request green people. Or broccoli haired people. Or white broccoli haired people? Or broccoli haired nazis?

It gets hard with these conditional probabilities.
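
Spelling out those made-up numbers (just the arithmetic, nothing model-specific):

    # Toy numbers from above (entirely hypothetical).
    p_green = 0.10                  # P(green skin)
    p_broccoli_given_green = 0.90   # P(broccoli hair | green skin)
    p_broccoli_given_white = 0.00   # P(broccoli hair | white skin)

    p_white = 1.0 - p_green
    p_green_and_broccoli = p_green * p_broccoli_given_green   # 0.09
    p_white_and_broccoli = p_white * p_broccoli_given_white   # 0.00
    p_broccoli = p_green_and_broccoli + p_white_and_broccoli  # 0.09

    # Prompt "broccoli haired people": conditioning says they should all be green.
    p_green_given_broccoli = p_green_and_broccoli / p_broccoli  # 1.0
    print(p_broccoli, p_green_given_broccoli)

And "white broccoli haired people" has zero probability under these numbers, so no frequency-matching rule tells the model what to do there; it has to extrapolate.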

◧◩
78. Imnimo+0a[view] [source] [discussion] 2022-05-23 22:15:26
>>tines+t2
>If some particular demographic shows up in 51% of the data but 100% of the model's output shows that one demographic, that does seem like a statistics problem that the model could correct by just picking less likely "next token" predictions.

Yeah, but you get that same effect on every axis, not just the one you're trying to correct. You might get male nurses, but they have green hair and six fingers, because you're sampling from the tail on all axes.

replies(1): >>tines+Ja
◧◩◪
79. Ludwig+1a[view] [source] [discussion] 2022-05-23 22:15:32
>>not2b+C1
You think there are homogeneous gender stereotypes across the whole Western world? You say “woman” and someone will imagine a SAHM, while another person will imagine a you-go-girl CEO with tattoos and pink hair.

What they mean is people who don't think like them.

◧◩◪◨
80. joshcr+5a[view] [source] [discussion] 2022-05-23 22:15:56
>>young_+L7
The bulk of the training data is from Western technology, images, books, television, movies, photography, media. That's where the very real and recognized biases come from. They're the result of a gap in the data, nothing more.

Look at how DALL-E 2 produces little bears rather than bear-sized bears. Because its data doesn't have a lot of context for how large bears are. So you wind up having to say "very large bear" to DALL-E 2.

Are DALL-E 2 bears just a "natural consequence of natural differences"? Or is the model not reflective of reality?

replies(1): >>Shaani+1h
◧◩◪◨⬒
81. contin+7a[view] [source] [discussion] 2022-05-23 22:16:04
>>pxmpxm+67
Love it. Added to https://github.com/globalcitizen/taoup
replies(1): >>pxmpxm+nh
◧◩◪◨
82. contin+ka[view] [source] [discussion] 2022-05-23 22:16:57
>>umeshu+x7
@Google Brain Toronto Team: See what you get when you generate nurses with ncurses.
◧◩
83. userbi+sa[view] [source] [discussion] 2022-05-23 22:17:57
>>user39+r1
Indeed. As the saying goes, we are truly living in a post-truth world.
◧◩◪◨⬒⬓
84. colinm+ya[view] [source] [discussion] 2022-05-23 22:18:25
>>curiou+h9
I'd say you're right. Unfortunately many people are raised by bad parents. Should these researchers accept that their work may perpetuate stereotypes that harm those that most need help? I can see why they wouldn't want that.
◧◩◪◨⬒
85. astran+za[view] [source] [discussion] 2022-05-23 22:18:25
>>jvalen+D7
The only reason these models work is that we don’t interfere with them like that.

Your description is closer to how the open source CLIP+GAN models did it - if you ask for “tree” it starts growing the picture towards treeness until it’s all averagely tree-y rather than being “a picture of a single tree”.

It would be nice if asking for N samples got a diversity of traits you didn’t explicitly ask for. OpenAI seems to solve this by not letting you see it generate humans at all…

◧◩◪
86. tines+Ja[view] [source] [discussion] 2022-05-23 22:19:41
>>Imnimo+0a
Yeah, good point, it's not as simple as I thought.
◧◩◪◨
87. mdp202+Qa[view] [source] [discussion] 2022-05-23 22:20:38
>>Ludwig+j8
Very certainly not, since use is individual and thus a function of competence. So, adherence to meaning depends on the user. Conflict resolution?

And anyway, contextually, the representational natures of "use" (instances) and of "meaning" (definition) are completely different.

replies(2): >>layer8+Md >>Ludwig+Rh
88. bogwog+Ta[view] [source] 2022-05-23 22:21:07
>>daenz+(OP)
I wouldn't describe this situation as "sad". Basically, this decision is based on a belief that tech companies should decide what our society should look like. I don't know what emotion that conjures up for you, but "sadness" isn't it for me.
◧◩◪
89. adrian+2b[view] [source] [discussion] 2022-05-23 22:22:20
>>daenz+65
> So even if we managed to create a perfect model of representation and inclusion, people could still use it to generate extremely offensive images with little effort. I think people see that as profoundly dangerous.

Do they see it as dangerous? Or just offensive?

I can understand why people wouldn’t want a tool they have created to be used to generate disturbing, offensive or disgusting imagery. But I don’t really see how doing that would be dangerous.

In fact, I wonder if this sort of technology could reduce the harm caused by people with an interest in disgusting images, because no one needs to be harmed for a realistic image to be created. I am creeping myself out with this line of thinking, but it seems like one potentially beneficial, albeit disturbing, outcome.

> Restricting the ability to be creative seems to be a new frontier of censorship.

I agree this is a new frontier, but it’s not censorship to withhold your own work. I also don’t really think this involves much creativity. I suppose coming up with prompts involves a modicum of creativity, but the real creator here is the model, it seems to me.

replies(3): >>tines+Vc >>gknoy+7d >>webmav+a88
◧◩◪◨
90. karpie+6b[view] [source] [discussion] 2022-05-23 22:22:40
>>bufbup+P9
> At the end of a day, if you ask for a nurse, should the model output a male or female by default?

Randomly pick one.

> Trying to generate a model that's "free of correlative relationships" is impossible because the model would never have the infinitely pedantic input text to describe the exact output image.

Sure, and you can never make a medical procedure 100% safe. Doesn't mean that you don't try to make them safer. You can trim the obvious low hanging fruit though.

replies(3): >>calvin+tc >>pxmpxm+md >>nmfish+WN5
91. tomp+ab[view] [source] 2022-05-23 22:22:49
>>daenz+(OP)
> a tendency for images portraying different professions to align with Western gender stereotypes

There are two possible ways of interpreting "gender stereotypes in professions".

biased or correct

https://www.abc.net.au/news/2018-05-21/the-most-gendered-top...

https://www.statista.com/statistics/1019841/female-physician...

◧◩◪
92. userbi+db[view] [source] [discussion] 2022-05-23 22:23:00
>>ceejay+p2
unstated but very real concerns

I say let people generate their own reality. The sooner the masses realise that ceci n'est pas une pipe , the less likely they are to be swayed by the growing un-reality created by companies like Google.

◧◩◪
93. ma2rte+kb[view] [source] [discussion] 2022-05-23 22:23:59
>>jonny_+X5
Caucasians are overrepresented in internet pictures.
replies(2): >>pxmpxm+Te >>jonny_+zf
◧◩
94. holmes+lb[view] [source] [discussion] 2022-05-23 22:24:01
>>user39+r1
"As we wish it to be" is not totally true, because there are some places where humanity's iconographic reality (which Imagen trains on) differs significantly from actual reality.

One example would be if Imagen draws a group of mostly white people when you say "draw a group of people". This doesn't reflect actual reality. Another would be if Imagen draws a group of men when you say "draw a group of doctors".

In these cases where iconographic reality differs from actual reality, hand-tuning could be used to bring it closer to the real world, not just the world as we might wish it to be!

I agree there's a problem here. But I'd state it more as "new technologies are being held to a vastly higher standard than existing ones." Imagine TV studios issuing a moratorium on any new shows that made being white (or rich) seem more normal than it was! The public might rightly expect studios to turn the dials away from the blatant biases of the past, but even if this would be beneficial the progressive and activist public is generations away from expecting a TV studio to not release shows until they're confirmed to be bias-free.

That said, Google's decision to not publish is probably less about the inequities in AI's representation of reality and more about the AI sometimes spitting out drawings that are offensive in the US, like racist caricatures.

◧◩◪◨
95. ar_lan+Ab[view] [source] [discussion] 2022-05-23 22:25:17
>>jdashg+Y5
You could stipulate that it roll a die based on percentage results - if 70% of Americans are "white", then 70% of the time show a white person - 13% of the time the result should be black, etc.

That's excessively simplified but wouldn't this drop the stereotype and better reflect reality?

replies(2): >>ghayes+Jc >>SnowHi+1d
◧◩◪◨
96. dntrkv+Db[view] [source] [discussion] 2022-05-23 22:25:32
>>pid-1+A8
https://en.wikipedia.org/wiki/Dagger_(mark)
◧◩◪◨
97. astran+Hb[view] [source] [discussion] 2022-05-23 22:26:27
>>umeshu+x7
Google Image Search doesn’t reflect harsh reality when you search for things; it shows you what’s on Pinterest. The same is more likely to apply here than the idea they’re trying to hide something.

There’s no reason to believe the trained model even learns the same statistics as its input dataset. If that’s not an explicit training goal then whatever happens happens. AI isn’t magic or more correct than people.

◧◩◪◨
98. john_y+Mb[view] [source] [discussion] 2022-05-23 22:26:52
>>tines+h8
Please be kinder to yourself. You need to be your own strongest advocate, and that's not incompatible with being humble. You have plenty to contribute to this world, and the vast majority of us appreciate what you have to offer.
replies(1): >>Smoosh+Ve
◧◩◪◨
99. slg+Rb[view] [source] [discussion] 2022-05-23 22:27:22
>>bufbup+P9
This type of bias sounds a lot easier to explain away as a non-issue when we are using "nurse" as the hypothetical prompt. What if the prompt is "criminal", "rapist", or some other negative? Would that change your thought process or would you be okay with the system always returning a person of the same race and gender that statistics indicate is the most likely? Do you see how that could be a problem?
replies(3): >>tines+td >>true_r+Mf >>rpmism+Bh
◧◩◪◨⬒⬓⬔
100. rvnx+Wb[view] [source] [discussion] 2022-05-23 22:27:53
>>ceejay+d6
I think the key is to take the information in this world with a little pinch of salt.

When you do a search on a search engine, the results are biased too, but still, they shouldn't be artificially censored to fit some political views.

I asked one algorithm a few minutes ago (it's called t0pp and it's free to try online, and it's quite fascinating because it's uncensored):

"What is the name of the most beautiful man on Earth?

- He is called Brad Pitt."

==

Is it true in an objective way? Probably not.

Is there an actual answer? Probably yes, there is somewhere a man who scores better than the others.

Is it socially acceptable? Probably not.

The question is:

If you interviewed 100 people on the street and asked the question "What is the name of the most beautiful man on Earth?".

I'm pretty sure you'd get Brad Pitt often coming in.

Now, what about China?

We don't have many examples there; they probably have no clue who Brad Pitt is, and there is probably someone else who is considered more beautiful by over 1B people

(t0pp tells me it's someone called "Zhu Zhu" :D )

==

Two solutions:

1) Censorship

-> Sorry, there is too much bias in the West and we don't want to offend anyone: no answer, or a generic overriding answer that is safe for advertisers but totally useless ("the most beautiful human is you")

2) Adding more examples

-> Work on adding more examples from abroad trying to get the "average human answer".

==

I really prefer solution (2) in the core algorithms and dataset development, rather than going through (1).

(1) is more a choice to make at the stage when you are developing a virtual psychologist or a chat assistant, not when creating AI building blocks.

101. seaman+8c[view] [source] 2022-05-23 22:29:00
>>daenz+(OP)
Yup, this is what happens when people who want headlines nitpick for bullshit in a state-of-the-art model which simply reflects the state of society. Better not to release the model itself than to keep explaining over and over how a model is never perfect.
◧◩◪◨⬒
102. codeth+cc[view] [source] [discussion] 2022-05-23 22:29:25
>>tines+i9
While not essential, I wouldn't exactly call the gender "accidental":

> We investigated sex differences in 473,260 adolescents’ aspirations to work in things-oriented (e.g., mechanic), people-oriented (e.g., nurse), and STEM (e.g., mathematician) careers across 80 countries and economic regions using the 2018 Programme for International Student Assessment (PISA). We analyzed student career aspirations in combination with student achievement in mathematics, reading, and science, as well as parental occupations and family wealth. In each country and region, more boys than girls aspired to a things-oriented or STEM occupation and more girls than boys to a people-oriented occupation. These sex differences were larger in countries with a higher level of women's empowerment. We explain this counter-intuitive finding through the indirect effect of wealth. Women's empowerment is associated with relatively high levels of national wealth and this wealth allows more students to aspire to occupations they are intrinsically interested in.

Source: https://psyarxiv.com/zhvre/ (HN discussion: https://news.ycombinator.com/item?id=29040132)

replies(2): >>daenz+af >>astran+Xg
◧◩◪◨
103. joshcr+gc[view] [source] [discussion] 2022-05-23 22:29:43
>>pid-1+A8
It's just a different mark used to distinguish certain authors; in this case, the paper uses it to denote "core contributors."
◧◩
104. astran+hc[view] [source] [discussion] 2022-05-23 22:29:43
>>6gvONx+x9
The bias on HN is that people who prioritize being nice, or may possibly have humanities degrees or be ultra-libs from SF, are wrong because the correct answer would be cynical and cold-heartedly mechanical.

Other STEM-adjacent communities feel similarly, but I don’t get it from actual in-person engineers much.

replies(1): >>sineno+2A
◧◩◪◨⬒
105. calvin+tc[view] [source] [discussion] 2022-05-23 22:30:48
>>karpie+6b
What if I asked the model to show me a Sunday school photograph of Baptists in the National Baptist Convention?
replies(1): >>rvnx+Tf
106. ccbccc+uc[view] [source] 2022-05-23 22:31:04
>>daenz+(OP)
In short, the generated images are too gender-challenged-challenged and underrepresent the spectrum of new normalcy!
◧◩
107. seaman+wc[view] [source] [discussion] 2022-05-23 22:31:10
>>riffra+X4
yea but now they aren't giving people more data-points to attack them with such nonsense arguments.
◧◩
108. astran+yc[view] [source] [discussion] 2022-05-23 22:31:20
>>Mizza+H4
Gmail doesn’t read your email for ads anymore. They read it to implement spam filters, and good thing too. Having working spam filters is indeed why they make money though.
◧◩
109. SnowHi+Bc[view] [source] [discussion] 2022-05-23 22:31:47
>>tines+t2
It’s the same as with an artist: “hey artist, draw me a nurse.” “Hmm okay, do you want it a guy or girl?” “Don’t ask me, just draw what I’m saying.” The artist can then say: “Okay, but accept my biases.” or “I can’t since your input is ambiguous.”

For a one-shot generative algorithm you must accept the artist’s biases.

replies(1): >>rvnx+rd
◧◩◪◨⬒
110. ghayes+Jc[view] [source] [discussion] 2022-05-23 22:32:31
>>ar_lan+Ab
Is this going to be hand-rolled? Do you change the prompt you pass to the network to reflect the desired outcomes?
◧◩◪◨⬒⬓⬔⧯
111. daenz+Uc[view] [source] [discussion] 2022-05-23 22:34:03
>>nomel+M9
"Is Taiwan a country" also comes to mind.
replies(1): >>rvnx+Rg
◧◩◪◨
112. tines+Vc[view] [source] [discussion] 2022-05-23 22:34:07
>>adrian+2b
> In fact, I wonder if this sort of technology could reduce the harm caused by people with an interest in disgusting images, because no one needs to be harmed for a realistic image to be created. I am creeping myself out with this line of thinking, but it seems like one potential beneficial - albeit disturbing - outcome.

Interesting idea, but is there any evidence that e.g. consuming disturbing images makes people less likely to act out on disturbing urges? Far from catharsis, I'd imagine consumption of such material to increase one's appetite and likelihood of fulfilling their desires in real life rather than to decrease it.

I suppose it might be hard to measure.

◧◩◪◨⬒
113. SnowHi+1d[view] [source] [discussion] 2022-05-23 22:34:32
>>ar_lan+Ab
No, because a user will see a particular image, not the statistical ensemble. It will at times show an Eskimo without a hand because they do statistically exist. But the user definitely does not want that.
◧◩◪◨
114. gknoy+7d[view] [source] [discussion] 2022-05-23 22:35:01
>>adrian+2b
> > ... people could still use it to generate extremely offensive images with little effort. I think people see that as profoundly dangerous.
>
> Do they see it as dangerous? Or just offensive?

I won't speak to whether something is "offensive", but I think that having underlying biases in image classification or generation has very worrying secondary effects, especially given that organizations like law enforcement want to do things like facial recognition. It's not a perfect analogue, but I could easily see some company pitch a sketch-artist-replacement service that generated images based on someone's description. The potential for inherent bias in that makes that kind of thing worrying, especially since the people in charge of buying it are unlikely to care about, or even notice, the caveats.

It does feel like a little bit of a stretch, but at the same time we've also seen such things happen with image classification systems.

◧◩
115. ccbccc+bd[view] [source] [discussion] 2022-05-23 22:35:35
>>swayvi+j6
In discussions like this, I always head for the gray-text comments to enjoy the last crumbs of the common sense in this world.
replies(2): >>ccbccc+ee >>nullc+ne
◧◩◪◨⬒
116. tomp+gd[view] [source] [discussion] 2022-05-23 22:36:41
>>howint+P6
You're confused by the double meaning of the word "bias".

Here we mean mathematical biases.

For example, a good mathematical model will correctly tell you that people in Japan (geographical term) are more likely to be Japanese (ethnic / racial bias). That's not "objectively morally bad", but instead, it's "correct".

replies(2): >>astran+2j >>howint+Tz
◧◩
117. skybri+jd[view] [source] [discussion] 2022-05-23 22:37:12
>>tines+t2
Yes, there is a denominator problem. When selecting a sample "at random," what do you want the denominator to be? It could be "people in the US", "people in the West" (whatever countries you mean by that) or "people worldwide."

Also, getting a random sample of any demographic would be really hard, so no machine learning project is going to do that. Instead you've got a random sample of some arbitrary dataset that's not directly relevant to any particular purpose.

This is, in essence, a design or artistic problem: the Google researchers have some idea of what they want the statistical properties of their image generator to look like. What it does isn't it. So, artistically, the result doesn't meet their standards, and they're going to fix it.

There is no objective, universal, scientifically correct answer about which fictional images to generate. That doesn't mean all art is equally good, or that you should just ship anything without looking at quality along various axes.

◧◩◪◨⬒
118. pxmpxm+md[view] [source] [discussion] 2022-05-23 22:37:24
>>karpie+6b
> Randomly pick one.

How does the model back out the "certain people would like to pretend it's a fair coin toss that a randomly selected nurse is male or female" feature?

It won't be in any representative training set, so you're back to fishing for stock photos on getty rather than generating things.

replies(1): >>shadow+Xh
◧◩◪
119. notaha+od[view] [source] [discussion] 2022-05-23 22:37:32
>>concor+I9
There's definitely a market for autogenerated porn. But automated porn in a Google branded model for general use around stuff that isn't necessarily intended to be pornographic, on the other hand...
replies(1): >>astran+th
◧◩◪
120. renewi+pd[view] [source] [discussion] 2022-05-23 22:37:53
>>concor+I9
Copenhagen ethics (used by most people) require that all negative outcomes of a thing X become yours if you interact with X. It is not sensible to interact with high negativity things unless you are single-issue. It is logical for Google to not attempt to interact with porn where possible.
replies(1): >>dragon+ze
◧◩◪
121. rvnx+rd[view] [source] [discussion] 2022-05-23 22:38:08
>>SnowHi+Bc
Revert to the average representation of a nurse (give no weight to unspecified criteria: gender, age, skin color, religion, country, hairstyle, no preference for whether it's a drawing or a photograph, no information about the year it was made, etc.).

“hey artist, draw me a nurse.”

“Hmm okay, do you want it a guy or girl?”

“Don’t ask me, just draw what I’m saying.”

- Ok, I'll draw you what an average nurse looks like.

- Wait, it's a woman! She wears a nurse blouse and she has a nurse cap.

- Is it bad?

- No.

- Ok, then what's the problem? You asked for something that looked like a nurse but didn't specify anything else.

replies(1): >>SnowHi+Si
◧◩◪◨⬒
122. tines+td[view] [source] [discussion] 2022-05-23 22:38:16
>>slg+Rb
Not the person you responded to, but I do see how someone could be hurt by that, and I want to avoid hurting people. But is this the level at which we should do it? Could skewing search results, i.e. hiding the bias of the real world, give us the impression that everything is fine and we don't need to do anything to actually help people?

I have a feeling that we need to be real with ourselves and solve problems and not paper over them. I feel like people generally expect search engines to tell them what's really there instead of what people wish were there. And if the engines do that, people can get agitated!

I'd almost say that hurt feelings are prerequisite for real change, hard though that may be.

These are all really interesting questions brought up by this technology, thanks for your thoughts. Disclaimer, I'm a fucking idiot with no idea what I'm talking about.

replies(2): >>magica+Qg >>slg+Sh
123. planet+vd[view] [source] 2022-05-23 22:38:28
>>daenz+(OP)
Literally the same thing could be said about Google Images, but Google Images is obviously available to the public.

Google knows this will be an unlimited money generator so they're keeping a lid on it.

replies(1): >>jfoste+sc1
◧◩◪
124. ChadNa+wd[view] [source] [discussion] 2022-05-23 22:38:33
>>ceejay+p2
You know, it wouldn't surprise me if people talking about how black curly hair shouldn't be seen as unprofessional contributed to google thinking there's an association between the concepts of "unprofessional hair" and "black curly hair"
replies(2): >>roboca+By >>nearbu+BT
125. ThrowI+Ad[view] [source] 2022-05-23 22:38:46
>>daenz+(OP)
I'm one who welcomes their reasoning. I don't consider myself a social justice kind of guy, but I'm not keen on the idea that a tool that is supposed to make life better for everyone has a bias towards one segment of society. This is an important issue (bug?) that needs to be resolved, especially since there is absolutely no burning reason to release it before it's ready for general use.
◧◩◪◨⬒
126. layer8+Md[view] [source] [discussion] 2022-05-23 22:39:43
>>mdp202+Qa
Humans overwhelmingly learn meaning by use, not by definition.
replies(1): >>mdp202+de
◧◩
127. nullc+Nd[view] [source] [discussion] 2022-05-23 22:39:57
>>6gvONx+x9
It's wild to me that you'd say that. The people complaining aren't following it up with "so we should make sure to restrict the public from internet access entirely" -- that's what would be required to make your juxtaposition make sense.

Moreover, the model doing things like exclusively producing white people when asked to create images of people home brewing beer is "biased" but it's a bias that presumably reflects reality (or at least the internet), if not the reality we'd prefer. Bias means more than "spam and crap", in the ML community bias can also simply mean _accurately_ modeling the underlying distribution when reality falls short of the author's hopes.

For example, if you're interested in learning about what home brewing is, the fact that it only shows white people would be at least a little unfortunate, since there is nothing inherently white about it and some home brewers aren't white. But if, instead, you just wanted to generate typical home brewing images, doing anything else would produce conspicuously unrepresentative images.

But even ignoring the part of the biases which are debatable or of application-specific impact, saying something is unfortunate and saying people should be denied access are entirely different things.

I'll happily delete this comment if you can bring to my attention a single person who has suggested that we lose access to the internet because of spam and crap who has also argued that the release of an internet-biased ML model shouldn't be withheld.

◧◩◪◨⬒⬓
128. mdp202+de[view] [source] [discussion] 2022-05-23 22:42:22
>>layer8+Md
> Humans overwhelmingly learn meaning by use, not by definition

Preliminarily and provisionally. Then, they start discussing their concepts - it is the very definition of Intelligence.

replies(1): >>layer8+Eg
◧◩◪
129. ccbccc+ee[view] [source] [discussion] 2022-05-23 22:42:33
>>ccbccc+bd
... and to witness the downvoters so that their cowardly disgust towards truth could buy them some extra time in hell :)
◧◩◪
130. nullc+ne[view] [source] [discussion] 2022-05-23 22:43:34
>>ccbccc+bd
Get offline and talk to people in meat-space. You're likely to find them to be much more reasonable. :)
replies(1): >>ccbccc+Fe
◧◩◪◨
131. dragon+ze[view] [source] [discussion] 2022-05-23 22:45:24
>>renewi+pd
> Copenhagen ethics (used by most people)

The idea that most people use any coherent ethical framework (even something as high level and nearly content-free as Copenhagen) much less a particular coherent ethical framework is, well, not well supported by the evidence.

> require that all negative outcomes of a thing X become yours if you interact with X. It is not sensible to interact with high negativity things unless you are single-issue.

The conclusion in the final sentence only makes sense if you use "interact" in a way inconsistent with the description of the Copenhagen interpretation of ethics, because that description is only correct if you count observation as an interaction. By the time you have noted that a thing is "high-negativity", you have observed it and acquired responsibility for its continuation under the Copenhagen interpretation; you cannot avoid that by choosing not to interact once you have observed it.

replies(2): >>renewi+hf >>Poigna+rg1
◧◩◪◨
132. ccbccc+Fe[view] [source] [discussion] 2022-05-23 22:45:44
>>nullc+ne
Yep, the meat-space is generally a bit less woke than HN, so thanks for the reminder ))
replies(1): >>Semant+Ch2
◧◩◪◨
133. renewi+He[view] [source] [discussion] 2022-05-23 22:46:01
>>curiou+h4
It's not meant to prevent offence to you. It is meant to be a "good product" by the metrics of its creators. And quite simply, anyone here incapable of making the thing is unlikely to have a clear image of what a "good product" is in this context. More power to them for having a clear vision of what they're building.
◧◩◪◨
134. pxmpxm+Te[view] [source] [discussion] 2022-05-23 22:47:28
>>ma2rte+kb
This. I would imagine it heavily correlates with things like income and GDP per capita.
◧◩◪◨⬒
135. Smoosh+Ve[view] [source] [discussion] 2022-05-23 22:47:29
>>john_y+Mb
Agreed. They are valid points clearly stated and a valuable contribution to the discussion.
◧◩
136. ben_w+3f[view] [source] [discussion] 2022-05-23 22:48:17
>>tines+t2
This sounds like descriptivism vs prescriptivism. In English (native language) I’m a descriptivist, in all other languages I have to tell myself to be a prescriptivist while I’m actively learning and then switch back to descriptivism to notice when the lessons were wrong or misleading.
◧◩◪◨⬒⬓
137. daenz+af[view] [source] [discussion] 2022-05-23 22:48:43
>>codeth+cc
The "Gender Equality Paradox"... there's a fascinating episode[0] about it. It's incredible how unscientific and ideologically-motivated one side comes off in it.

0. https://www.youtube.com/watch?v=_XsEsTvfT-M

◧◩◪◨⬒
138. paisaw+df[view] [source] [discussion] 2022-05-23 22:48:58
>>tines+i9
How do you know this? Because you can, in your mind, divide the function of a nurse from the statistical reality of nursing?

Are the logical divisions you make in your mind really indicative of anything other than your arbitrary personal preferences?

replies(1): >>tines+sg
◧◩◪◨⬒
139. renewi+hf[view] [source] [discussion] 2022-05-23 22:49:30
>>dragon+ze
I'm sure you are capable of steelmanning the argument.
replies(2): >>dragon+Cv >>quickt+F31
◧◩◪◨⬒⬓
140. holmes+kf[view] [source] [discussion] 2022-05-23 22:49:54
>>curiou+h9
> I think that the parents' job is to undo all of the harm done by society and instill in their children the "correct" values.

Far from being too cynical, this is too optimistic.

The vast majority of parents try to instill the value "do not use heroin." And yet society manages to do that harm on a large scale. There are other examples.

◧◩
141. minima+sf[view] [source] [discussion] 2022-05-23 22:50:59
>>joshcr+W4
T5 was open-sourced on release (up to 11B params): https://github.com/google-research/text-to-text-transfer-tra...

It is also available via Hugging Face transformers.

However, the paper mentions T5-XXL is 4.6B, which doesn't fit any of the checkpoints above, so I'm confused.
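
For anyone who wants to poke at the public checkpoints, here's a minimal sketch of loading one through Hugging Face transformers. The hub names ("t5-small" through "t5-11b") are the standard public releases; whichever T5-XXL variant Imagen actually used doesn't seem to be among them.

  # Minimal sketch: load a public T5 checkpoint and get text embeddings,
  # roughly the role the frozen text encoder plays in Imagen.
  from transformers import T5Tokenizer, T5EncoderModel

  tokenizer = T5Tokenizer.from_pretrained("t5-3b")
  encoder = T5EncoderModel.from_pretrained("t5-3b")  # encoder only, no decoder

  inputs = tokenizer("a photo of a nurse", return_tensors="pt")
  embeddings = encoder(**inputs).last_hidden_state  # shape (1, seq_len, hidden_size)
  print(embeddings.shape)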

◧◩◪◨
142. jonny_+zf[view] [source] [discussion] 2022-05-23 22:51:58
>>ma2rte+kb
Right, that's the likely cause of the bias.
143. tyrust+Lf[view] [source] 2022-05-23 22:53:52
>>daenz+(OP)
From the HN rules:

>Eschew flamebait. Avoid unrelated controversies and generic tangents.

They provided a pretty thorough overview (nearly 500 words) of the multiple reasons why they are showing caution. You picked out the one that happened to bother you the most and have posted a misleading claim that the tech is being withheld entirely because of it.

◧◩◪◨⬒
144. true_r+Mf[view] [source] [discussion] 2022-05-23 22:53:57
>>slg+Rb
Cultural biases aren’t uniform across nations. If a prompt returns Caucasians for nurses and other races for criminals, most people in my country would not read that as racism, simply because there are not, and never have been, enough Caucasians resident here for anyone to build significant race theories about them.

This is a far cry from say the USA where that would instantly trigger a response since until the 1960s there was a widespread race based segregation.

◧◩◪◨⬒⬓
145. rvnx+Tf[view] [source] [discussion] 2022-05-23 22:54:38
>>calvin+tc
The pictures I got from a similar model when asking for a "sunday school photograph of baptists in the National Baptist Convention": https://ibb.co/sHGZwh7
replies(1): >>calvin+fg
◧◩
146. colord+Vf[view] [source] [discussion] 2022-05-23 22:54:57
>>6gvONx+x9
Why is it wild? How is it contradictory?
replies(1): >>6gvONx+6j
◧◩◪◨⬒⬓⬔
147. calvin+fg[view] [source] [discussion] 2022-05-23 22:58:52
>>rvnx+Tf
and how do we _feel_ about that outcome?
replies(1): >>andyba+Hf9
◧◩◪◨⬒⬓
148. tines+sg[view] [source] [discussion] 2022-05-23 22:59:55
>>paisaw+df
No, because there's at least one male nurse.
replies(1): >>paisaw+Li
◧◩◪◨⬒
149. paisaw+Bg[view] [source] [discussion] 2022-05-23 23:01:03
>>colinm+o8
> subconscious bias due to historical prejudice does have a large effect on society.

The quality of the evidence for this, as with almost all social science and much of psychology, is extremely low, bordering on certified opinion. I would love to understand why you think otherwise.

> Obviously there are things with much larger effects, that doesn't mean that this doesn't exist.

What a hedge. How should we estimate the size of this effect, so that we can accurately measure whether/when the self-appointed hall monitors are doing more harm than good?

◧◩◪◨⬒⬓⬔
150. layer8+Eg[view] [source] [discussion] 2022-05-23 23:01:22
>>mdp202+de
Most humans don’t do that for most things they have a notion of in their head. It would be much too time-consuming to start discussing the meaning of even a significant fraction of them. For a rough reference point, the English language has over 150,000 words, each of which you could discuss the meaning of and try to define. Not to speak of the difficulty of making that set of definitions noncircular.
replies(1): >>mdp202+Br1
◧◩◪◨⬒⬓
151. magica+Qg[view] [source] [discussion] 2022-05-23 23:03:41
>>tines+td
> Could skewing search results, i.e. hiding the bias of the real world

Which real world? The population you sample from is going to make a big difference. Do you expect it to reflect your day to day life in your own city? Own country? The entire world? Results will vary significantly.

replies(2): >>sangno+dh >>tines+Hh
◧◩◪◨⬒⬓⬔⧯▣
152. rvnx+Rg[view] [source] [discussion] 2022-05-23 23:03:41
>>daenz+Uc
What would a human who can speak freely, without morals or fear of being judged, say on average after having ingested all the information on the internet?
◧◩◪◨⬒⬓
153. astran+Xg[view] [source] [discussion] 2022-05-23 23:04:33
>>codeth+cc
If you ask it to generate “nurse” surely the problem isn’t that it’s going to just generate women, it’s that it’s going to give you women in those Halloween sexy nurse costumes.

If it did, would you believe that’s a real representative nurse because an image model gave it to you?

◧◩◪◨
154. sangno+Yg[view] [source] [discussion] 2022-05-23 23:04:46
>>bufbup+P9
> At the end of a day, if you ask for a nurse, should the model output a male or female by default?

This depends on the application. As an example, it would be a problem if it's used as a CV-screening app that's implicitly down-ranking male-applicants to nurse positions, resulting in fewer interviews for them.

◧◩◪◨⬒
155. Shaani+1h[view] [source] [discussion] 2022-05-23 23:04:59
>>joshcr+5a
That's true for some things, but the "gender bias for some professions" is likely to just be reflecting reality.
replies(1): >>joshcr+ts
◧◩◪◨⬒⬓⬔
156. sangno+dh[view] [source] [discussion] 2022-05-23 23:06:41
>>magica+Qg
For AI, "real world" is likely "the world, as seen by Silicon Valley."
◧◩◪◨⬒⬓
157. pxmpxm+nh[view] [source] [discussion] 2022-05-23 23:08:00
>>contin+7a
Ha! Different pxmpxm on GitHub, though, I'm afraid.
replies(1): >>contin+hi
◧◩◪◨
158. astran+th[view] [source] [discussion] 2022-05-23 23:08:40
>>notaha+od
That’s a difficult product because porn is very personalized and if the product is just a little off in latent space it’s going to turn you off.

Also, people have been commenting assuming Google doesn’t want to offend their users or non-users, but they also don’t want to offend their own staff. If you run a porn company you need to hire people okay with that from the start.

◧◩◪◨⬒
159. rpmism+Bh[view] [source] [discussion] 2022-05-23 23:09:18
>>slg+Rb
It's an unfortunate reflection of reality. There are three possible outcomes:

1. The model provides a reflection of reality, as politically inconvenient and hurtful as it may be.

2. The model provides an intentionally obfuscated version with either random traits or non correlative traits.

3. The model refuses to answer.

Which of these is ideal to you?

replies(1): >>slg+gj
◧◩◪◨⬒
160. Ludwig+Ch[view] [source] [discussion] 2022-05-23 23:09:19
>>tines+i9
Not really what? How does that contradict what I've said?
◧◩◪◨⬒⬓⬔
161. tines+Hh[view] [source] [discussion] 2022-05-23 23:09:39
>>magica+Qg
I'd say it doesn't actually matter, as long as the population sampled is made clear to the user.

If I ask for pictures of Japanese people, I'm not shocked when all the results are of Japanese people. If I asked for "criminals in the United States" and all the results are black people, that should concern me, not because the data set is biased but because the real world is biased and we should do something about that. The difference is that I know what set I'm asking for a sample from, and I can react accordingly.

replies(3): >>magica+Al >>nyolfe+Xm >>jfoste+kb1
◧◩◪◨⬒
162. Ludwig+Rh[view] [source] [discussion] 2022-05-23 23:10:31
>>mdp202+Qa
Definition is an entirely artificial construct and doesn't equate to meaning. Definition depends on other words that you also have to understand.
replies(1): >>mdp202+sf1
◧◩◪◨⬒⬓
163. slg+Sh[view] [source] [discussion] 2022-05-23 23:10:32
>>tines+td
>Could skewing search results, i.e. hiding the bias of the real world

Your logic seems to rest on this assumption which I don't think is justified. "Skewing search results" is not the same as "hiding the biases of the real world". Showing the most statistically likely result is not the same as showing the world how it truly is.

A generic nurse is statistically going to be female most of the time. However, a model that returns every nurse as female is not showing the real world as it is. It is exaggerating and reinforcing the bias of the real world. It inherently requires a more advanced model to actually represent the real world. I think it is reasonable for the creators to avoid sharing models known to not be smart enough to avoid exaggerating real world biases.

replies(1): >>roboca+Hs
◧◩◪◨⬒⬓
164. shadow+Xh[view] [source] [discussion] 2022-05-23 23:11:22
>>pxmpxm+md
Yep, that's the hard problem. Google is not comfortable releasing the API for this until they have it solved.
replies(1): >>zarzav+El
◧◩◪◨⬒⬓⬔⧯
165. astran+7i[view] [source] [discussion] 2022-05-23 23:12:41
>>nomel+M9
These debates often seem to center around “most X in the world” questions, but I’d expect all of those to be unanswerable if you wanted to know the truth. Who’s done a study on it?

In this case you’re (mostly) getting keyword matches and so it’s answering a different question than the one you asked. It would be helpful if a question answering AI gave you the question it decided to answer instead of just pretending it paid full attention to you.

◧◩◪◨⬒⬓⬔
166. contin+hi[view] [source] [discussion] 2022-05-23 23:13:37
>>pxmpxm+nh
That's almost poetic. Watch them attempt to make sense of the situation.
◧◩◪
167. colinm+si[view] [source] [discussion] 2022-05-23 23:14:38
>>concor+I9
Same reason pornhub is a top 10 most visited website but barely makes any money. Being associated with porn is not good for business.
◧◩◪◨⬒⬓⬔
168. paisaw+Li[view] [source] [discussion] 2022-05-23 23:16:48
>>tines+sg
Please don't waste time with this kind of obtuse response. This fact says nothing about why nursing is a female-dominated career. You claim to know that this is just an accidental fact of history or society -- how do you know that?
replies(1): >>tines+Uk
◧◩◪◨
169. SnowHi+Si[view] [source] [discussion] 2022-05-23 23:17:15
>>rvnx+rd
The average nurse has three halves of a tit.
replies(1): >>mdp202+0t1
◧◩◪◨⬒⬓
170. astran+2j[view] [source] [discussion] 2022-05-23 23:19:07
>>tomp+gd
Although what you stated is true, it’s actually a short form of a commonly stated untrue statement “98% of Japan is ethnically Japanese”.

1. that comes from a report from 2006.

2. it’s a misreading, it means “Japanese citizens”, and the government in fact doesn’t track ethnicity at all.

Also, the last time I was in Japan (Jan ‘20) there were literally ten times more immigrants everywhere than my previous trip. Japan is full of immigrants from the rest of Asia these days. They all speak perfect Japanese too.

◧◩◪
171. 6gvONx+6j[view] [source] [discussion] 2022-05-23 23:19:44
>>colord+Vf
If these models spit out the data they were trained on and the training data isn’t representative of reality, then they won’t spit out content that’s representative of reality either.

So people shouldn’t say ‘these concerns are just woke people doing dumb woke stuff, but the model is just reflecting reality.’

◧◩◪◨⬒⬓
172. cgreal+8j[view] [source] [discussion] 2022-05-23 23:20:13
>>curiou+h9
Isn't that putting an undue load on parents?

It seems extremely unfair that parents of young black men should have to work extra hard to tell their kids they're not destined to be criminals. Hell, it's not fair on parents of blonde girls to have to tell their kids they don't have to be just dumb and pretty.

(note: I am deliberately picking bad stereotypes that are pervasive in our culture... I am not in any way suggesting those are true.)

◧◩◪◨⬒⬓
173. slg+gj[view] [source] [discussion] 2022-05-23 23:20:53
>>rpmism+Bh
What makes you think those are the only options? Why can't we have an option that the model returns a range of different outputs based off a prompt?

A model that returns 100% of nurses as female might be statistically more accurate than a model that returns 50% of nurses as female, but it is still not an accurate reflection of the real world. I agree that the model shouldn't return a male nurse 50% of the time. Yet an accurate model needs to be able to occasionally return a male nurse without being directly prompted for a "male nurse". Anything else would also be inaccurate.

replies(1): >>rpmism+rj
◧◩◪◨⬒⬓⬔
174. rpmism+rj[view] [source] [discussion] 2022-05-23 23:22:06
>>slg+gj
So, the model should have a knowledge of political correctness, and return multiple results if the first choice might reinforce a stereotype?
replies(1): >>slg+hk
◧◩◪◨⬒
175. ceeplu+2k[view] [source] [discussion] 2022-05-23 23:28:06
>>howint+P6
Western liberal culture says discriminating against one set of minorities to benefit another (affirmative action) is a good thing. What constitutes a racial and ethnic bias is not objective. And therefore Google shouldn't pretend like it is either.

> from a moral realist perspective we can still objectively judge those cultural norms to be better or worse than alternatives

No, because depending on what set of values you have, it is easy to say that one set of biases is better than another. The entire point is that it should not be Google's role to make that judgement - people should be able to do it for themselves.

◧◩
176. pshc+5k[view] [source] [discussion] 2022-05-23 23:28:23
>>tines+t2
I think it is problematic, yes, to produce a tool trained on data from the past that reinforces old stereotypes. We can’t just handwave it away as being a reflection of its training data. We would like it to do better by humanity. Fortunately the AI people are well aware of the insidious nature of these biases.
◧◩◪◨⬒⬓⬔⧯
177. slg+hk[view] [source] [discussion] 2022-05-23 23:29:24
>>rpmism+rj
I never said anything about political correctness. You implied that you want a model that "provides a reflection of reality". All nurses being female is not "a reflection of reality". It is a distortion of reality because the model doesn't actually understand gender or nurses.
replies(1): >>rpmism+WM
◧◩◪◨⬒⬓⬔⧯
178. tines+Uk[view] [source] [discussion] 2022-05-23 23:35:37
>>paisaw+Li
I meant "accidental" in the Aristotelian sense: https://plato.stanford.edu/entries/essential-accidental/
replies(1): >>paisaw+ox
◧◩◪◨
179. pshc+7l[view] [source] [discussion] 2022-05-23 23:37:19
>>bufbup+P9
Perhaps to avoid this issue, future versions of the model would throw an error like “bias leak: please specify a gender for the nurse at character 32”
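
Something like this toy check, purely as an illustration (the word lists, error format, and function name are all made up):

  # Hypothetical prompt validator that refuses under-specified person nouns.
  PERSON_WORDS = {"nurse", "doctor", "ceo", "teacher", "criminal"}
  GENDER_WORDS = {"male", "female", "man", "woman", "nonbinary"}

  def check_prompt(prompt: str) -> None:
      tokens = [t.strip(".,!?") for t in prompt.lower().split()]
      for word in tokens:
          if word in PERSON_WORDS and not GENDER_WORDS & set(tokens):
              offset = prompt.lower().index(word)
              raise ValueError(
                  f"bias leak: please specify a gender for the {word} "
                  f"at character {offset}")

  check_prompt("an oil painting of a male nurse")  # passes
  # check_prompt("an oil painting of a nurse")     # raises the error above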
◧◩◪◨⬒⬓⬔⧯
180. magica+Al[view] [source] [discussion] 2022-05-23 23:40:56
>>tines+Hh
> If I asked for "criminals in the United States" and all the results are black people, that should concern me, not because the data set is biased

Well the results would unquestionably be biased. All results being black people wouldn't reflect reality at all, and hurting feelings to enact change seems like a poor justification for incorrect results.

> I'd say it doesn't actually matter, as long as the population sampled is made clear to the user.

Ok, and let's say I ask for "criminals in Cheyenne Wyoming" and it doesn't know the answer to that, should it just do its best to answer? Seem risky if people are going to get fired up about it and act on this to get "real change".

That seems like a good parallel to what we're talking about here, since it's very unlikely that crime statistics were fed into this image generating model.

◧◩◪◨⬒⬓⬔
181. zarzav+El[view] [source] [discussion] 2022-05-23 23:41:30
>>shadow+Xh
But why is it a problem? The AI is just a mirror showing us ourselves. That’s a good thing. How does it help anyone to make an AI that presents a fake world so that we can pretend that we live in a world that we actually don’t? Disassociation from reality is more dangerous than bias.
replies(3): >>shadow+Ln >>astran+Nx >>Daishi+XH
◧◩◪◨⬒⬓⬔⧯
182. nyolfe+Xm[view] [source] [discussion] 2022-05-23 23:53:02
>>tines+Hh
> If I asked for "criminals in the United States" and all the results are black people,

curiously, this search actually only returns white people for me on GIS

◧◩◪◨⬒⬓⬔⧯
183. shadow+Ln[view] [source] [discussion] 2022-05-23 23:59:02
>>zarzav+El
> The AI is just a mirror showing us ourselves.

That's one hypothesis.

184. sineno+Qq[view] [source] 2022-05-24 00:24:12
>>daenz+(OP)
> Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

Indeed it is. Consider this an early, toy version of the coming political struggle over ownership of the AI scientists and AI engineers of the near future, that is, of generally capable models.

I do think the public should have access to this technology, given so much is at stake. Or at least the scientists should be completely, 24/7, open about their R&D. Every prompt that goes into these models should be visible to everyone.

◧◩◪◨⬒⬓
185. joshcr+ts[view] [source] [discussion] 2022-05-24 00:38:15
>>Shaani+1h
Don't really know that, either. They said they didn't do an empirical analysis on it. For example, it may show a few male nurses for hundreds of prompts or it may show none for thousands. They don't give examples. Hopefully they release a paper showing the biases because that would be an interesting discussion.
◧◩◪◨⬒⬓⬔
186. roboca+Hs[view] [source] [discussion] 2022-05-24 00:40:43
>>slg+Sh
> I think it is reasonable for the creators to avoid sharing models known to not be smart enough to avoid exaggerating real world biases.

Every model will have some random biases. Some of those random biases will undesirably exaggerate the real world. Every model will undesirably exaggerate something. Therefore no model should be shared.

Your goal is nice, but impractical?

replies(2): >>slg+3w >>barney+A91
◧◩◪◨⬒⬓
187. dragon+Cv[view] [source] [discussion] 2022-05-24 01:06:37
>>renewi+hf
The problem is that, were I inclined to do that, anything I would adjust to make it more true also makes it less relevant.

“There exists an ethical framework—not the Copenhagen interpretation —to which some minority of the population adheres in which trying and failing to a correct a problem incurs retroactive blame for the existence of the problem but seeing it and just saying ‘sucks, but not my problem’ does not,“ is probably true, but not very relevant.

It's logical for Google to avoid involvement with porn, and to be seen doing so, because even though porn is popular involvement with it is nevertheless politically unpopular, and Google’s business interest is in not making itself more attractive as a political punching bag. The popularity of Copenhagen ethics (or their distorted cousins) don't really play into it, just self interest.

◧◩◪◨⬒⬓⬔⧯
188. slg+3w[view] [source] [discussion] 2022-05-24 01:11:18
>>roboca+Hs
Fittingly, your comment falls into the same criticism I had of the model. It shows a refusal/inability to engage with the full complexities of the situation.

I said "It is reasonable... to avoid sharing models". That is an acknowledgement that the creators are acting reasonably. It does not imply anything as extreme as "no model should be shared". The only way to get from A to B there is to assume that I think there is only one reasonable response and every other possible reaction is unreasonable. Doesn't that seem like a silly assumption?

replies(1): >>roboca+S61
◧◩◪
189. sineno+Zw[view] [source] [discussion] 2022-05-24 01:19:28
>>karpie+b4
> It depends on whether you'd like the model to learn casual or correlative relationships.

I expect that in the practical limit of achievable scale, the regularization pressure inherent to the process of training these models converges to https://en.wikipedia.org/wiki/Minimum_description_length and the correlative relationships get optimized away, leaving mostly the true causal relationships inherent to the data-generating process.

◧◩◪◨⬒⬓⬔⧯▣
190. paisaw+ox[view] [source] [discussion] 2022-05-24 01:22:31
>>tines+Uk
Yes I understand that. That is only a description of what mental arithmetic you can do if you define your terms arbitrarily conveniently.

"It is possible for a man to provide care" is not the same statement as "it is possible for a sexually dimorphic species in a competitive, capitalistic society (...add more qualifications here) to develop a male-dominated caretaking role"

You're just asserting that you could imagine male nurses without creating a logical contradiction, unlike e.g. circles that have corners. That doesn't mean nursing could be a male-dominated industry under current constraints.

◧◩◪◨⬒⬓⬔⧯
191. astran+Nx[view] [source] [discussion] 2022-05-24 01:25:45
>>zarzav+El
In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe." "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play" Minsky shut his eyes, "Why do you close your eyes?", Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened.

The AI doesn’t know what’s common or not. You don’t know if it’s going to be correct unless you’ve tested it. Just assuming whatever it comes out with is right is going to work as well as asking a psychic for your future.

replies(1): >>zarzav+QM
◧◩◪◨⬒
192. astran+ey[view] [source] [discussion] 2022-05-24 01:29:37
>>colinm+o8
> Yes actually, subconscious bias due to historical prejudice does have a large effect on society.

The evidence for implicit bias is pretty weak and IIRC is better explained by people having explicit bias but lying about it when asked.

(Note: this is even worse.)

◧◩◪◨
193. roboca+By[view] [source] [discussion] 2022-05-24 01:33:37
>>ChadNa+wd
You really are not helping that cause.

As a foreigner[], your point confused me anyway, and doing a Google for cultural stuff usually gets variable results. But I did laugh at many of the comments here https://www.reddit.com/r/TooAfraidToAsk/comments/ufy2k4/why_...

[] probably, New Zealand, although foreigner is relative

replies(1): >>ChadNa+5K
◧◩◪◨⬒⬓
194. howint+Tz[view] [source] [discussion] 2022-05-24 01:44:46
>>tomp+gd
Well that's not the issue here, the problem is the examples like searches for images of "unprofessional hair" returning mostly Black people in the results. That is something we can judge as objectively morally bad.
replies(1): >>tomp+K31
◧◩◪
195. sineno+2A[view] [source] [discussion] 2022-05-24 01:45:57
>>astran+hc
Being nice is alright, but why is this fundamental drive so often the uninspiring explanation behind yet another incursion on individual freedom, even when exercising that freedom doesn't bring any real harm to anyone involved?

Maybe the engineers correctly conclude that voicing this concern without the veil of anonymity would do nothing good for their humble livelihood, and thus you don't hear it from them in person.

◧◩
196. sineno+HA[view] [source] [discussion] 2022-05-24 01:51:54
>>meetup+83
This is a stellar idea begging for a new dataset.
◧◩
197. sineno+5B[view] [source] [discussion] 2022-05-24 01:55:05
>>Mockap+f1
You really need a decent infiniband-linked cluster to train large models.
◧◩◪◨
198. JohnBo+uG[view] [source] [discussion] 2022-05-24 02:56:17
>>nomel+W7
Pardon? The snake made of corn most certainly does not reflect reality: snakes made out of corn do not exist.
◧◩◪◨⬒⬓⬔⧯
199. Daishi+XH[view] [source] [discussion] 2022-05-24 03:13:19
>>zarzav+El
The AI is a mirror of the text and image corpora it was presented, as parsed and sanitized by the team in question.
◧◩◪◨⬒
200. ChadNa+5K[view] [source] [discussion] 2022-05-24 03:43:06
>>roboca+By
Haha. I've got some personal experience with that one. I used to live in a house with many other people, and one girl was Rastafarian and from Jamaica and had dreadlocks, and another girl in the house (who wasn't black) thought that her hairstyle was very offensive. We had to have several conflict resolution meetings about it.

As silly as it seemed, I do think everyone is entitled to their own opinion and I respect the anti-dreadlocks girl for standing up for what she believed in even when most people were against her.

replies(1): >>roboca+UZ3
◧◩◪◨⬒⬓⬔⧯▣
201. zarzav+QM[view] [source] [discussion] 2022-05-24 04:14:16
>>astran+Nx
The model makes inferences about the world from training data. When it sees more female nurses than male nurses in its training set, it infers that most nurses are female. This is a correct inference.

If they were to weight the training data so that there were an equal number of male and female nurses, then it may well produce male and female nurses with equal probability, but it would also learn an incorrect understanding of the world.

That is quite distinct from weighting the data so that it has a greater correspondence to reality. For example, if Africa is not represented well then weighting training data from Africa more strongly is justifiable.

The point is, it’s not a good thing for us to intentionally teach AIs a world that is idealized and false.

As these AIs work their way into our lives it is essential that they reproduce the world in all of its grit and imperfections, lest we start to disassociate from reality.

Chinese media (or insert your favorite unfree regime) also presents China as a utopia.
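
To make that distinction concrete, here's a rough sketch of what "weighting the data toward reality" could look like, with made-up attribute labels and target frequencies (my own illustration, not anything Google has described):

  from collections import Counter

  def reweight(examples, attribute, target_freqs):
      """Give each example a sampling weight so that the attribute's
      distribution matches externally estimated real-world frequencies,
      rather than the skewed frequencies of the scraped dataset."""
      counts = Counter(ex[attribute] for ex in examples)
      n = len(examples)
      return [target_freqs[ex[attribute]] / (counts[ex[attribute]] / n)
              for ex in examples]

  # Toy data: scraped "nurse" images are 95% female, while (hypothetically)
  # workforce statistics put the real split nearer 88/12.
  examples = [{"gender": "female"}] * 95 + [{"gender": "male"}] * 5
  weights = reweight(examples, "gender", {"female": 0.88, "male": 0.12})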

replies(2): >>astran+AN >>shadow+4x1
◧◩◪◨⬒⬓⬔⧯▣
202. rpmism+WM[view] [source] [discussion] 2022-05-24 04:14:57
>>slg+hk
A majority of nurses are women, therefore a woman would be a reasonable representation of a nurse. Obviously that's not a helpful stereotype, because male nurses exist and face challenges due to not fitting the stereotypes. The model is dumb, and outputs what it's seen. Is that wrong?
replies(1): >>webmav+y58
◧◩◪
203. hda2+pN[view] [source] [discussion] 2022-05-24 04:19:49
>>barred+z2
Google AI researchers don't have the final say in what gets published and what doesn't. I think there was a huge controversy when people learned about it last year.
◧◩◪◨⬒⬓⬔⧯▣▦
204. astran+AN[view] [source] [discussion] 2022-05-24 04:22:33
>>zarzav+QM
> The model makes inferences about the world from training data. When it sees more female nurses than male nurses in its training set, if infers that most nurses are female. This is a correct inference.

No it is not, because you don’t know whether it has been shown each of its samples the same number of times, or whether it overweighted some of its samples more than others. There are normal reasons both of these would happen.

◧◩◪
205. nearbu+HR[view] [source] [discussion] 2022-05-24 05:06:45
>>jonny_+X5
I don't think we'd want the model to reflect the global statistics. We'd usually want it to reflect our own culture by default, unless it had contextual clues to do something else.

For example, the most eaten foods globally are maize, rice, wheat, cassava, etc. If it always depicted foods matching the global statistics, it wouldn't be giving most users what they expected from their prompt. American users would usually expect American foods, Japanese users would expect Japanese foods, etc.

> Does a bias towards lighter skin represent reality? I was under the impression that Caucasians are a minority globally.

Caucasians specifically are a global minority, but lighter skinned people are not, depending of course on how dark you consider skin to be "lighter skin". Most of the world's population is in Asia, so I guess a model that was globally statistically accurate would show mostly people from there.

◧◩◪◨
206. nearbu+BT[view] [source] [discussion] 2022-05-24 05:27:21
>>ChadNa+wd
That's exactly what's happening. Doing the search from the article of "unprofessional hair for work" brings up images with headlines like "It's ridiculous to say that black women's hair is unprofessional". (In addition to now bringing up images from that article itself and other similar articles comparing Google Images searches.)
replies(1): >>ceejay+Wd2
207. dclowd+JZ[view] [source] 2022-05-24 06:32:41
>>daenz+(OP)
Even as a pretty left leaning person, I gotta agree. We should see AI’s pollution by human shortcoming akin to the fact that our world is the product of many immoralities that came before us. It sucks that they ever existed, but we should understand that the results are, by definition, a product of the past, and let them live in that context.
◧◩◪
208. alexb_+K11[view] [source] [discussion] 2022-05-24 06:53:37
>>godels+F4
Well, if you showed that pixelated image to someone who has never seen Obama - would they make him white? I think so.
◧◩◪◨⬒⬓
209. quickt+F31[view] [source] [discussion] 2022-05-24 07:09:56
>>renewi+hf
Maybe: most people's morals require that all negative outcomes of a thing X become yours if you interact with X.

I am not sure of the evidence but that would seem almost right.

Except, for example, a story I read where a couple lost their housing deposit due to a payment timing issue. They used a lawyer and were not doing anything “fancy” like buying via a holding company. They interacted with “buying a house”, so is this just tough shit because they interacted with X?

That sounds like the original Bitcoin “not your keys not your coin” kind of morality.

I don’t think I can figure out the steel man.

◧◩◪◨⬒⬓⬔
210. tomp+K31[view] [source] [discussion] 2022-05-24 07:10:48
>>howint+Tz
Did you see the image in the linked article? Clearly the “unprofessional hair” are people with curly hair. Some are white! It’s not the algorithm’s fault that P(curly|black) > P(curly|white).
replies(1): >>howint+Uv2
◧◩◪◨⬒⬓⬔⧯▣
211. roboca+S61[view] [source] [discussion] 2022-05-24 07:40:59
>>slg+3w

  “When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

  ’The question is,’ said Alice, ‘whether you can make words mean so many different things.’

  ’The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.”
◧◩◪◨⬒⬓⬔⧯
212. barney+A91[view] [source] [discussion] 2022-05-24 08:09:53
>>roboca+Hs
> Your goal is nice, but impractical?

If the only way to do AI is to encode racism etc, then we shouldn't be doing AI at all.

◧◩◪◨⬒⬓⬔⧯
213. jfoste+kb1[view] [source] [discussion] 2022-05-24 08:23:42
>>tines+Hh
In a way, if the model brings back an image for "criminals in the United States" that isn't based on the statistical reality, isn't it essentially complicit in sweeping a major social issue under the rug?

We may not like what it shows us, but blindfolding ourselves is not the solution to that problem.

replies(1): >>webmav+448
◧◩
214. jfoste+sc1[view] [source] [discussion] 2022-05-24 08:33:02
>>planet+vd
Given that there's already many competing models in this space prior to any of them having been brought to market, it seems more likely that it will be commoditized.
replies(1): >>planet+1S6
◧◩◪◨⬒⬓
215. mdp202+sf1[view] [source] [discussion] 2022-05-24 09:05:40
>>Ludwig+Rh
You are thinking of the literal definition - the one "made of literal letters".

A mental definition is that "artificial" construct (a product of internal processing) made of relations that reconstructs a meaning. Such an ontology is logical - "this is that". (It would not be made of memories, which are processed and deconstructed.)

Concepts are internally refined: their "implicit" definition (a posterior reading of the corresponding mental low level) is refined.

◧◩◪◨⬒
216. Poigna+rg1[view] [source] [discussion] 2022-05-24 09:15:10
>>dragon+ze
> The idea that most people use any coherent ethical framework (even something as high level and nearly content-free as Copenhagen) much less a particular coherent ethical framework is, well, not well supported by the evidence.

I don't have any evidence, but my personal experience is that it feels correct, at least on the internet.

People seem to have a "you touch it, you take responsibility for it" mindset regarding ethical issues. I think it's pretty reasonable to assume that Google execs are assuming "If anything bad happens because of AI, we'll be blamed for it".

◧◩◪
217. drdeca+or1[view] [source] [discussion] 2022-05-24 10:58:41
>>karpie+b4
The meaning of the word "nurse" is determined by how the word "nurse" is used and understood.

Perhaps what "nurse" means isn't what "nurse" should mean, but what people mean when they say "nurse" is what "nurse" means.

◧◩◪◨⬒⬓⬔⧯
218. mdp202+Br1[view] [source] [discussion] 2022-05-24 10:59:55
>>layer8+Eg
(Mental entities number far more than the hundred thousand, through composition, cartesian combination, etc. So-called "protocols" (after logical positivism) are part of them, relating further entities to space and time. Also, by speaking of "circular definitions" you are, like others, confusing mental definitions with formal definitions.)

So? Draw the consequences.

Following what was said, you are stating that "a staggeringly large number of people are unintelligent". Well, ok, that was noted. A scholium: if they are unintelligent, they should refrain from expressing judgement (you are really stating their non-judgement), so why all the actual expression? If they are unintelligent actors, they are liabilities, so why the overwhelming employment in the job market?

Thing is, however unintelligent you depict them quantitatively, the internal processing that constitutes intelligence proceeds in many people even when scarce, even when choked by some counterproductive bad formation - processing is the natural functioning. And then, the right Paretian tail will "do the job" that the vast remainder will not, and process notions actively (more, "encouragingly" - the process is importantly unconscious, as many low-level layers are) and proficiently.

And the very Paretian prospect will reveal that there will be a number of shallow takes, largely shared, on some idea, and other far more refined takes, more rare, on the same idea. That shows you the distinction between "use" and the asymptotic approximation to meanings achieved by intellectual application.

◧◩◪◨⬒
219. mdp202+0t1[view] [source] [discussion] 2022-05-24 11:11:46
>>SnowHi+Si
Is it not incredible that after so many decades talking about local minima there is now some supposition that all of them must merge?
◧◩◪◨⬒⬓⬔⧯▣▦
220. shadow+4x1[view] [source] [discussion] 2022-05-24 11:45:08
>>zarzav+QM
> As these AIs work their way into our lives it is essential that they reproduce the world in all of its grit and imperfections...

Is it? I'm reminded of the Microsoft Tay experiment, were they attempted to train an AI by letting Twitter users interact with it.

The result was a non-viable mess that nobody liked.

◧◩◪◨⬒
221. ceejay+Wd2[view] [source] [discussion] 2022-05-24 15:33:00
>>nearbu+BT
You’re getting cause and effect backwards. The coverage of this changed the results, as did Google’s ensuing interventions.
replies(1): >>nearbu+Zy3
◧◩◪◨⬒
222. Semant+Ch2[view] [source] [discussion] 2022-05-24 15:51:43
>>ccbccc+Fe
Smoking these meats! https://youtu.be/YeemJlrNx2Q
replies(1): >>ccbccc+Iw2
◧◩◪
223. Semant+ni2[view] [source] [discussion] 2022-05-24 15:55:15
>>nomel+47
Pre-woke tools; they wouldn't have been allowed nowadays.
◧◩◪
224. visarg+kt2[view] [source] [discussion] 2022-05-24 16:42:33
>>godels+F4
> some community members focus on finding these holes but not fixing them

That's what bothered me the most about Timnit's crusade. Throwing the baby out with the bathwater!

◧◩◪◨⬒⬓⬔⧯
225. howint+Uv2[view] [source] [discussion] 2022-05-24 16:54:06
>>tomp+K31
It absolutely is the responsibility of the people making the algorithm available to the general public.
◧◩◪◨⬒⬓
226. ccbccc+Iw2[view] [source] [discussion] 2022-05-24 16:57:39
>>Semant+Ch2
Smoking them meats with his wifi! That explains some obvious anomalies in the meat-space pretty neatly)
◧◩◪◨⬒⬓
227. nearbu+Zy3[view] [source] [discussion] 2022-05-24 23:05:15
>>ceejay+Wd2
I don't think so. You can set the search options to only find images published before the article, and even find some of the original images.

One image links to the 2015 article, "It's Ridiculous To Say Black Women's Natural Hair Is 'Unprofessional'!". The Guardian article on the Google results is from 2016.

Another image has the headline, "5 Reasons Natural Hair Should NOT be Viewed as Unprofessional - BGLH Marketplace" (2012).

Another: "What to Say When Someone Calls Your Hair Unprofessional".

Also, have you noticed how good and professional the black women in the Guardian's image search look? Most of them look like models with photos taken by professional photographers. Their hair is meticulously groomed and styled. This is not the type of photo an article would use to show "unprofessional hair". But it is the type of photo the above articles opted for.

◧◩◪◨⬒⬓
228. roboca+UZ3[view] [source] [discussion] 2022-05-25 03:22:57
>>ChadNa+5K
> thought that her hairstyle was very offensive

Telling others you don't like how they look is right near the top on the scale of offensiveness. I had a partner who had had dreads for 25 years. I wasn't a huge fan of her dreads because, although I like the look, hers were somewhat annoying for me (scratchy, dread babies, me getting tangled). That said, I would hope I never tell any other person how to look. It was hilarious when she was working and someone would treat her badly due to their assumptions or prejudices, only to discover to their detriment that she was very senior staff!

Dreadlocks are usually called dreads in NZ. My previous link mentions that some people call them locks, which seems inappropriate to me: kind of a confusing whitewashing denial of history.

◧◩◪◨⬒
229. nmfish+WN5[view] [source] [discussion] 2022-05-25 16:56:52
>>karpie+6b
What about preschool teacher?

I say this because I’ve been visiting a number of childcare centres over the past few days and I still have yet to see a single male teacher.

◧◩◪
230. planet+1S6[view] [source] [discussion] 2022-05-25 22:06:39
>>jfoste+sc1
The winner will be a model that bypasses the automatic fake image detection algorithms that will be added to every social media site
◧◩
231. webmav+AM7[view] [source] [discussion] 2022-05-26 05:02:47
>>tines+t2
> We certainly don't want to perpetuate harmful stereotypes. But is it a flaw that the model encodes the world as it really is, statistically, rather than as we would like it to be? By this I mean that there are more light-skinned people in the west than dark, and there are more women nurses than men, which is reflected in the model's training data. If the model only generates images of female nurses, is that a problem to fix, or a correct assessment of the data?

If the model only generated images of female nurses, then it is not representative of the real world, because male nurses exist and they deserve not to be erased. The training data is the proximate cause here, but one wonders what process ended up distorting "most nurses are female" into "nearly all nurse photos are of female nurses": something amplified a real-world imbalance into a dataset that exhibits more bias than the real world, and then training the AI bakes that bias into an algorithm (one that may end up further reinforcing the bias in the real world, depending on the use cases).

◧◩◪◨
232. webmav+X28[view] [source] [discussion] 2022-05-26 07:53:26
>>bufbup+P9
> If the input text lacks context/nuance, then the model must have some bias to infer the user's intent. This holds true for any image it generates; not just the politically sensitive ones. For example, if I ask for a picture of a person, and don't get one with pink hair, is that a shortcoming of the model?

You're ignoring that these models are stochastic. If I ask for a nurse and always get an image of a woman in scrubs, then yes, the model exhibits bias. If I get a male nurse half the time, we can say the model is unbiased WRT gender, at least. The same logic applies to CEOs always being old white men, criminals always being Black men, and so on. Stochastic models can output results that when aggregated exhibit a distribution from which we can infer bias or the lack thereof.
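
That aggregation is easy to sketch (the generate and classify functions here are hypothetical stand-ins for whatever model and attribute classifier you actually have):

  from collections import Counter

  def estimate_attribute_distribution(generate, classify, prompt, n_samples=500):
      """Sample a stochastic image model repeatedly and tally an attribute
      over its outputs, so the aggregate distribution can be compared with
      a real-world reference."""
      tally = Counter(classify(generate(prompt)) for _ in range(n_samples))
      return {attr: count / n_samples for attr, count in tally.items()}

  # e.g. estimate_attribute_distribution(generate, classify_gender, "a nurse")
  # might return {"female": 0.97, "male": 0.03}, which you could then compare
  # against workforce statistics to see how much the model exaggerates the
  # real-world imbalance.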

◧◩◪◨⬒⬓⬔⧯▣
233. webmav+448[view] [source] [discussion] 2022-05-26 08:04:34
>>jfoste+kb1
At the very least we should expect that the results not be more biased than reality. Not all criminals are Black. Not all are men. Not all are poor. If the model (which is stochastic) only outputs poor Black men, rather than a distribution that is closer to reality, it is exhibiting bias and it is fair to ask why the data it picked that bias up from is not reflective of reality.
replies(1): >>jfoste+i58
◧◩◪◨⬒⬓⬔⧯▣▦
234. jfoste+i58[view] [source] [discussion] 2022-05-26 08:18:15
>>webmav+448
Yeah, it makes sense for the results to simply reflect reality as closely as possible. No bias in any direction is desirable.
replies(1): >>webmav+j5b
◧◩◪◨⬒⬓⬔⧯▣▦
235. webmav+y58[view] [source] [discussion] 2022-05-26 08:21:45
>>rpmism+WM
It isn't wrong, but we aren't talking about the model somehow magically transcending the data it's seen. We're talking about making sure the data it sees is representative, so the results it outputs are as well.

Given that male nurses exist (and though less common, certainly aren't rare), why has the model apparently seen so few?

There actually is a fairly simple explanation: because the images it has seen labelled "nurse" are more likely from stock photography sites rather than photos of actual nurses, and stock photography is often stereotypical rather than typical.

◧◩◪◨
236. webmav+a88[view] [source] [discussion] 2022-05-26 08:52:52
>>adrian+2b
> I can understand why people wouldn’t want a tool they have created to be used to generate disturbing, offensive or disgusting imagery. But I don’t really see how doing that would be dangerous.

Propaganda can be extremely dangerous. Limiting or discouraging the use of powerful new tools for unsavory purposes such as creating deliberately biased depictions for propaganda purposes is only prudent. Ultimately it will probably require filtering of the prompts being used in much the same way that Google filters search queries.

◧◩◪◨
237. webmav+za8[view] [source] [discussion] 2022-05-26 09:20:57
>>umeshu+x7
> their model probably shows an image of a woman when you type in "nurse" but they consider that a problem.

There is a difference between probably and invariably. Would it be so hard for the model to show male nurses at least some of the time?

◧◩◪◨⬒⬓⬔⧯
238. andyba+Hf9[view] [source] [discussion] 2022-05-26 16:38:46
>>calvin+fg
It's gone now. What was it?
◧◩◪◨⬒⬓⬔⧯▣▦▧
239. webmav+j5b[view] [source] [discussion] 2022-05-27 09:05:19
>>jfoste+i58
Sarcasm, eh? At least there's no way THAT could be taken the wrong way.