zlacker

[parent] [thread] 56 comments
1. user39+(OP)[view] [source] 2022-05-23 21:28:28
Translation: we need to hand-tune this to not reflect reality but instead the world as we (Caucasian/Asian male American woke upper-middle class San Francisco engineers) wish it to be.

Maybe that's a nice thing; I wouldn't say their values are wrong, but let's call a spade a spade.

replies(7): >>ceejay+Y >>barred+81 >>josho+42 >>Ar-Cur+R2 >>JohnBo+X3 >>userbi+19 >>holmes+U9
2. ceejay+Y[view] [source] 2022-05-23 21:33:21
>>user39+(OP)
"Reality" as defined by the available training set isn't necessarily reality.

For example, Google's image search results pre-tweaking had some interesting thoughts on what constitutes a professional hairstyle, and that searches for "men" and "women" should only return light-skinned people: https://www.theguardian.com/technology/2016/apr/08/does-goog...

Does that reflect reality? No.

(I suspect there are also mostly unstated but very real concerns about these being used as child pornography, revenge porn, "show my ex brutally murdered" etc. generators.)

replies(4): >>ceeplu+t1 >>rvnx+L2 >>userbi+M9 >>ChadNa+5c
3. barred+81[view] [source] 2022-05-23 21:34:22
>>user39+(OP)
I know you're anon trolling, but the authors' names are:

Chitwan Saharia, William Chan, Saurabh Saxena†, Lala Li†, Jay Whang†, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho†, David Fleet†, Mohammad Norouzi

replies(2): >>pid-1+97 >>hda2+YL
◧◩
4. ceeplu+t1[view] [source] [discussion] 2022-05-23 21:36:17
>>ceejay+Y
The reality is that the hairstyles on the left side of the image in the article are widely considered unprofessional in today's workplaces. That may seem egregiously wrong to you, but it is a truth of American and European society today. Should it be Google's job to rewrite reality?
replies(3): >>ceejay+T1 >>rcMgD2+23 >>colinm+r6
◧◩◪
5. ceejay+T1[view] [source] [discussion] 2022-05-23 21:38:31
>>ceeplu+t1
The "unprofessional" results are almost exclusively black women; the "professional" ones are almost exclusively white or light skinned.

Unless you think white women are immune to unprofessional hairstyles, and black women incapable of them, there's a race problem illustrated here even if you think the hairstyles illustrated are fairly categorized.

replies(1): >>rvnx+04
6. josho+42[view] [source] 2022-05-23 21:39:05
>>user39+(OP)
Translation: AI has the potential to transform society. When we release this model to the public, it will be used in ways we haven’t anticipated. We know the model has bias, and we need more time to consider releasing this to the public, out of concern that this transformative technology will further perpetuate mistakes we’ve made in our recent past.
replies(1): >>curiou+Q2
◧◩
7. rvnx+L2[view] [source] [discussion] 2022-05-23 21:43:41
>>ceejay+Y
If your query was about hairstyle, why do you even look at or care about the skin color?

Nowhere in the user's query is any preference for skin color specified.

So it sorts and returns the most typical examples, based on the examples that were found on the internet.

Essentially answering the query "SELECT * FROM `non-professional hairstyles` ORDER BY score DESC LIMIT 10".

It's like searching Google for "best place for wedding night".

You may get 3 places out of 10 in Santorini, Greece.

Yes, you could have a human remove these biases because you feel that Sri Lanka is the best place for a wedding, but what if there is a consensus that Santorini really is the most praised in the forums and websites that were crawled by Google?
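
To make the analogy concrete, here's a toy sketch in Python (made-up data and scores, obviously nothing like Google's actual pipeline): a ranker that only sorts by crowd-derived score reproduces whatever the crawl over-represents.

    # Toy sketch with hypothetical data -- not Google's real system.
    images = [
        {"url": "santorini_1.jpg", "score": 0.91},  # scores aggregated from crawled pages
        {"url": "santorini_2.jpg", "score": 0.88},
        {"url": "sri_lanka_1.jpg", "score": 0.42},
    ]
    # The SQL above, as code:
    top10 = sorted(images, key=lambda i: i["score"], reverse=True)[:10]
    # Whatever the crawl over-represents, the "neutral" ranking over-represents too.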

replies(3): >>ceejay+W2 >>jayd16+R4 >>colinm+g6
◧◩
8. curiou+Q2[view] [source] [discussion] 2022-05-23 21:44:13
>>josho+42
> it will be used in ways we haven’t anticipated

Oh yeah, as a woman who grew up in a Third World country, how an AI model generates images would have deeply affected my daily struggles! /s

It's kinda insulting that they think that this would be insulting. Like "Oh no I asked the model to draw a doctor and it drew a male doctor, I guess there's no point in me pursuing medical studies" ...

replies(4): >>boppo1+s5 >>pxmpxm+F5 >>colinm+X6 >>renewi+gd
9. Ar-Cur+R2[view] [source] 2022-05-23 21:44:16
>>user39+(OP)
Except "reality" in this case is just their biased training set. E.g. There's more non-white doctors and nurses in the world than white ones, yet their model would likely show an image of white person when you type in "doctor".
replies(1): >>umeshu+66
◧◩◪
10. ceejay+W2[view] [source] [discussion] 2022-05-23 21:44:50
>>rvnx+L2
> The algorithm is just ranking the top "non-professional hairstyle" in the most neutral way in its database

You're telling me those are all the most non-professional hairstyles available? That this is a reasonable assessment? That fairly standard, well-kept, work-appropriate curly black hair is roughly equivalent to the three-foot-wide pink hairstyle worn by one of the only white people in the "unprofessional" search?

Each and every one of them is less workplace-appropriate than, say, http://www.7thavenuecostumes.com/pictures/750x950/P_CC_70594... ?

replies(1): >>rvnx+W4
◧◩◪
11. rcMgD2+23[view] [source] [discussion] 2022-05-23 21:45:13
>>ceeplu+t1
In any case, Google will be writing their reality. Who picked the image sample for the ML to run on, if not Google? What's the problem with rewriting it, then? They know their biases and want to act on them.

It's like blaming a friend for trying to phrase things nicely, and telling them to speak bluntly with zero concern for others instead. Unless you believe anyone trying to do good is being a hypocrite…

I, for one, like civility.

12. JohnBo+X3[view] [source] 2022-05-23 21:50:15
>>user39+(OP)

    Translation: we need to hand-tune this to not reflect reality
Is it reflecting reality, though?

Seems to me that (as with any ML stuff, right?) it's reflecting the training corpus.

Furthermore, is it this thing's job to reflect reality?

    the world as we (Caucasian/Asian male American woke 
    upper-middle class San Fransisco engineers) wish it to be
Snarky answer: Ah, yes, let's make sure that things like "A giant cobra snake on a farm. The snake is made out of corn" reflect reality.

Heartfelt answer: Yes, there is some of that wishful thinking or editorializing. I don't consider it to be erasing or denying reality. This is a tool that synthesizes unreality. I don't think that such a tool should, say, refuse to synthesize an image of a female POTUS because one hasn't existed yet. This is art, not a reporting tool... and keep in mind that art not only imitates life but also influences it.

replies(1): >>nomel+v6
◧◩◪◨
13. rvnx+04[view] [source] [discussion] 2022-05-23 21:50:32
>>ceejay+T1
If you type as a prompt "most beautiful woman in the world", you get a brown-skinned brown-haired woman with hazel eyes.

What should the right answer be, then?

You put a blonde, you offend the brown-haired.

You put blue eyes, you offend the brown-eyed.

etc.

replies(1): >>ceejay+M4
◧◩◪◨⬒
14. ceejay+M4[view] [source] [discussion] 2022-05-23 21:55:16
>>rvnx+04
That's an unanswerable question. Perhaps the answer is "don't".

Siri takes this approach for a wide range of queries.

replies(2): >>nomel+l8 >>rvnx+va
◧◩◪
15. jayd16+R4[view] [source] [discussion] 2022-05-23 21:55:29
>>rvnx+L2
The results are not inherently neutral because the database is from non-neutral input.

It's a simple case of sample bias.
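
To put it in code, here's a toy sketch with hypothetical numbers: even a perfectly "neutral" sampler reproduces whatever skew is in its input.

    import random
    random.seed(0)
    # Hypothetical: the crawl over-represents one group 9:1,
    # even if the real-world split is closer to 1:1.
    crawled = ["group_a"] * 90 + ["group_b"] * 10
    sample = [random.choice(crawled) for _ in range(1000)]
    print(sample.count("group_a") / len(sample))  # ~0.9 -- the sample keeps the skew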

◧◩◪◨
16. rvnx+W4[view] [source] [discussion] 2022-05-23 21:55:44
>>ceejay+W2
I'm saying that the dataset needs to be expanded to cover as many examples as possible.

Keep adding even more examples, in order to bring the algorithms as close as possible to the "average reality".

At some point we may ultimately reach the state where robots collect intelligence directly from the real world, and not from the internet (even closer to reality).

Censoring results sounds like the best recipe for a dystopian world where only one view is right.

◧◩◪
17. boppo1+s5[view] [source] [discussion] 2022-05-23 21:58:27
>>curiou+Q2
I don't think the concern over offense is actually about you. There's a metagame here, which is that if it could potentially offend you (third-world-originated woman), then there's a brand-image liability for the company. I don't think they care about you; I think they care about not being labeled "the company that algorithmically identifies black people as gorillas".
◧◩◪
18. pxmpxm+F5[view] [source] [discussion] 2022-05-23 21:59:32
>>curiou+Q2
Postmodernism is what postmodernism does.
replies(1): >>contin+G8
◧◩
19. umeshu+66[view] [source] [discussion] 2022-05-23 22:02:06
>>Ar-Cur+R2
Alternately, there are more female nurses in the world than male nurses, and their model probably shows an image of a woman when you type in "nurse", but they consider that a problem.
replies(3): >>contin+T8 >>astran+ga >>webmav+898
◧◩◪
20. colinm+g6[view] [source] [discussion] 2022-05-23 22:03:09
>>rvnx+L2
> If your query was about hairstyle, why do you even look at the skin color ?

You know that race has a large effect on hair right?

replies(1): >>daenz+n7
◧◩◪
21. colinm+r6[view] [source] [discussion] 2022-05-23 22:04:20
>>ceeplu+t1
"Only black people have unprofessional hair and only white people have professional hair" is not reality.
◧◩
22. nomel+v6[view] [source] [discussion] 2022-05-23 22:04:30
>>JohnBo+X3
> Snarky answer: Ah, yes, let's make sure that things like "A giant cobra snake on a farm. The snake is made out of corn" reflect reality.

If it didn't reflect reality, you wouldn't be impressed by the image of the snake made of corn.

replies(1): >>JohnBo+3F
◧◩◪
23. colinm+X6[view] [source] [discussion] 2022-05-23 22:07:09
>>curiou+Q2
Yes actually, subconscious bias due to historical prejudice does have a large effect on society. Obviously there are things with much larger effects, that doesn't mean that this doesn't exist.

> Oh no I asked the model to draw a doctor and it drew a male doctor, I guess there's no point in me pursuing medical studies

If you don't think this is a real thing that happens to children you're not thinking especially hard. It doesn't have to be common to be real.

replies(3): >>curiou+Q7 >>paisaw+af >>astran+Nw
◧◩
24. pid-1+97[view] [source] [discussion] 2022-05-23 22:08:14
>>barred+81
Absolutely not related to the whole discussion, but what does "†" stand for?
replies(2): >>dntrkv+ca >>joshcr+Pa
◧◩◪◨
25. daenz+n7[view] [source] [discussion] 2022-05-23 22:09:31
>>colinm+g6
I'd be careful where you're going with that. You might make a point that is the opposite of what you intended.
◧◩◪◨
26. curiou+Q7[view] [source] [discussion] 2022-05-23 22:11:10
>>colinm+X6
> If you don't think this is a real thing that happens to children you're not thinking especially hard

I believe that's where parenting comes in. Maybe I'm too cynical but I think that the parents' job is to undo all of the harm done by society and instill in their children the "correct" values.

replies(3): >>colinm+79 >>holmes+Td >>cgreal+Hh
◧◩◪◨⬒⬓
27. nomel+l8[view] [source] [discussion] 2022-05-23 22:14:09
>>ceejay+M4
How do you pick what should and shouldn't be restricted? Is there some "offense threshold"? I suspect all queries relating to religion, ethnicity, sexuality, and gender will need to be restricted, which almost certainly means you can't include humans at all, other than ones artificially inserted with mathematically proven random attributes. Maybe that's why none are in this demo.
replies(2): >>daenz+tb >>astran+Gg
◧◩◪◨
28. contin+G8[view] [source] [discussion] 2022-05-23 22:16:04
>>pxmpxm+F5
Love it. Added to https://github.com/globalcitizen/taoup
replies(1): >>pxmpxm+Wf
◧◩◪
29. contin+T8[view] [source] [discussion] 2022-05-23 22:16:57
>>umeshu+66
@Google Brain Toronto Team: See what you get when you generate nurses with ncurses.
30. userbi+19[view] [source] 2022-05-23 22:17:57
>>user39+(OP)
Indeed. As the saying goes, we are truly living in a post-truth world.
◧◩◪◨⬒
31. colinm+79[view] [source] [discussion] 2022-05-23 22:18:25
>>curiou+Q7
I'd say you're right. Unfortunately many people are raised by bad parents. Should these researchers accept that their work may perpetuate stereotypes that harm those that most need help? I can see why they wouldn't want that.
◧◩
32. userbi+M9[view] [source] [discussion] 2022-05-23 22:23:00
>>ceejay+Y
> unstated but very real concerns

I say let people generate their own reality. The sooner the masses realise that ceci n'est pas une pipe ("this is not a pipe"), the less likely they are to be swayed by the growing un-reality created by companies like Google.

33. holmes+U9[view] [source] 2022-05-23 22:24:01
>>user39+(OP)
"As we wish it to be" is not totally true, because there are some places where humanity's iconographic reality (which Imagen trains on) differs significantly from actual reality.

One example would be if Imagen draws a group of mostly white people when you say "draw a group of people". This doesn't reflect actual reality. Another would be if Imagen draws a group of men when you say "draw a group of doctors".

In these cases where iconographic reality differs from actual reality, hand-tuning could be used to bring it closer to the real world, not just the world as we might wish it to be!

I agree there's a problem here. But I'd state it more as "new technologies are being held to a vastly higher standard than existing ones." Imagine TV studios issuing a moratorium on any new shows that made being white (or rich) seem more normal than it was! The public might rightly expect studios to turn the dials away from the blatant biases of the past, but even if this would be beneficial the progressive and activist public is generations away from expecting a TV studio to not release shows until they're confirmed to be bias-free.

That said, Google's decision to not publish is probably less about the inequities in AI's representation of reality and more about the AI sometimes spitting out drawings that are offensive in the US, like racist caricatures.

◧◩◪
34. dntrkv+ca[view] [source] [discussion] 2022-05-23 22:25:32
>>pid-1+97
https://en.wikipedia.org/wiki/Dagger_(mark)
◧◩◪
35. astran+ga[view] [source] [discussion] 2022-05-23 22:26:27
>>umeshu+66
Google Image Search doesn’t reflect harsh reality when you search for things; it shows you what’s on Pinterest. The same is more likely to apply here than the idea they’re trying to hide something.

There’s no reason to believe their model training even learns the same statistics as its input dataset. If that’s not an explicit training goal, then whatever happens happens. AI isn’t magic, or more correct than people.

◧◩◪◨⬒⬓
36. rvnx+va[view] [source] [discussion] 2022-05-23 22:27:53
>>ceejay+M4
I think the key is to take the information in this world with a little pinch of salt.

When you do a search on a search engine, the results are biased too, but still, they shouldn't be artificially censored to fit some political views.

I asked one algorithm a few minutes ago (it's called t0pp, and it's free to try online, and it's quite fascinating because it's uncensored):

"What is the name of the most beautiful man on Earth ?

- He is called Brad Pitt."

==

Is it true in an objective way? Probably not.

Is there an actual answer? Probably yes; there is somewhere a man who scores better than the others.

Is it socially acceptable? Probably not.

The question is:

If you interviewed 100 people on the street and asked "What is the name of the most beautiful man on Earth?", I'm pretty sure Brad Pitt would often come up.

Now, what about China ?

We don't have many examples there; they probably have no clue who Brad Pitt is, and there is probably someone else who is considered more beautiful by over 1B people

(t0pp tells me it's someone called "Zhu Zhu" :D )

==

Two solutions:

1) Censorship

-> Sorry, there is too much bias in the West and we don't want to offend anyone: no answer, or a generic overriding human answer that is safe for advertisers but totally useless ("the most beautiful human is you")

2) Adding more examples

-> Work on adding more examples from abroad, trying to get the "average human answer".

==

I really prefer solution (2) in the core algorithms and dataset development, rather than going through (1).

(1) is more a choice to make at the stage when you are developing a virtual psychologist or a chat assistant, not when creating AI building blocks.

◧◩◪
37. joshcr+Pa[view] [source] [discussion] 2022-05-23 22:29:43
>>pid-1+97
It's just a different asterisk used to distinguish contributors; in this case, the paper marks them as "core contributors."
◧◩◪◨⬒⬓⬔
38. daenz+tb[view] [source] [discussion] 2022-05-23 22:34:03
>>nomel+l8
"Is Taiwan a country" also comes to mind.
replies(1): >>rvnx+qf
◧◩
39. ChadNa+5c[view] [source] [discussion] 2022-05-23 22:38:33
>>ceejay+Y
You know, it wouldn't surprise me if people talking about how black curly hair shouldn't be seen as unprofessional contributed to Google thinking there's an association between the concepts of "unprofessional hair" and "black curly hair".
replies(2): >>roboca+ax >>nearbu+aS
◧◩◪
40. renewi+gd[view] [source] [discussion] 2022-05-23 22:46:01
>>curiou+Q2
It's not meant to prevent offence to you. It is meant to be a "good product" by the metrics of its creators. And quite simply, anyone here incapable of making the thing is unlikely to have a clear image of what a "good product" here is. More power to them for having a good vision of what they're building.
◧◩◪◨⬒
41. holmes+Td[view] [source] [discussion] 2022-05-23 22:49:54
>>curiou+Q7
> I think that the parents' job is to undo all of the harm done by society and instill in their children the "correct" values.

Far from being too cynical, this is too optimistic.

The vast majority of parents try to instill the value "do not use heroin." And yet society manages to do that harm on a large scale. There are other examples.

◧◩◪◨
42. paisaw+af[view] [source] [discussion] 2022-05-23 23:01:03
>>colinm+X6
> subconscious bias due to historical prejudice does have a large effect on society.

The quality of the evidence for this, as with almost all social science and much of psychology, is extremely low bordering on just certified opinions. I would love to understand why you think otherwise.

> Obviously there are things with much larger effects, that doesn't mean that this doesn't exist.

What a hedge. How should we estimate the size of this effect, so that we can accurately measure whether/when the self-appointed hall monitors are doing more harm than good?

◧◩◪◨⬒⬓⬔⧯
43. rvnx+qf[view] [source] [discussion] 2022-05-23 23:03:41
>>daenz+tb
What would a human who can speak freely, without morals or fear of being judged, say on average after having ingested all the information on the internet?
◧◩◪◨⬒
44. pxmpxm+Wf[view] [source] [discussion] 2022-05-23 23:08:00
>>contin+G8
Ha! That's a different pxmpxm on GitHub, I'm afraid.
replies(1): >>contin+Qg
◧◩◪◨⬒⬓⬔
45. astran+Gg[view] [source] [discussion] 2022-05-23 23:12:41
>>nomel+l8
These debates often seem to center around “most X in the world” questions, but I’d expect all of those to be unanswerable if you wanted to know the truth. Who’s done a study on it?

In this case you’re (mostly) getting keyword matches and so it’s answering a different question than the one you asked. It would be helpful if a question answering AI gave you the question it decided to answer instead of just pretending it paid full attention to you.

◧◩◪◨⬒⬓
46. contin+Qg[view] [source] [discussion] 2022-05-23 23:13:37
>>pxmpxm+Wf
That's almost poetic. Watch them attempt to make sense of the situation.
◧◩◪◨⬒
47. cgreal+Hh[view] [source] [discussion] 2022-05-23 23:20:13
>>curiou+Q7
Isn't that putting an undue load on parents?

It seems extremely unfair that parents of young black men should have to work extra hard to tell their kids they're not destined to be criminals. Hell, it's not fair that parents of blonde girls have to tell their kids they don't have to be just dumb and pretty.

(note: I am deliberately picking bad stereotypes that are pervasive in our culture... I am not in any way suggesting those are true.)

◧◩◪◨
48. astran+Nw[view] [source] [discussion] 2022-05-24 01:29:37
>>colinm+X6
> Yes actually, subconscious bias due to historical prejudice does have a large effect on society.

The evidence for implicit bias is pretty weak and IIRC is better explained by people having explicit bias but lying about it when asked.

(Note: this is even worse.)

◧◩◪
49. roboca+ax[view] [source] [discussion] 2022-05-24 01:33:37
>>ChadNa+5c
You really are not helping that cause.

As a foreigner[], your point confused me anyway, and doing a Google for cultural stuff usually gets variable results. But I did laugh at many of the comments here https://www.reddit.com/r/TooAfraidToAsk/comments/ufy2k4/why_...

[] probably, New Zealand, although foreigner is relative

replies(1): >>ChadNa+EI
◧◩◪
50. JohnBo+3F[view] [source] [discussion] 2022-05-24 02:56:17
>>nomel+v6
Pardon? The snake made of corn most certainly does not reflect reality: snakes made out of corn do not exist.
◧◩◪◨
51. ChadNa+EI[view] [source] [discussion] 2022-05-24 03:43:06
>>roboca+ax
Haha. I've got some personal experience with that one. I used to live in a house with many other people; one girl was Rastafarian, from Jamaica, and had dreadlocks, and another girl in the house (who wasn't black) thought that her hairstyle was very offensive. We had to have several conflict-resolution meetings about it.

As silly as it seemed, I do think everyone is entitled to their own opinion and I respect the anti-dreadlocks girl for standing up for what she believed in even when most people were against her.

replies(1): >>roboca+tY3
◧◩
52. hda2+YL[view] [source] [discussion] 2022-05-24 04:19:49
>>barred+81
Google AI researchers don't have the final say in what gets published and what doesn't. I think there was a huge controversy when people learned about it last year.
◧◩◪
53. nearbu+aS[view] [source] [discussion] 2022-05-24 05:27:21
>>ChadNa+5c
That's exactly what's happening. Doing the search from the article of "unprofessional hair for work" brings up images with headlines like "It's ridiculous to say that black women's hair is unprofessional". (In addition to now bringing up images from that article itself and other similar articles comparing Google Images searches.)
replies(1): >>ceejay+vc2
◧◩◪◨
54. ceejay+vc2[view] [source] [discussion] 2022-05-24 15:33:00
>>nearbu+aS
You’re getting cause and effect backwards. The coverage of this changed the results, as did Google’s ensuing interventions.
replies(1): >>nearbu+yx3
◧◩◪◨⬒
55. nearbu+yx3[view] [source] [discussion] 2022-05-24 23:05:15
>>ceejay+vc2
I don't think so. You can set the search options to only find images published before the article, and even find some of the original images.

One image links to the 2015 article, "It's Ridiculous To Say Black Women's Natural Hair Is 'Unprofessional'!". The Guardian article on the Google results is from 2016.

Another image has the headline, "5 Reasons Natural Hair Should NOT be Viewed as Unprofessional - BGLH Marketplace" (2012).

Another: "What to Say When Someone Calls Your Hair Unprofessional".

Also, have you noticed how good and professional the black women in the Guardian's image search look? Most of them look like models with photos taken by professional photographers. Their hair is meticulously groomed and styled. This is not the type of photo an article would use to show "unprofessional hair". But it is the type of photo the above articles opted for.

◧◩◪◨⬒
56. roboca+tY3[view] [source] [discussion] 2022-05-25 03:22:57
>>ChadNa+EI
> thought that her hairstyle was very offensive

Telling others you don’t like how they look is right near the top of the scale of offensiveness. I had a partner who had had dreads for 25 years. I wasn’t a huge fan of her dreads because, although I like the look, hers were somewhat annoying for me (scratchy, dread babies, me getting tangled). That said, I would hope I never tell any other person how to look. It was hilarious when she was working and someone would treat her badly due to their assumptions or prejudices, only to discover to their detriment that she was very senior staff!

Dreadlocks are usually called dreads in NZ. My previous link mentions that some people call them locks, which seems inappropriate to me: kind of a confusing whitewashing denial of history.

◧◩◪
57. webmav+898[view] [source] [discussion] 2022-05-26 09:20:57
>>umeshu+66
> their model probably shows an image of a woman when you type in "nurse" but they consider that a problem.

There is a difference between probably and invariably. Would it be so hard for the model to show male nurses at least some of the time?
