1. Ar-Cur+(OP) 2022-05-23 21:44:16
Except "reality" in this case is just their biased training set. E.g. There's more non-white doctors and nurses in the world than white ones, yet their model would likely show an image of white person when you type in "doctor".
replies(1): >>umeshu+f3
2. umeshu+f3 2022-05-23 22:02:06
>>Ar-Cur+(OP)
Alternatively, there are more female nurses in the world than male nurses, and their model probably shows an image of a woman when you type in "nurse" but they consider that a problem.
replies(3): >>contin+26 >>astran+p7 >>webmav+h68
3. contin+26 2022-05-23 22:16:57
>>umeshu+f3
@Google Brain Toronto Team: See what you get when you generate nurses with ncurses.
4. astran+p7 2022-05-23 22:26:27
>>umeshu+f3
Google Image Search doesn’t reflect harsh reality when you search for things; it shows you what’s on Pinterest. The same is more likely to apply here than the idea they’re trying to hide something.

There’s no reason to believe the model even learns the same statistics as its training dataset. If matching them isn’t an explicit training goal, then whatever happens happens. AI isn’t magic, and it isn’t any more correct than people.
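To make that concrete, here’s a minimal sketch in Python. The numbers are made up, and the attribute labels are assumed to come from some labelling step that isn’t shown; the point is only that the two distributions don’t have to match.

    from collections import Counter

    def distribution(labels):
        """Return each label's share of the total."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {label: count / total for label, count in counts.items()}

    # Hypothetical attribute labels attached to training images for the prompt "nurse".
    training_labels = ["woman"] * 870 + ["man"] * 130

    # Hypothetical attribute labels for 100 images sampled from the trained model.
    generated_labels = ["woman"] * 98 + ["man"] * 2

    print("training set:", distribution(training_labels))   # ~87% woman / ~13% man
    print("model output:", distribution(generated_labels))  # 98% woman / 2% man

    # Nothing in a standard generative training objective forces these two
    # distributions to match; the model can end up amplifying the majority class.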

5. webmav+h68 2022-05-26 09:20:57
>>umeshu+f3
> their model probably shows an image of a woman when you type in "nurse" but they consider that a problem.

There is a difference between probably and invariably. Would it be so hard for the model to show male nurses at least some of the time?
