zlacker

[parent] [thread] 6 comments
1. jdashg+(OP)[view] [source] 2022-05-23 21:53:42
Additionally, if you optimize for most-likely-as-best, you will get the stereotypical result 100% of the time, instead of at a frequency proportional to the underlying statistics.

Put another way, when we ask for an output optimized for "nursiness", is that not a request for some ur-stereotypical nurse?
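
To make that concrete, here's a toy sketch (made-up numbers, not any real model's decoder) of "most likely" decoding vs. sampling in proportion:

    import random

    dist = {"brown": 0.7, "black": 0.2, "red": 0.1}  # invented stats

    def argmax_decode(d):
        # "most-likely-as-best": returns the mode, every single time
        return max(d, key=d.get)

    def sample_decode(d):
        # proportional: matches the statistics over many draws
        return random.choices(list(d), weights=list(d.values()))[0]

    print([argmax_decode(dist) for _ in range(5)])  # brown x5, always
    print([sample_decode(dist) for _ in range(5)])  # mixed, ~70% brown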

replies(2): >>jvalen+F1 >>ar_lan+C5
2. jvalen+F1[view] [source] 2022-05-23 22:02:52
>>jdashg+(OP)
You could simply encode a score for how well the output frequencies match the training data. If 25% of trees in summer are brown, then about 25% of the generated trees should be brown too. The model scores itself on frequencies as well as correctness.
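
Very roughly, something like this hypothetical scoring term (the has_attribute classifier it leans on is assumed, not an existing API):

    def frequency_penalty(outputs, has_attribute, target_rate):
        # has_attribute is an assumed classifier, e.g. "is this tree brown?"
        observed = sum(has_attribute(o) for o in outputs) / len(outputs)
        return abs(observed - target_rate)

    # if 25% of summer trees are brown, score a batch of outputs with
    # loss = correctness_loss + frequency_penalty(batch, is_brown, 0.25)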
replies(2): >>spywar+Z3 >>astran+B4
3. spywar+Z3[view] [source] [discussion] 2022-05-23 22:15:06
>>jvalen+F1
Suppose 10% of people have green skin. And 90% of those people have broccoli hair. White people don't have broccoli hair.

What percent of people should be rendered as white people with broccoli hair? What if you request green people? Or broccoli-haired people? Or white broccoli-haired people? Or broccoli-haired Nazis?

It gets hard with these conditional probabilities.
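
Spelling that out with the numbers above (toy arithmetic; assuming nobody outside the green-skinned group has broccoli hair):

    p_green = 0.10                 # 10% of people have green skin
    p_broccoli_given_green = 0.90  # 90% of green people have broccoli hair

    p_broccoli = p_green * p_broccoli_given_green  # 9% of everyone
    p_green_given_broccoli = p_broccoli_given_green * p_green / p_broccoli  # 1.0

    # So 0% of renders should be white people with broccoli hair, and for
    # the prompt "broccoli haired people" 100% should be green. A flat
    # quota over the marginals gets both of those wrong.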

4. astran+B4[view] [source] [discussion] 2022-05-23 22:18:25
>>jvalen+F1
The only reason these models work is that we don’t interfere with them like that.

Your description is closer to how the open-source CLIP+GAN models did it: if you ask for "tree", it starts growing the picture towards treeness until it's all averagely tree-y rather than being "a picture of a single tree".

It would be nice if asking for N samples got a diversity of traits you didn’t explicitly ask for. OpenAI seems to solve this by not letting you see it generate humans at all…
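
The loop looked roughly like this (a sketch under assumptions: generator is a hypothetical differentiable image generator, e.g. a VQGAN decoder; the clip calls are from the openai/CLIP package):

    import torch
    import clip

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        text_feat = model.encode_text(clip.tokenize(["a tree"]).to(device))

    z = torch.randn(1, 256, device=device, requires_grad=True)  # latent size assumed
    opt = torch.optim.Adam([z], lr=0.05)

    for step in range(300):
        image = generator(z)  # hypothetical; must emit CLIP-sized, normalized images
        img_feat = model.encode_image(image)
        # maximise similarity to the text, pushing everything towards treeness
        loss = -torch.cosine_similarity(img_feat, text_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()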

5. ar_lan+C5[view] [source] 2022-05-23 22:25:17
>>jdashg+(OP)
You could stipulate that it roll a die based on population percentages: if 70% of Americans are "white", then show a white person 70% of the time; 13% of the time the result should be black, etc.

That's excessively simplified, but wouldn't this drop the stereotype and better reflect reality?
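
A minimal sketch of that die roll, done by rewriting the prompt (the percentages are illustrative and generate() is a placeholder):

    import random

    race_dist = {"white": 0.70, "black": 0.13, "other": 0.17}

    def augment(prompt):
        race = random.choices(list(race_dist), weights=list(race_dist.values()))[0]
        return f"{prompt}, {race} person"

    # generate(augment("a nurse at work"))
    # over many calls, ~70% of prompts say "white", ~13% "black", etc.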

replies(2): >>ghayes+L6 >>SnowHi+37
6. ghayes+L6[view] [source] [discussion] 2022-05-23 22:32:31
>>ar_lan+C5
Is this going to be hand-rolled? Do you change the prompt you pass to the network to reflect the desired outcomes?
7. SnowHi+37[view] [source] [discussion] 2022-05-23 22:34:32
>>ar_lan+C5
No, because a user will see a particular image, not the statistical ensemble. It will at times show an Eskimo without a hand, because such people do statistically exist. But the user definitely does not want that.