zlacker

[parent] [thread] 2 comments
1. jvalen+(OP)[view] [source] 2022-05-23 22:02:52
You could simply encode a score for how well the output matches the input. If 25% of trees in summer are brown, perhaps 25% of the output should be brown too. The model scores itself on frequencies as well as correctness.
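One way to sketch that idea (a hypothetical scoring function, not anything a real model actually uses): score a batch of outputs by how far its attribute frequencies drift from the target frequencies, here via total variation distance.

```python
from collections import Counter

def frequency_score(outputs, target_freqs):
    """Score how closely attribute frequencies in a batch of outputs
    match target (e.g. real-world) frequencies; 1.0 = perfect match.
    `outputs` is a list of attribute labels, `target_freqs` maps
    label -> desired frequency.  (Hypothetical illustration only.)"""
    counts = Counter(outputs)
    n = len(outputs)
    labels = set(counts) | set(target_freqs)
    # total variation distance between observed and target distributions
    tvd = 0.5 * sum(abs(counts.get(lab, 0) / n - target_freqs.get(lab, 0.0))
                    for lab in labels)
    return 1.0 - tvd

# e.g. 25% of summer trees should be brown
target = {"brown": 0.25, "green": 0.75}
batch = ["green"] * 75 + ["brown"] * 25
print(frequency_score(batch, target))  # 1.0 -- frequencies match exactly
```

A batch that is all green trees would score 0.75 here, so the model is penalized for collapsing onto the most common case.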
replies(2): >>spywar+k2 >>astran+W2
2. spywar+k2[view] [source] 2022-05-23 22:15:06
>>jvalen+(OP)
Suppose 10% of people have green skin. And 90% of those people have broccoli hair. White people don't have broccoli hair.

What percent of people should be rendered as white people with broccoli hair? What if you request green people? Or broccoli-haired people? Or white broccoli-haired people? Or broccoli-haired nazis?

It gets hard with these conditional probabilities.
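A quick Bayes computation with the made-up numbers above shows why the conditionals matter: marginal frequencies alone don't tell you what "broccoli haired people" should look like.

```python
# Toy population from the example above (the commenter's hypothetical
# numbers, not real statistics).
p_green = 0.10
p_white = 0.90                 # assume everyone else is white, for simplicity
p_broccoli_given_green = 0.90
p_broccoli_given_white = 0.0   # white people don't have broccoli hair

# Marginal probability of broccoli hair, by the law of total probability:
p_broccoli = (p_green * p_broccoli_given_green
              + p_white * p_broccoli_given_white)   # 0.09

# Condition the other way with Bayes' rule: given "broccoli haired",
# how likely is green skin?
p_green_given_broccoli = p_green * p_broccoli_given_green / p_broccoli

print(round(p_broccoli, 2))            # 0.09
print(round(p_green_given_broccoli))   # 1 -- every broccoli-haired person is green
```

So a prompt for "broccoli haired people" should render 100% green people, even though only 10% of the population is green, and "white people with broccoli hair" should appear 0% of the time; a model matching only marginal frequencies gets both wrong.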

3. astran+W2[view] [source] 2022-05-23 22:18:25
>>jvalen+(OP)
The only reason these models work is that we don’t interfere with them like that.

Your description is closer to how the open source CLIP+GAN models did it - if you ask for “tree” it starts growing the picture towards treeness until it’s all averagely tree-y rather than being “a picture of a single tree”.
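That guidance loop can be caricatured as plain gradient ascent on a scalar score, with a made-up `treeness` function standing in for CLIP similarity (everything here is a toy assumption, not how CLIP+GAN pipelines are actually implemented):

```python
def treeness(x):
    """Stand-in for a CLIP text-image similarity score (hypothetical):
    peaks when every feature of x sits at the 'average tree' value 1.0."""
    return -sum((xi - 1.0) ** 2 for xi in x)

def optimize(x, steps=200, lr=0.1):
    """Ascend the score using the analytic gradient d/dxi = -2*(xi - 1)."""
    for _ in range(steps):
        x = [xi + lr * (-2.0 * (xi - 1.0)) for xi in x]
    return x

x = optimize([0.0, 2.5, -1.0])
# every coordinate gets pulled toward the same "averagely tree-y" value
print([round(xi, 3) for xi in x])  # [1.0, 1.0, 1.0]
```

The point of the toy: whatever you start from, the score drags everything toward the single maximum, which is why the output ends up uniformly tree-y rather than "a picture of one tree".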

It would be nice if asking for N samples got a diversity of traits you didn’t explicitly ask for. OpenAI seems to solve this by not letting you see it generate humans at all…