zlacker

[parent] [thread] 14 comments
1. karpie+(OP)[view] [source] 2022-05-23 22:22:40
> At the end of the day, if you ask for a nurse, should the model output a male or female by default?

Randomly pick one.

> Trying to generate a model that's "free of correlative relationships" is impossible because the model would never have the infinitely pedantic input text to describe the exact output image.

Sure, and you can never make a medical procedure 100% safe. That doesn't mean you don't try to make them safer. You can trim the obvious low-hanging fruit, though.
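
To be concrete, "randomly pick one" can happen at the prompt layer before anything touches the model. A toy sketch of what I mean (the role list is invented, and this isn't any vendor's actual pipeline):

    import random

    # Roles where prompts typically leave gender unspecified (invented list).
    UNDERSPECIFIED_ROLES = ("nurse", "doctor", "preschool teacher")

    def augment_prompt(prompt: str) -> str:
        """Prepend a uniformly random gendered attribute to an
        underspecified role before handing the text to the generator."""
        lowered = prompt.lower()
        for role in UNDERSPECIFIED_ROLES:
            if role in lowered:
                attribute = random.choice(["male", "female"])
                return lowered.replace(role, f"{attribute} {role}", 1)
        return prompt

    # augment_prompt("a nurse checking a patient's chart")
    # -> "a male nurse checking a patient's chart" about half the time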

replies(3): >>calvin+n1 >>pxmpxm+g2 >>nmfish+QC5
2. calvin+n1[view] [source] 2022-05-23 22:30:48
>>karpie+(OP)
What if I asked the model to show me a Sunday school photograph of Baptists in the National Baptist Convention?
replies(1): >>rvnx+N4
3. pxmpxm+g2[view] [source] 2022-05-23 22:37:24
>>karpie+(OP)
> Randomly pick one.

How does the model back out the "certain people would like to pretend it's a fair coin toss that a randomly selected nurse is male or female" feature?

It won't be in any representative training set, so you're back to fishing for stock photos on Getty rather than generating things.

replies(1): >>shadow+R6
4. rvnx+N4[view] [source] [discussion] 2022-05-23 22:54:38
>>calvin+n1
The pictures I got from a similar model when asking for a "Sunday school photograph of Baptists in the National Baptist Convention": https://ibb.co/sHGZwh7
replies(1): >>calvin+95
5. calvin+95[view] [source] [discussion] 2022-05-23 22:58:52
>>rvnx+N4
and how do we _feel_ about that outcome?
replies(1): >>andyba+B49
6. shadow+R6[view] [source] [discussion] 2022-05-23 23:11:22
>>pxmpxm+g2
Yep, that's the hard problem. Google is not comfortable releasing the API for this until they have it solved.
replies(1): >>zarzav+ya
7. zarzav+ya[view] [source] [discussion] 2022-05-23 23:41:30
>>shadow+R6
But why is it a problem? The AI is just a mirror showing us ourselves. That’s a good thing. How does it help anyone to make an AI that presents a fake world, so we can pretend we live in a world we actually don’t? Disassociation from reality is more dangerous than bias.
replies(3): >>shadow+Fc >>astran+Hm >>Daishi+Rw
8. shadow+Fc[view] [source] [discussion] 2022-05-23 23:59:02
>>zarzav+ya
> The AI is just a mirror showing us ourselves.

That's one hypothesis.

9. astran+Hm[view] [source] [discussion] 2022-05-24 01:25:45
>>zarzav+ya
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?" asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe." "Why is the net wired randomly?" asked Minsky. "I do not want it to have any preconceptions of how to play." Minsky shut his eyes. "Why do you close your eyes?" Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened.

The AI doesn’t know what’s common or not. You don’t know whether its output is correct unless you’ve tested it. Just assuming that whatever it comes out with is right will work about as well as asking a psychic for your future.

replies(1): >>zarzav+KB
10. Daishi+Rw[view] [source] [discussion] 2022-05-24 03:13:19
>>zarzav+ya
The AI is a mirror of the text and image corpora it was presented with, as parsed and sanitized by the team in question.
11. zarzav+KB[view] [source] [discussion] 2022-05-24 04:14:16
>>astran+Hm
The model makes inferences about the world from training data. When it sees more female nurses than male nurses in its training set, it infers that most nurses are female. This is a correct inference.

If they were to weight the training data so that there were an equal number of male and female nurses, then it may well produce male and female nurses with equal probability, but it would also learn an incorrect understanding of the world.

That is quite distinct from weighting the data so that it corresponds more closely to reality. For example, if Africa is not well represented, then weighting training data from Africa more strongly is justifiable.

The point is, it’s not a good thing for us to intentionally teach AIs a world that is idealized and false.

As these AIs work their way into our lives it is essential that they reproduce the world in all of its grit and imperfections, lest we start to disassociate from reality.

Chinese media (or insert your favorite unfree regime) also presents China as a utopia.
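
To put numbers on the first point (all invented, just to show the mechanics): an unweighted model reproduces the training skew, and a forced 50/50 reweighting gives balanced outputs at the cost of an implied frequency that no longer matches the world.

    from collections import Counter
    import random

    # Invented stand-in for "gender of nurse" labels in a training set.
    training = ["female"] * 88 + ["male"] * 12

    p_female = Counter(training)["female"] / len(training)
    print(p_female)  # 0.88 -- sampling an unweighted model tracks this skew

    # Reweight to 50/50: outputs are balanced, but the implied
    # "fraction of nurses who are female" is now wrong.
    reweighted = random.choices(["female", "male"], weights=[1, 1], k=100_000)
    print(Counter(reweighted)["female"] / len(reweighted))  # ~0.50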

replies(2): >>astran+uC >>shadow+Yl1
12. astran+uC[view] [source] [discussion] 2022-05-24 04:22:33
>>zarzav+KB
> The model makes inferences about the world from training data. When it sees more female nurses than male nurses in its training set, it infers that most nurses are female. This is a correct inference.

No, it is not, because you don’t know whether it was shown each of its samples the same number of times, or whether it overweighted some samples more than others. There are normal reasons both of these would happen.
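
Toy illustration of the overcounting problem (numbers invented): the model sees scrape frequency, not world frequency.

    from collections import Counter

    # The "world" being photographed: 2 female nurses, 1 male nurse.
    world = ["female", "female", "male"]

    # One popular stock photo of a female nurse gets scraped 5 extra times.
    scraped = world + ["female"] * 5

    print(Counter(world)["female"] / len(world))      # ~0.67, the real rate
    print(Counter(scraped)["female"] / len(scraped))  # 0.875, what the model sees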

13. shadow+Yl1[view] [source] [discussion] 2022-05-24 11:45:08
>>zarzav+KB
> As these AIs work their way into our lives it is essential that they reproduce the world in all of its grit and imperfections...

Is it? I'm reminded of the Microsoft Tay experiment, where they attempted to train an AI by letting Twitter users interact with it.

The result was a non-viable mess that nobody liked.

14. nmfish+QC5[view] [source] 2022-05-25 16:56:52
>>karpie+(OP)
What about preschool teacher?

I say this because I’ve been visiting a number of childcare centres over the past few days and have yet to see a single male teacher.

15. andyba+B49[view] [source] [discussion] 2022-05-26 16:38:46
>>calvin+95
It's gone now. What was it?