zlacker

[parent] [thread] 14 comments
1. tines+(OP)[view] [source] 2022-05-23 22:38:16
Not the person you responded to, but I do see how someone could be hurt by that, and I want to avoid hurting people. But is this the level at which we should do it? Could skewing search results, i.e. hiding the bias of the real world, give us the impression that everything is fine and we don't need to do anything to actually help people?

I have a feeling that we need to be real with ourselves and solve problems rather than paper over them. I feel like people generally expect search engines to tell them what's really there instead of what people wish were there. And when the engines do show what's really there, people can get agitated!

I'd almost say that hurt feelings are a prerequisite for real change, hard though that may be.

These are all really interesting questions brought up by this technology; thanks for your thoughts. Disclaimer: I'm a fucking idiot with no idea what I'm talking about.

replies(2): >>magica+n3 >>slg+p4
2. magica+n3[view] [source] 2022-05-23 23:03:41
>>tines+(OP)
> Could skewing search results, i.e. hiding the bias of the real world

Which real world? The population you sample from is going to make a big difference. Do you expect it to reflect your day-to-day life in your own city? Your own country? The entire world? Results will vary significantly.

replies(2): >>sangno+K3 >>tines+e4
3. sangno+K3[view] [source] [discussion] 2022-05-23 23:06:41
>>magica+n3
For AI, "real world" is likely "the world, as seen by Silicon Valley."
4. tines+e4[view] [source] [discussion] 2022-05-23 23:09:39
>>magica+n3
I'd say it doesn't actually matter, as long as the population sampled is made clear to the user.

If I ask for pictures of Japanese people, I'm not shocked when all the results are of Japanese people. If I asked for "criminals in the United States" and all the results were black people, that should concern me, not because the data set is biased but because the real world is biased and we should do something about that. The difference is that I know what set I'm asking for a sample from, and I can react accordingly.

replies(3): >>magica+78 >>nyolfe+u9 >>jfoste+RX
5. slg+p4[view] [source] 2022-05-23 23:10:32
>>tines+(OP)
>Could skewing search results, i.e. hiding the bias of the real world

Your logic seems to rest on this assumption, which I don't think is justified. "Skewing search results" is not the same as "hiding the biases of the real world". Showing the most statistically likely result is not the same as showing the world as it truly is.

A generic nurse is statistically going to be female most of the time. However, a model that returns every nurse as female is not showing the real world as it is; it is exaggerating and reinforcing the bias of the real world. Actually representing the real world inherently requires a more advanced model. I think it is reasonable for the creators to avoid sharing models known not to be smart enough to avoid exaggerating real-world biases.
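
To make that concrete, here's a toy sketch (my own example; the 88% base rate is made up purely for illustration) of the difference between sampling from the underlying distribution and always returning the single most likely answer:

  import random

  BASE_RATE_FEMALE = 0.88  # hypothetical real-world proportion, invented for this example

  def sample_nurse():
      # Draw from the underlying distribution: a large sample mirrors the base rate
      return "female" if random.random() < BASE_RATE_FEMALE else "male"

  def most_likely_nurse():
      # Always return the mode: collapses an 88% majority into 100%
      return "female" if BASE_RATE_FEMALE >= 0.5 else "male"

  sampled = [sample_nurse() for _ in range(10_000)]
  print(sampled.count("female") / len(sampled))  # ~0.88, reflects the real-world skew
  print(most_likely_nurse())                     # always "female", exaggerates it

A model that effectively behaves like most_likely_nurse() turns a real-world majority into an absolute rule, which is the exaggeration I mean.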

replies(1): >>roboca+ef
6. magica+78[view] [source] [discussion] 2022-05-23 23:40:56
>>tines+e4
> If I asked for "criminals in the United States" and all the results were black people, that should concern me, not because the data set is biased

Well, the results would unquestionably be biased. All results being black people wouldn't reflect reality at all, and hurting feelings to enact change seems like a poor justification for incorrect results.

> I'd say it doesn't actually matter, as long as the population sampled is made clear to the user.

Ok, and let's say I ask for "criminals in Cheyenne, Wyoming" and it doesn't know the answer. Should it just do its best to answer anyway? Seems risky if people are going to get fired up about it and act on it to get "real change".

That seems like a good parallel to what we're talking about here, since it's very unlikely that crime statistics were fed into this image-generating model.

7. nyolfe+u9[view] [source] [discussion] 2022-05-23 23:53:02
>>tines+e4
> If I asked for "criminals in the United States" and all the results were black people,

Curiously, this search actually only returns white people for me on GIS (Google Image Search)

8. roboca+ef[view] [source] [discussion] 2022-05-24 00:40:43
>>slg+p4
> I think it is reasonable for the creators to avoid sharing models known to not be smart enough to avoid exaggerating real world biases.

Every model will have some random biases. Some of those random biases will undesirably exaggerate the real world. Every model will undesirably exaggerate something. Therefore no model should be shared.

Your goal is nice, but impractical?

replies(2): >>slg+Ai >>barney+7W
9. slg+Ai[view] [source] [discussion] 2022-05-24 01:11:18
>>roboca+ef
Fittingly, your comment falls under the same criticism I had of the model. It shows a refusal/inability to engage with the full complexities of the situation.

I said "It is reasonable... to avoid sharing models". That is an acknowledged that the creators are acting reasonably. It does not imply anything as extreme as "no model should be shared". The only way to get from A to B there is for you to assume that I think there is only one reasonable response and every other possible reaction is unreasonable. Doesn't that seem like a silly assumption?

replies(1): >>roboca+pT
10. roboca+pT[view] [source] [discussion] 2022-05-24 07:40:59
>>slg+Ai

  "When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean — neither more nor less."

  "The question is," said Alice, "whether you can make words mean so many different things."

  "The question is," said Humpty Dumpty, "which is to be master — that's all."
11. barney+7W[view] [source] [discussion] 2022-05-24 08:09:53
>>roboca+ef
> Your goal is nice, but impractical?

If the only way to do AI is to encode racism etc., then we shouldn't be doing AI at all.

12. jfoste+RX[view] [source] [discussion] 2022-05-24 08:23:42
>>tines+e4
In a way, if the model brings back an image for "criminals in the United States" that isn't based on the statistical reality, isn't it essentially complicit in sweeping a major social issue under the rug?

We may not like what it shows us, but blindfolding ourselves is not the solution to that problem.

replies(1): >>webmav+BQ7
13. webmav+BQ7[view] [source] [discussion] 2022-05-26 08:04:34
>>jfoste+RX
At the very least we should expect that the results not be more biased than reality. Not all criminals are Black. Not all are men. Not all are poor. If the model (which is stochastic) only outputs poor Black men, rather than a distribution that is closer to reality, it is exhibiting bias, and it is fair to ask why the data it picked that bias up from is not reflective of reality.
replies(1): >>jfoste+PR7
14. jfoste+PR7[view] [source] [discussion] 2022-05-26 08:18:15
>>webmav+BQ7
Yeah, it makes sense for the results to simply reflect reality as closely as possible. No bias in any direction is desirable.
replies(1): >>webmav+QRa
15. webmav+QRa[view] [source] [discussion] 2022-05-27 09:05:19
>>jfoste+PR7
Sarcasm, eh? At least there's no way THAT could be taken the wrong way.