zlacker

[parent] [thread] 69 comments
1. karpie+(OP)[view] [source] 2022-05-23 21:43:40
It depends on whether you'd like the model to learn causal or correlative relationships.

If you want the model to understand what a "nurse" actually is, then it shouldn't be associated with female.

If you want the model to understand how the word "nurse" is usually used, without regard for what a "nurse" actually is, then associating it with female is fine.

The issue with a correlative model is that it can easily be self-reinforcing.

replies(5): >>jdashg+N1 >>Ludwig+84 >>bufbup+E5 >>sineno+Os >>drdeca+dn1
2. jdashg+N1[view] [source] 2022-05-23 21:53:42
>>karpie+(OP)
Additionally, if you optimize for most-likely-as-best, you will end up with the stereotypical result 100% of the time, instead of in proportion to the underlying statistics.

Put another way, when we ask for an output optimized for "nursiness", is that not a request for some ur-stereotypical nurse?
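
A minimal sketch of that contrast, with invented probabilities standing in for whatever the model actually learned:

  import random

  nurse_gender_probs = {"female": 0.87, "male": 0.13}  # hypothetical model output

  def argmax_decode(probs):
      # "most-likely-as-best": always emit the single most probable attribute
      return max(probs, key=probs.get)

  def sample_decode(probs):
      # emit attributes in proportion to their estimated frequency
      return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

  print(argmax_decode(nurse_gender_probs))                        # 'female', every time
  print([sample_decode(nurse_gender_probs) for _ in range(10)])   # 'male' ~13% of the time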

replies(2): >>jvalen+s3 >>ar_lan+p7
◧◩
3. jvalen+s3[view] [source] [discussion] 2022-05-23 22:02:52
>>jdashg+N1
You could simply encode a score for how well the output matches the input. If 25% of trees in summer are brown, perhaps the output should also have 25% brown. The model scores itself on frequencies as well as correctness.
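
A rough sketch of that scoring idea, with made-up attribute names and numbers:

  from collections import Counter

  def frequency_penalty(outputs, attribute, reference_freqs):
      # L1 distance between observed and reference attribute frequencies
      counts = Counter(o[attribute] for o in outputs)
      total = len(outputs)
      return sum(abs(counts[v] / total - p) for v, p in reference_freqs.items())

  def batch_score(outputs, correctness, attribute, reference_freqs, weight=1.0):
      # mean per-image correctness minus a penalty for mismatched frequencies
      return sum(correctness) / len(correctness) - weight * frequency_penalty(
          outputs, attribute, reference_freqs)

  # 4 generated summer trees; the reference says 25% of them should be brown
  trees = [{"color": "green"}, {"color": "green"}, {"color": "brown"}, {"color": "green"}]
  print(batch_score(trees, [0.9, 0.8, 0.85, 0.9], "color", {"brown": 0.25, "green": 0.75}))
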
replies(2): >>spywar+M5 >>astran+o6
4. Ludwig+84[view] [source] 2022-05-23 22:06:26
>>karpie+(OP)
> If you want the model to understand how the word "nurse" is usually used, without regard for what a "nurse" actually is, then associating it with female is fine.

That’s a distinction without a difference. Meaning is use.

replies(2): >>tines+75 >>mdp202+F6
◧◩
5. tines+75[view] [source] [discussion] 2022-05-23 22:11:16
>>Ludwig+84
Not really; the gender of a nurse is accidental, other properties are essential.
replies(3): >>codeth+18 >>paisaw+2b >>Ludwig+rd
6. bufbup+E5[view] [source] 2022-05-23 22:14:32
>>karpie+(OP)
At the end of the day, if you ask for a nurse, should the model output a male or female by default? If the input text lacks context/nuance, then the model must have some bias to infer the user's intent. This holds true for any image it generates; not just the politically sensitive ones. For example, if I ask for a picture of a person, and don't get one with pink hair, is that a shortcoming of the model?

I'd say that bias is only an issue if it's unable to respond to additional nuance in the input text. For example, if I ask for a "male nurse" it should be able to generate the less likely combination. Same with other races, hair colors, etc... Trying to generate a model that's "free of correlative relationships" is impossible because the model would never have the infinitely pedantic input text to describe the exact output image.

replies(5): >>karpie+V6 >>slg+G7 >>sangno+Nc >>pshc+Wg >>webmav+MY7
◧◩◪
7. spywar+M5[view] [source] [discussion] 2022-05-23 22:15:06
>>jvalen+s3
Suppose 10% of people have green skin. And 90% of those people have broccoli hair. White people don't have broccoli hair.

What percent of people should be rendered as white people with broccoli hair? What if you request green people. Or broccoli haired people. Or white broccoli haired people? Or broccoli haired nazis?

It gets hard with these conditional probabilities.
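
A worked version of the hypothetical above (treating everyone who isn't green as white, per the parent's setup), reading the conditionals off the joint distribution:

  # all numbers come from the hypothetical above: 10% green-skinned,
  # 90% of those have broccoli hair, white people never do
  joint = {
      ("green", "broccoli"): 0.10 * 0.90,
      ("green", "other"):    0.10 * 0.10,
      ("white", "broccoli"): 0.0,
      ("white", "other"):    0.90,
  }

  def p_hair_given_skin(hair, skin):
      # P(hair | skin), read off the joint table
      p_skin = sum(p for (s, _), p in joint.items() if s == skin)
      return joint[(skin, hair)] / p_skin if p_skin else 0.0

  print(joint[("white", "broccoli")])            # 0.0 of all renders
  print(p_hair_given_skin("broccoli", "green"))  # 0.9 when "green people" is requested
  print(p_hair_given_skin("broccoli", "white"))  # 0.0 -- yet the combination can still be requested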

◧◩◪
8. astran+o6[view] [source] [discussion] 2022-05-23 22:18:25
>>jvalen+s3
The only reason these models work is that we don’t interfere with them like that.

Your description is closer to how the open source CLIP+GAN models did it - if you ask for “tree” it starts growing the picture towards treeness until it’s all averagely tree-y rather than being “a picture of a single tree”.

It would be nice if asking for N samples got a diversity of traits you didn’t explicitly ask for. OpenAI seems to solve this by not letting you see it generate humans at all…

◧◩
9. mdp202+F6[view] [source] [discussion] 2022-05-23 22:20:38
>>Ludwig+84
Very certainly not, since use is individual and thus a function of competence. So, adherence to meaning depends on the user. Conflict resolution?

And anyway, contextually, the representational natures of "use" (instances) and of "meaning" (definition) are completely different.

replies(2): >>layer8+B9 >>Ludwig+Gd
◧◩
10. karpie+V6[view] [source] [discussion] 2022-05-23 22:22:40
>>bufbup+E5
> At the end of the day, if you ask for a nurse, should the model output a male or female by default?

Randomly pick one.

> Trying to generate a model that's "free of correlative relationships" is impossible because the model would never have the infinitely pedantic input text to describe the exact output image.

Sure, and you can never make a medical procedure 100% safe. Doesn't mean that you don't try to make them safer. You can trim the obvious low-hanging fruit though.

replies(3): >>calvin+i8 >>pxmpxm+b9 >>nmfish+LJ5
◧◩
11. ar_lan+p7[view] [source] [discussion] 2022-05-23 22:25:17
>>jdashg+N1
You could stipulate that it roll a die based on percentage results - if 70% of Americans are "white", then 70% of the time show a white person - 13% of the time the result should be black, etc.

That's excessively simplified but wouldn't this drop the stereotype and better reflect reality?
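
One way that die roll could be wired up in practice, as a sketch only: sample an attribute according to target shares and splice it into the prompt before generation (the shares below are the parent's 70/13 with the remainder lumped together):

  import random

  target_shares = {"white": 0.70, "Black": 0.13, "other": 0.17}

  def augment_subject(subject, shares):
      # roll the die, then condition the generator on the outcome explicitly
      attribute = random.choices(list(shares), weights=list(shares.values()), k=1)[0]
      return subject if attribute == "other" else f"{attribute} {subject}"

  print(augment_subject("nurse", target_shares))  # e.g. "Black nurse" about 13% of the time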

replies(2): >>ghayes+y8 >>SnowHi+Q8
◧◩
12. slg+G7[view] [source] [discussion] 2022-05-23 22:27:22
>>bufbup+E5
This type of bias sounds a lot easier to explain away as a non-issue when we are using "nurse" as the hypothetical prompt. What if the prompt is "criminal", "rapist", or some other negative? Would that change your thought process or would you be okay with the system always returning a person of the same race and gender that statistics indicate is the most likely? Do you see how that could be a problem?
replies(3): >>tines+i9 >>true_r+Bb >>rpmism+qd
◧◩◪
13. codeth+18[view] [source] [discussion] 2022-05-23 22:29:25
>>tines+75
While not essential, I wouldn't exactly call the gender "accidental":

> We investigated sex differences in 473,260 adolescents’ aspirations to work in things-oriented (e.g., mechanic), people-oriented (e.g., nurse), and STEM (e.g., mathematician) careers across 80 countries and economic regions using the 2018 Programme for International Student Assessment (PISA). We analyzed student career aspirations in combination with student achievement in mathematics, reading, and science, as well as parental occupations and family wealth. In each country and region, more boys than girls aspired to a things-oriented or STEM occupation and more girls than boys to a people-oriented occupation. These sex differences were larger in countries with a higher level of women's empowerment. We explain this counter-intuitive finding through the indirect effect of wealth. Women's empowerment is associated with relatively high levels of national wealth and this wealth allows more students to aspire to occupations they are intrinsically interested in.

Source: https://psyarxiv.com/zhvre/ (HN discussion: https://news.ycombinator.com/item?id=29040132)

replies(2): >>daenz+Za >>astran+Mc
◧◩◪
14. calvin+i8[view] [source] [discussion] 2022-05-23 22:30:48
>>karpie+V6
what if I asked the model to show me a sunday school photograph of baptists in the National Baptist Convention?
replies(1): >>rvnx+Ib
◧◩◪
15. ghayes+y8[view] [source] [discussion] 2022-05-23 22:32:31
>>ar_lan+p7
Is this going to be hand-rolled? Do you change the prompt you pass to the network to reflect the desired outcomes?
◧◩◪
16. SnowHi+Q8[view] [source] [discussion] 2022-05-23 22:34:32
>>ar_lan+p7
No, because a user will see a particular image, not the statistical ensemble. It will at times show an Eskimo without a hand because they do statistically exist. But the user definitely does not want that.
◧◩◪
17. pxmpxm+b9[view] [source] [discussion] 2022-05-23 22:37:24
>>karpie+V6
> Randomly pick one.

How does the model back out the "certain people would like to pretend it's a fair coin toss that a randomly selected nurse is male or female" feature?

It won't be in any representative training set, so you're back to fishing for stock photos on getty rather than generating things.

replies(1): >>shadow+Md
◧◩◪
18. tines+i9[view] [source] [discussion] 2022-05-23 22:38:16
>>slg+G7
Not the person you responded to, but I do see how someone could be hurt by that, and I want to avoid hurting people. But is this the level at which we should do it? Could skewing search results, i.e. hiding the bias of the real world, give us the impression that everything is fine and we don't need to do anything to actually help people?

I have a feeling that we need to be real with ourselves and solve problems and not paper over them. I feel like people generally expect search engines to tell them what's really there instead of what people wish were there. And if the engines do that, people can get agitated!

I'd almost say that hurt feelings are prerequisite for real change, hard though that may be.

These are all really interesting questions brought up by this technology, thanks for your thoughts. Disclaimer, I'm a fucking idiot with no idea what I'm talking about.

replies(2): >>magica+Fc >>slg+Hd
◧◩◪
19. layer8+B9[view] [source] [discussion] 2022-05-23 22:39:43
>>mdp202+F6
Humans overwhelmingly learn meaning by use, not by definition.
replies(1): >>mdp202+2a
◧◩◪◨
20. mdp202+2a[view] [source] [discussion] 2022-05-23 22:42:22
>>layer8+B9
> Humans overwhelmingly learn meaning by use, not by definition

Preliminarily and provisionally. Then, they start discussing their concepts - it is the very definition of Intelligence.

replies(1): >>layer8+tc
◧◩◪◨
21. daenz+Za[view] [source] [discussion] 2022-05-23 22:48:43
>>codeth+18
The "Gender Equality Paradox"... there's a fascinating episode[0] about it. It's incredible how unscientific and ideologically-motivated one side comes off in it.

0. https://www.youtube.com/watch?v=_XsEsTvfT-M

◧◩◪
22. paisaw+2b[view] [source] [discussion] 2022-05-23 22:48:58
>>tines+75
How do you know this? Because you can, in your mind, divide the function of a nurse from the statistical reality of nursing?

Are the logical divisions you make in your mind really indicative of anything other than your arbitrary personal preferences?

replies(1): >>tines+hc
◧◩◪
23. true_r+Bb[view] [source] [discussion] 2022-05-23 22:53:57
>>slg+G7
Cultural biases aren't uniform across nations. If a prompt returns caucasians for nurses and other races for criminals, most people in my country would not read that as racism, simply because there are not, and never in history have been, enough caucasians resident for anyone to build significant race theories about them.

This is a far cry from say the USA where that would instantly trigger a response since until the 1960s there was a widespread race based segregation.

◧◩◪◨
24. rvnx+Ib[view] [source] [discussion] 2022-05-23 22:54:38
>>calvin+i8
The pictures I got from a similar model when asking for a "sunday school photograph of baptists in the National Baptist Convention": https://ibb.co/sHGZwh7
replies(1): >>calvin+4c
◧◩◪◨⬒
25. calvin+4c[view] [source] [discussion] 2022-05-23 22:58:52
>>rvnx+Ib
and how do we _feel_ about that outcome?
replies(1): >>andyba+wb9
◧◩◪◨
26. tines+hc[view] [source] [discussion] 2022-05-23 22:59:55
>>paisaw+2b
No, because there's at least one male nurse.
replies(1): >>paisaw+Ae
◧◩◪◨⬒
27. layer8+tc[view] [source] [discussion] 2022-05-23 23:01:22
>>mdp202+2a
Most humans don't do that for most things they have a notion of in their head. It would be much too time consuming to start discussing the meaning of even just a significant fraction of them. For a rough reference point, the English language has over 150,000 words whose meaning you could each discuss and try to come up with a definition for. Not to speak of the difficulty of making that set of definitions noncircular.
replies(1): >>mdp202+qn1
◧◩◪◨
28. magica+Fc[view] [source] [discussion] 2022-05-23 23:03:41
>>tines+i9
> Could skewing search results, i.e. hiding the bias of the real world

Which real world? The population you sample from is going to make a big difference. Do you expect it to reflect your day to day life in your own city? Own country? The entire world? Results will vary significantly.

replies(2): >>sangno+2d >>tines+wd
◧◩◪◨
29. astran+Mc[view] [source] [discussion] 2022-05-23 23:04:33
>>codeth+18
If you ask it to generate “nurse” surely the problem isn’t that it’s going to just generate women, it’s that it’s going to give you women in those Halloween sexy nurse costumes.

If it did, would you believe that’s a real representative nurse because an image model gave it to you?

◧◩
30. sangno+Nc[view] [source] [discussion] 2022-05-23 23:04:46
>>bufbup+E5
> At the end of the day, if you ask for a nurse, should the model output a male or female by default?

This depends on the application. As an example, it would be a problem if it's used as a CV-screening app that's implicitly down-ranking male applicants to nurse positions, resulting in fewer interviews for them.

◧◩◪◨⬒
31. sangno+2d[view] [source] [discussion] 2022-05-23 23:06:41
>>magica+Fc
For AI, "real world" is likely "the world, as seen by Silicon Valley."
◧◩◪
32. rpmism+qd[view] [source] [discussion] 2022-05-23 23:09:18
>>slg+G7
It's an unfortunate reflection of reality. There are three possible outcomes:

1. The model provides a reflection of reality, as politically inconvenient and hurtful as it may be.

2. The model provides an intentionally obfuscated version with either random traits or non correlative traits.

3. The model refuses to answer.

Which of these is ideal to you?

replies(1): >>slg+5f
◧◩◪
33. Ludwig+rd[view] [source] [discussion] 2022-05-23 23:09:19
>>tines+75
Not really what? How does that contradict what I've said?
◧◩◪◨⬒
34. tines+wd[view] [source] [discussion] 2022-05-23 23:09:39
>>magica+Fc
I'd say it doesn't actually matter, as long as the population sampled is made clear to the user.

If I ask for pictures of Japanese people, I'm not shocked when all the results are of Japanese people. If I asked for "criminals in the United States" and all the results are black people, that should concern me, not because the data set is biased but because the real world is biased and we should do something about that. The difference is that I know what set I'm asking for a sample from, and I can react accordingly.

replies(3): >>magica+ph >>nyolfe+Mi >>jfoste+971
◧◩◪
35. Ludwig+Gd[view] [source] [discussion] 2022-05-23 23:10:31
>>mdp202+F6
Definition is an entirely artificial construct and doesn't equate to meaning. Definition depends on other words that you also have to understand.
replies(1): >>mdp202+hb1
◧◩◪◨
36. slg+Hd[view] [source] [discussion] 2022-05-23 23:10:32
>>tines+i9
>Could skewing search results, i.e. hiding the bias of the real world

Your logic seems to rest on this assumption which I don't think is justified. "Skewing search results" is not the same as "hiding the biases of the real world". Showing the most statistically likely result is not the same as showing the world how it truly is.

A generic nurse is statistically going to be female most of the time. However, a model that returns every nurse as female is not showing the real world as it is. It is exaggerating and reinforcing the bias of the real world. It inherently requires a more advanced model to actually represent the real world. I think it is reasonable for the creators to avoid sharing models known to not be smart enough to avoid exaggerating real world biases.

replies(1): >>roboca+wo
◧◩◪◨
37. shadow+Md[view] [source] [discussion] 2022-05-23 23:11:22
>>pxmpxm+b9
Yep, that's the hard problem. Google is not comfortable releasing the API for this until they have it solved.
replies(1): >>zarzav+th
◧◩◪◨⬒
38. paisaw+Ae[view] [source] [discussion] 2022-05-23 23:16:48
>>tines+hc
Please don't waste time with this kind of obtuse response. This fact says nothing about why nursing is a female-dominated career. You claim to know that this is just an accidental fact of history or society -- how do you know that?
replies(1): >>tines+Jg
◧◩◪◨
39. slg+5f[view] [source] [discussion] 2022-05-23 23:20:53
>>rpmism+qd
What makes you think those are the only options? Why can't we have an option that the model returns a range of different outputs based off a prompt?

A model that returns 100% of nurses as female might be statistically more accurate than a model that returns 50% of nurses as female, but it is still not an accurate reflection of the real world. I agree that the model shouldn't return a male nurse 50% of the time. Yet an accurate model needs to be able to occasionally return a male nurse without being directly prompted for a "male nurse". Anything else would also be inaccurate.

replies(1): >>rpmism+gf
◧◩◪◨⬒
40. rpmism+gf[view] [source] [discussion] 2022-05-23 23:22:06
>>slg+5f
So, the model should have a knowledge of political correctness, and return multiple results if the first choice might reinforce a stereotype?
replies(1): >>slg+6g
◧◩◪◨⬒⬓
41. slg+6g[view] [source] [discussion] 2022-05-23 23:29:24
>>rpmism+gf
I never said anything about political correctness. You implied that you want a model that "provides a reflection of reality". All nurses being female is not "a reflection of reality". It is a distortion of reality because the model doesn't actually understand gender or nurses.
replies(1): >>rpmism+LI
◧◩◪◨⬒⬓
42. tines+Jg[view] [source] [discussion] 2022-05-23 23:35:37
>>paisaw+Ae
I meant "accidental" in the Aristotelian sense: https://plato.stanford.edu/entries/essential-accidental/
replies(1): >>paisaw+dt
◧◩
43. pshc+Wg[view] [source] [discussion] 2022-05-23 23:37:19
>>bufbup+E5
Perhaps to avoid this issue, future versions of the model would throw an error like “bias leak: please specify a gender for the nurse at character 32”
◧◩◪◨⬒⬓
44. magica+ph[view] [source] [discussion] 2022-05-23 23:40:56
>>tines+wd
> If I asked for "criminals in the United States" and all the results are black people, that should concern me, not because the data set is biased

Well the results would unquestionably be biased. All results being black people wouldn't reflect reality at all, and hurting feelings to enact change seems like a poor justification for incorrect results.

> I'd say it doesn't actually matter, as long as the population sampled is made clear to the user.

Ok, and let's say I ask for "criminals in Cheyenne Wyoming" and it doesn't know the answer to that, should it just do its best to answer? Seems risky if people are going to get fired up about it and act on this to get "real change".

That seems like a good parallel to what we're talking about here, since it's very unlikely that crime statistics were fed into this image generating model.

◧◩◪◨⬒
45. zarzav+th[view] [source] [discussion] 2022-05-23 23:41:30
>>shadow+Md
But why is it a problem? The AI is just a mirror showing us ourselves. That’s a good thing. How does it help anyone to make an AI that presents a fake world so that we can pretend that we live in a world that we actually don’t? Disassociation from reality is more dangerous than bias.
replies(3): >>shadow+Aj >>astran+Ct >>Daishi+MD
◧◩◪◨⬒⬓
46. nyolfe+Mi[view] [source] [discussion] 2022-05-23 23:53:02
>>tines+wd
> If I asked for "criminals in the United States" and all the results are black people,

curiously, this search actually only returns white people for me on GIS

◧◩◪◨⬒⬓
47. shadow+Aj[view] [source] [discussion] 2022-05-23 23:59:02
>>zarzav+th
> The AI is just a mirror showing us ourselves.

That's one hypothesis.

◧◩◪◨⬒
48. roboca+wo[view] [source] [discussion] 2022-05-24 00:40:43
>>slg+Hd
> I think it is reasonable for the creators to avoid sharing models known to not be smart enough to avoid exaggerating real world biases.

Every model will have some random biases. Some of those random biases will undesirably exaggerate the real world. Every model will undesirably exaggerate something. Therefore no model should be shared.

Your goal is nice, but impractical?

replies(2): >>slg+Sr >>barney+p51
◧◩◪◨⬒⬓
49. slg+Sr[view] [source] [discussion] 2022-05-24 01:11:18
>>roboca+wo
Fittingly, your comment falls into the same criticism I had of the model. It shows a refusal/inability to engage with the full complexities of the situation.

I said "It is reasonable... to avoid sharing models". That is an acknowledgment that the creators are acting reasonably. It does not imply anything as extreme as "no model should be shared". The only way to get from A to B there is for you to assume that I think there is only one reasonable response and every other possible reaction is unreasonable. Doesn't that seem like a silly assumption?

replies(1): >>roboca+H21
50. sineno+Os[view] [source] 2022-05-24 01:19:28
>>karpie+(OP)
> It depends on whether you'd like the model to learn causal or correlative relationships.

I expect that in the practical limit of achievable scale, the regularization pressure inherent to the process of training these models converges to https://en.wikipedia.org/wiki/Minimum_description_length and the correlative relationships get optimized away, leaving mostly the true causal relationships inherent to the data-generating process.

◧◩◪◨⬒⬓⬔
51. paisaw+dt[view] [source] [discussion] 2022-05-24 01:22:31
>>tines+Jg
Yes I understand that. That is only a description of what mental arithmetic you can do if you define your terms arbitrarily conveniently.

"It is possible for a man to provide care" is not the same statement as "it is possible for a sexually dimorphic species in a competitive, capitalistic society (...add more qualifications here) to develop a male-dominated caretaking role"

You're just asserting that you could imagine male nurses without creating a logical contradiction, unlike e.g. circles that have corners. That doesn't mean nursing could be a male-dominated industry under current constraints.

◧◩◪◨⬒⬓
52. astran+Ct[view] [source] [discussion] 2022-05-24 01:25:45
>>zarzav+th
In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe." "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play" Minsky shut his eyes, "Why do you close your eyes?", Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened.

The AI doesn’t know what’s common or not. You don’t know if it’s going to be correct unless you’ve tested it. Just assuming whatever it comes out with is right is going to work as well as asking a psychic for your future.

replies(1): >>zarzav+FI
◧◩◪◨⬒⬓
53. Daishi+MD[view] [source] [discussion] 2022-05-24 03:13:19
>>zarzav+th
The AI is a mirror of the text and image corpora it was presented, as parsed and sanitized by the team in question.
◧◩◪◨⬒⬓⬔
54. zarzav+FI[view] [source] [discussion] 2022-05-24 04:14:16
>>astran+Ct
The model makes inferences about the world from training data. When it sees more female nurses than male nurses in its training set, it infers that most nurses are female. This is a correct inference.

If they were to weight the training data so that there were an equal number of male and female nurses, then it may well produce male and female nurses with equal probability, but it would also learn an incorrect understanding of the world.

That is quite distinct from weighting the data so that it has a greater correspondence to reality. For example, if Africa is not represented well then weighting training data from Africa more strongly is justifiable.

The point is, it’s not a good thing for us to intentionally teach AIs a world that is idealized and false.

As these AIs work their way into our lives it is essential that they reproduce the world in all of its grit and imperfections, lest we start to disassociate from reality.

Chinese media (or insert your favorite unfree regime) also presents China as a utopia.
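
A sketch of the kind of reweighting described here, with illustrative region names and shares rather than real dataset statistics:

  # per-region sample weight = real-world share / dataset share
  dataset_share = {"North America": 0.55, "Europe": 0.30, "Asia": 0.10, "Africa": 0.05}
  world_share   = {"North America": 0.08, "Europe": 0.10, "Asia": 0.60, "Africa": 0.22}

  sample_weights = {r: world_share[r] / dataset_share[r] for r in dataset_share}
  print(sample_weights)
  # Africa samples get ~4.4x weight, North America ~0.15x: a correction toward
  # reality, not toward an idealized uniform split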

replies(2): >>astran+pJ >>shadow+Ts1
◧◩◪◨⬒⬓⬔
55. rpmism+LI[view] [source] [discussion] 2022-05-24 04:14:57
>>slg+6g
A majority of nurses are women, therefore a woman would be a reasonable representation of a nurse. Obviously that's not a helpful stereotype, because male nurses exist and face challenges due to not fitting the stereotypes. The model is dumb, and outputs what it's seen. Is that wrong?
replies(1): >>webmav+n18
◧◩◪◨⬒⬓⬔⧯
56. astran+pJ[view] [source] [discussion] 2022-05-24 04:22:33
>>zarzav+FI
> The model makes inferences about the world from training data. When it sees more female nurses than male nurses in its training set, if infers that most nurses are female. This is a correct inference.

No it is not, because you don't know if it's been shown each of its samples the same number of times, or if it overweighted some samples more than others. There are normal reasons both of these would happen.
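
A tiny worked example of why, with invented counts: duplication in the training set changes the frequency the model effectively sees, so output frequency reflects the data pipeline rather than the world directly.

  unique_images = {"female nurse": 60, "male nurse": 40}
  copies_seen   = {"female nurse": 3, "male nurse": 1}   # e.g. stock photos rehosted many times

  effective = {k: unique_images[k] * copies_seen[k] for k in unique_images}
  total = sum(effective.values())
  print({k: round(v / total, 2) for k, v in effective.items()})
  # {'female nurse': 0.82, 'male nurse': 0.18} -- not the 60/40 split of unique images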

◧◩◪◨⬒⬓⬔
57. roboca+H21[view] [source] [discussion] 2022-05-24 07:40:59
>>slg+Sr

  'When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less.'

  'The question is,' said Alice, 'whether you can make words mean so many different things.'

  'The question is,' said Humpty Dumpty, 'which is to be master — that's all.'
◧◩◪◨⬒⬓
58. barney+p51[view] [source] [discussion] 2022-05-24 08:09:53
>>roboca+wo
> Your goal is nice, but impractical?

If the only way to do AI is to encode racism etc, then we shouldn't be doing AI at all.

◧◩◪◨⬒⬓
59. jfoste+971[view] [source] [discussion] 2022-05-24 08:23:42
>>tines+wd
In a way, if the model brings back an image for "criminals in the United States" that isn't based on the statistical reality, isn't it essentially complicit in sweeping a major social issue under the rug?

We may not like what it shows us, but blindfolding ourselves is not the solution to that problem.

replies(1): >>webmav+TZ7
◧◩◪◨
60. mdp202+hb1[view] [source] [discussion] 2022-05-24 09:05:40
>>Ludwig+Gd
You are thinking of the literal definition - that "made of literal letters".

Mental definition is that "«artificial»" (out of the internal processing) construct made of relations that reconstructs a meaning. Such ontology is logical - "this is that". (It would not be made of memories, which are processed, deconstructed.)

Concepts are internally refined: their "implicit" definition (a posterior reading of the corresponding mental low-level) is refined.

61. drdeca+dn1[view] [source] 2022-05-24 10:58:41
>>karpie+(OP)
The meaning of the word "nurse" is determined by how the word "nurse" is used and understood.

Perhaps what "nurse" means isn't what "nurse" should mean, but what people mean when they say "nurse" is what "nurse" means.

◧◩◪◨⬒⬓
62. mdp202+qn1[view] [source] [discussion] 2022-05-24 10:59:55
>>layer8+tc
(Mental entities are very many more than the hundred thousand, out of composition, cartesianity etc. So-called "protocols" (after logical positivism) are part of them, relating more entities with space and time. Also, by speaking of "circular definitions" you are, like others, confusing mental definitions with formal definitions.)

So? Draw your consequences.

Following what was said, you are stating that "a staggering large number of people are unintelligent". Well, ok, that was noted. Scolio: if unintelligent, they should refrain from expressing judgement (you are really stating their non-judgement), why all the actual expression? If unintelligent actors, they are liabilities, why this overwhelming employment in the job market?

Thing is, as unintelligent as you depict them quantitatively, the internal processing that constitutes intelligence proceeds in many even when scarce, even when choked by some counterproductive bad formation - processing is the natural functioning. And then, the right Paretian side will "do the job" that the vast remainder will not do, and process notions actively (more, "encouragingly" - the process is importantly unconscious, many low-level layers are) and proficiently.

And the very Paretian prospect will reveal, there will be a number of shallow takes, largely shared, on some idea, and other intensively more refined takes, more rare, on the same idea. That shows you a distinction between "use" and the asymptotic approximation to meanings as achieved by intellectual application.

◧◩◪◨⬒⬓⬔⧯
63. shadow+Ts1[view] [source] [discussion] 2022-05-24 11:45:08
>>zarzav+FI
> As these AIs work their way into our lives it is essential that they reproduce the world in all of its grit and imperfections...

Is it? I'm reminded of the Microsoft Tay experiment, where they attempted to train an AI by letting Twitter users interact with it.

The result was a non-viable mess that nobody liked.

◧◩◪
64. nmfish+LJ5[view] [source] [discussion] 2022-05-25 16:56:52
>>karpie+V6
What about preschool teacher?

I say this because I’ve been visiting a number of childcare centres over the past few days and I still have yet to see a single male teacher.

◧◩
65. webmav+MY7[view] [source] [discussion] 2022-05-26 07:53:26
>>bufbup+E5
> If the input text lacks context/nuance, then the model must have some bias to infer the user's intent. This holds true for any image it generates; not just the politically sensitive ones. For example, if I ask for a picture of a person, and don't get one with pink hair, is that a shortcoming of the model?

You're ignoring that these models are stochastic. If I ask for a nurse and always get an image of a woman in scrubs, then yes, the model exhibits bias. If I get a male nurse half the time, we can say the model is unbiased WRT gender, at least. The same logic applies to CEOs always being old white men, criminals always being Black men, and so on. Stochastic models can output results that when aggregated exhibit a distribution from which we can infer bias or the lack thereof.
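
A sketch of that measurement, where generate_image and classify stand in for whatever generator and attribute classifier are available; the bias is read off the aggregate distribution, not any single image:

  def measure_rate(generate_image, classify, prompt, target="male", n=500):
      # fraction of n independent generations whose classified attribute == target
      outputs = [classify(generate_image(prompt)) for _ in range(n)]
      return outputs.count(target) / n

  # rate = measure_rate(model.generate, gender_classifier, "a photo of a nurse")
  # compare `rate` to a real-world reference share for the occupation to quantify bias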

◧◩◪◨⬒⬓⬔
66. webmav+TZ7[view] [source] [discussion] 2022-05-26 08:04:34
>>jfoste+971
At the very least we should expect that the results not be more biased than reality. Not all criminals are Black. Not all are men. Not all are poor. If the model (which is stochastic) only outputs poor Black men, rather than a distribution that is closer to reality, it is exhibiting bias and it is fair to ask why the data it picked that bias up from is not reflective of reality.
replies(1): >>jfoste+718
◧◩◪◨⬒⬓⬔⧯
67. jfoste+718[view] [source] [discussion] 2022-05-26 08:18:15
>>webmav+TZ7
Yeah, it makes sense for the results to simply reflect reality as closely as possible. No bias in any direction is desirable.
replies(1): >>webmav+81b
◧◩◪◨⬒⬓⬔⧯
68. webmav+n18[view] [source] [discussion] 2022-05-26 08:21:45
>>rpmism+LI
It isn't wrong, but we aren't talking about the model somehow magically transcending the data it's seen. We're talking about making sure the data it sees is representative, so the results it outputs are as well.

Given that male nurses exist (and though less common, certainly aren't rare), why has the model apparently seen so few?

There actually is a fairly simple explanation: because the images it has seen labelled "nurse" are more likely from stock photography sites rather than photos of actual nurses, and stock photography is often stereotypical rather than typical.

◧◩◪◨⬒⬓
69. andyba+wb9[view] [source] [discussion] 2022-05-26 16:38:46
>>calvin+4c
It's gone now. What was it?
◧◩◪◨⬒⬓⬔⧯▣
70. webmav+81b[view] [source] [discussion] 2022-05-27 09:05:19
>>jfoste+718
Sarcasm, eh? At least there's no way THAT could be taken the wrong way.