Unless you think white women are immune to unprofessional hairstyles, and black women incapable of them, there's a race problem illustrated here even if you think the hairstyles shown are fairly categorized.
It's like blaming a friend for trying to phrase things nicely, and telling them to speak bluntly with zero concern for others instead. Unless you believe anyone trying to do good is being a hypocrite…
I, for one, like civility.
What should the right answer be, then?
If you pick a blonde, you offend the brown-haired.
If you pick blue eyes, you offend the brown-eyed.
And so on.
Siri takes this approach for a wide range of queries.
When you search on a search engine, the results are biased too, but they still shouldn't be artificially censored to fit particular political views.
I asked a model a few minutes ago (it's called t0pp, it's free to try online, and it's quite fascinating because it's uncensored):
"What is the name of the most beautiful man on Earth?
- He is called Brad Pitt."
==
Is it true in an objective way? Probably not.
Is there an actual answer? Probably yes: somewhere there is a man who scores better than the others.
Is it socially acceptable? Probably not.
The question is:
If you interviewed 100 people in the street and asked "What is the name of the most beautiful man on Earth?",
I'm pretty sure Brad Pitt would come up often.
Now, what about China?
We don't have many examples from there; they probably have no clue who Brad Pitt is, and there is probably someone else who is considered more beautiful by over 1B people
(t0pp tells me it's someone called "Zhu Zhu" :D )
==
Two solutions:
1) Censorship
-> Sorry, there is too much bias in Western data and we don't want to offend anyone: either no answer, or a generic overriding human answer that is safe for advertisers but totally useless ("the most beautiful human is you")
2) Adding more examples
-> Work on adding more examples from abroad, trying to converge on the "average human answer" (a rough sketch of this follows below).
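To illustrate (2), here is a toy sketch of what averaging answers across populations could look like. All the region names, population figures, and survey answers below are invented purely for illustration:

```python
# Hypothetical sketch of solution (2): instead of one culturally narrow
# dataset, collect survey answers from several populations and weight
# them by population size to approximate an "average human answer".
from collections import Counter

# region -> (population in millions, sampled survey answers)
surveys = {
    "Western": (1000, ["Brad Pitt", "Brad Pitt", "George Clooney"]),
    "China":   (1400, ["Zhu Zhu", "Zhu Zhu", "Brad Pitt"]),
    "India":   (1400, ["Hrithik Roshan", "Hrithik Roshan", "Zhu Zhu"]),
}

votes = Counter()
for population, answers in surveys.values():
    for answer, count in Counter(answers).items():
        # Each region contributes in proportion to its population,
        # split across the answers its respondents actually gave.
        votes[answer] += population * count / len(answers)

print(votes.most_common(1))  # the population-weighted plurality answer
```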
==
I really prefer solution (2) for the core algorithms and dataset development, rather than going through (1).
(1) is more a choice to make at the stage where you are building a virtual psychologist or a chat assistant, not when creating AI building blocks.
In this case you're (mostly) getting keyword matches, so it's answering a different question than the one you asked. It would be helpful if a question-answering AI gave you the question it decided to answer instead of just pretending it paid full attention to you, something like the toy sketch below.
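To make that concrete, here is a toy sketch (every question and answer in it is invented) of a keyword matcher that reports which stored question it actually decided to answer:

```python
# Toy sketch of the failure mode described above: a keyword-matching QA
# system that, instead of pretending it understood you, reports which
# stored question it actually matched. Everything here is illustrative.
KNOWN_QA = {
    "who is the most beautiful man in the world": "Brad Pitt",
    "who is the richest man in the world": "Elon Musk",
}

def keyword_answer(user_question: str) -> tuple[str, str]:
    """Return (question_actually_answered, answer) by keyword overlap."""
    words = set(user_question.lower().split())
    best = max(KNOWN_QA, key=lambda q: len(words & set(q.split())))
    return best, KNOWN_QA[best]

matched, answer = keyword_answer("What is the name of the most beautiful man on Earth?")
print(f"Interpreted as: {matched!r} -> {answer}")
```

Returning the matched question alongside the answer at least lets the user see when the system silently substituted a different question for theirs.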