Nowhere in the user's query is there any specification of a preferred skin color.
So it sorts the examples it found on the internet and returns the most typical ones.
Essentially answering the query "SELECT * FROM `non-professional hairstyles` ORDER BY score DESC LIMIT 10".
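To make that analogy concrete, here is a minimal Python sketch of the "order by score, take the top N" behavior. The corpus, scores, and demographic tags are all invented stand-ins, not anything from Google's actual pipeline; the point is only that the query itself never mentions skin color, so any skew in the output comes from what was crawled and how it was scored.

```python
# Minimal sketch of "rank by score, take the top N" over a crawled corpus.
# All data below is invented for illustration.
from collections import Counter

crawled_examples = [
    # (image_id, relevance_score, demographic_of_subject) -- hypothetical
    ("img_001", 0.92, "black"), ("img_002", 0.90, "black"),
    ("img_003", 0.88, "white"), ("img_004", 0.85, "black"),
    ("img_005", 0.83, "black"), ("img_006", 0.80, "white"),
    ("img_007", 0.78, "black"), ("img_008", 0.75, "black"),
    ("img_009", 0.71, "asian"), ("img_010", 0.70, "black"),
    ("img_011", 0.65, "white"),
]

def top_k(examples, k=10):
    """Equivalent of SELECT * ... ORDER BY score DESC LIMIT k -- nothing more."""
    return sorted(examples, key=lambda e: e[1], reverse=True)[:k]

results = top_k(crawled_examples)
print(Counter(tag for _, _, tag in results))
# The demographic skew of the output is entirely a function of the corpus
# and the scores; the ranking step itself never looks at skin color.
```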
It's like searching Google for "best place for a wedding night".
You may get 3 out of the top 10 results in Santorini, Greece.
Yes, you could have a human remove these biases because you feel that Sri Lanka is the best place for a wedding, but what if there is a consensus that Santorini really is the most praised on the forums and websites Google crawled?
You're telling me those are all the most non-professional hairstyles available? That this is a reasonable assessment? That fairly standard, well-kept, work-appropriate curly black hair is roughly equivalent to the pink-haired, three-foot-wide hairstyle worn by one of the only white people in the "unprofessional" search?
Each and every one of them is less workplace appropriate than, say, http://www.7thavenuecostumes.com/pictures/750x950/P_CC_70594... ?
It's a simple case of sample bias.
The fix is to keep adding more examples, so the algorithms get as close as possible to the "average reality".
At some point we may ultimately reach a state where robots collect data directly from the real world rather than from the internet (even closer to reality).
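As a toy illustration of that claim, the sketch below shows a sample average converging toward an assumed underlying rate as the number of examples grows. It assumes the extra examples are drawn representatively, and every number in it is invented.

```python
# Toy sketch: as the number of (representatively drawn) examples grows,
# the measured average drifts toward the underlying "real world" rate.
# TRUE_RATE and the sample sizes are invented for illustration.
import random

random.seed(42)
TRUE_RATE = 0.10  # assumed real-world frequency of the trait being measured

for n in (10, 100, 10_000, 1_000_000):
    sample = [random.random() < TRUE_RATE for _ in range(n)]
    print(n, sum(sample) / n)  # estimate approaches 0.10 as n grows
```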
Censoring results sounds like the best recipe for a dystopian world where only one view is right.
You know that race has a large effect on hair, right?