zlacker

[return to "Facial Recognition Leads To False Arrest Of Black Man In Detroit"]
1. vmcept+p4[view] [source] 2020-06-24 15:06:21
>>vermon+(OP)
> Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people.

The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team, and board represented the population. Even better is enough representation that there is always a countering voice.

It would have just been "this edge case is not an edge case at all; axe it."

Accurately addressing a market is the point of a corporation, more than maintaining an illusion of meritocracy among its employees.

2. JangoS+0a[view] [source] 2020-06-24 15:29:05
>>vmcept+p4
This is so incredibly common, it's embarrassing. I was on an expert panel about "AI and Machine Learning in Healthcare and Life Sciences" back in January, and I made it a point throughout my discussions to keep emphasizing the amount of bias inherent in our current systems, which ends up getting amplified and codified in machine learning systems. Worse yet, it ends up justifying the bias based on the false pretense that the systems built are objective and the data doesn't lie.

Afterward, a couple people asked me to put together a list of the examples I cited in my talk. I'll be adding this to my list of examples:

* A hospital AI algorithm discriminating against black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6

* Misdiagnosing people of African descent because genomic variants were misclassified as pathogenic, due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092

* The dangers of ML in diagnosing melanoma exacerbating healthcare disparities for darker-skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...

And some other relevant, but not healthcare examples as well:

* When Google's hate speech detecting AI inadvertently censored anyone who used the vernacular this article refers to as "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...

* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...

* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first-time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...

And here's some good news, though:

* A hospital used AI to enable care and cut costs (though the reporting seems to oversimplify and gloss over enough to make the actual analysis of the results a little suspect). https://www.healthcareitnews.com/news/flagler-hospital-uses-...
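The amplification-and-codification mechanism described above can be sketched with a toy model. This is a minimal illustration with entirely synthetic data and hypothetical group names, not drawn from any of the linked studies: a naive model fit on historically biased approval decisions reproduces the disparity as if it were objective.

```python
import random

random.seed(0)

# Synthetic historical decisions: equally qualified applicants from two
# groups, but past (human) reviewers approved group "b" less often.
def biased_history(n=10000):
    data = []
    for _ in range(n):
        group = random.choice(["a", "b"])
        qualified = random.random() < 0.5  # same true rate in both groups
        approve_prob = 0.9 if qualified else 0.1
        if group == "b":
            approve_prob *= 0.6  # the historical bias
        label = random.random() < approve_prob
        data.append((group, qualified, label))
    return data

# A naive "model": the approval rate per (group, qualified) cell,
# learned straight from the biased labels.
def fit(data):
    counts = {}
    for group, qualified, label in data:
        key = (group, qualified)
        approved, total = counts.get(key, (0, 0))
        counts[key] = (approved + label, total + 1)
    return {k: approved / total for k, (approved, total) in counts.items()}

model = fit(biased_history())

# Equally qualified applicants now receive different scores purely by
# group: the historical bias is codified in the learned model.
print(round(model[("a", True)], 2), round(model[("b", True)], 2))
```

The point of the sketch: nothing in the fitting step is "subjective", yet the output is exactly as biased as the labels it was trained on.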

3. mtgp10+BP1[view] [source] 2020-06-25 00:24:36
>>JangoS+0a
>When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first-time offender than for an older white repeat felon.

>When Amazon's AI recruiting tool inadvertently filtered out resumes from women

>When Google's hate speech detecting AI inadvertently censored anyone who used the vernacular this article refers to as "African American English"

There's simply no indication that these aren't statistically valid priors. And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned. This is all based on the unfounded conflation of equality of outcome with equality of opportunity, and on the erasure of evidence that genes and culture play a role in behavior and life outcomes.

This is bad science.

4. JangoS+Xd3[view] [source] 2020-06-25 13:56:52
>>mtgp10+BP1
> There's simply no indication that these aren't statistically valid priors. And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned.

I'd suggest reading the sources I posted in my comment before responding with ill-conceived notions. Every single example I posted linked to the peer-reviewed scientific evidence (cited, published literature) supporting the points I summarized.

The only link I posted without peer-reviewed literature was the last one with the positive outcome, and that's the one I commented had suspect analysis.
