zlacker

[return to "Facial Recognition Leads To False Arrest Of Black Man In Detroit"]
1. vmcept+p4[view] [source] 2020-06-24 15:06:21
>>vermon+(OP)
> Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people.

The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team and board members represented the population. Even just enough representation that there is a countering voice in the room would have helped.

It would have just been "this edge case is not an edge case at all, axe it."

Accurately addressing a market is the point of a corporation, more than maintaining an illusion of meritocracy among its employees.

2. JangoS+0a[view] [source] 2020-06-24 15:29:05
>>vmcept+p4
This is so incredibly common, it's embarrassing. I was on an expert panel about "AI and Machine Learning in Healthcare and Life Sciences" back in January, and I made it a point throughout my discussions to keep emphasizing the amount of bias inherent in our current systems, which ends up getting amplified and codified in machine learning systems. Worse yet, it ends up justifying the bias based on the false pretense that the systems built are objective and the data doesn't lie.
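
To make the mechanism concrete, here's a toy sketch I sometimes use (nothing from the panel itself -- purely synthetic data and a made-up decision rule): train a model on labels produced by a biased historical process and it reproduces the disparity, while scoring well on exactly the metrics people point to as proof of objectivity.

    # Hypothetical illustration with synthetic data: historical decisions
    # required a higher score from group B, and those decisions become the
    # "ground truth" labels the model is trained on.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
    need = rng.normal(0, 1, n)      # identical distribution in both groups
    biased_label = (need - 0.8 * group > 0).astype(int)

    X = np.column_stack([need, group])
    model = LogisticRegression().fit(X, biased_label)
    pred = model.predict(X)

    print("accuracy vs. the biased labels:", (pred == biased_label).mean())
    for g in (0, 1):
        print(f"group {g} positive rate:", pred[group == g].mean())
    # High "accuracy", yet group B's positive rate stays far below group A's
    # even though 'need' is identically distributed -- the historical bias is
    # now codified and looks like an objective model.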

Afterward, a couple people asked me to put together a list of the examples I cited in my talk. I'll be adding this to my list of examples:

* A hospital AI algorithm that discriminated against black patients when deciding who received additional healthcare outreach, amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6

* Misdiagnosing people of African descent because genomic variants were misclassified as pathogenic, due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092

* The dangers of ML in diagnosing melanoma exacerbating healthcare disparities for darker-skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...

And some other relevant, but not healthcare examples as well:

* When Google's hate-speech-detecting AI inadvertently censored anyone who used the vernacular referred to in this article as "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...

* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...

* When AI criminal risk prediction software, used by judges in deciding the severity of punishment for those convicted, predicted a higher chance of future offense for a young, black first-time offender than for an older, white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...

And here's some good news though:

* A hospital used AI to enable care and cut costs (though the reporting seems to oversimplify and gloss over enough to make the actual analysis of the results a little suspect). https://www.healthcareitnews.com/news/flagler-hospital-uses-...

3. snapet+7z1[view] [source] 2020-06-24 22:07:07
>>JangoS+0a
I agree 100% about how common it is. The industry also pays lip service to doing something about it. My last job was at a research institution and we had a data ethics czar, who's a very smart (stats PhD) guy and someone I consider a friend. A lot of his job was to go around the org and to conferences talking about things like this.

While there's a lot of head nodding, nothing is ever actually addressed in day-to-day operations. Data scientists barely know what's going on when they throw things through TensorFlow. What matters is the outcome and the confusion matrix at the end.
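
The per-group check that would surface this isn't even hard; it just never makes it into the day-to-day work. A rough sketch of what I mean (y_true, y_pred and group here are placeholders for whatever your pipeline spits out) -- break the error rates out by group instead of reporting one aggregate number:

    # Sketch: per-group false positive / false negative rates instead of a
    # single aggregate confusion matrix. An aggregate number hides exactly
    # the failure mode behind the Williams arrest.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    def per_group_error_rates(y_true, y_pred, group):
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        for g in np.unique(group):
            m = group == g
            tn, fp, fn, tp = confusion_matrix(
                y_true[m], y_pred[m], labels=[0, 1]).ravel()
            fpr = fp / (fp + tn) if (fp + tn) else float("nan")
            fnr = fn / (fn + tp) if (fn + tp) else float("nan")
            print(f"group {g}: FPR={fpr:.2%}  FNR={fnr:.2%}")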

I say this as someone who works in data and implements AI/ML platforms. Mr. Williams needs to find the biggest ambulance-chasing lawyer and file civil suits against not only the law enforcement agencies involved, but everyone at DataWorks, top down, from the president to the data scientist to the lowly engineer who put this in production.

These people have the power to ruin lives. They need to be made an example of and held accountable for the quality of their work.

4. vmcept+6D1[view] [source] 2020-06-24 22:38:58
>>snapet+7z1
Sounds like a license for developing software is inevitable then.