zlacker

[parent] [thread] 56 comments
1. ibudia+(OP)[view] [source] 2020-06-25 00:21:16
Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.

Most of us here can be excited about Facial Recognition technology but still know that it's not something to be deployed in the field. It's by no means ready. We might even weigh the ethics before building it as a toy.

But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

[1]:https://news.ycombinator.com/item?id=21339530

replies(8): >>ineeda+D4 >>nimbiu+x8 >>m0zg+49 >>zamale+Da >>99_00+Yt >>yread+Lv >>air7+T21 >>sgt101+5i1
2. ineeda+D4[view] [source] 2020-06-25 01:03:46
>>ibudia+(OP)
Can you add any details about how an automated system was used to fire you? I'm not familiar with systems like that.
replies(2): >>sillys+f5 >>ibudia+07
◧◩
3. sillys+f5[view] [source] [discussion] 2020-06-25 01:08:06
>>ineeda+D4
Seconded! Fascinating situation.
◧◩
4. ibudia+07[view] [source] [discussion] 2020-06-25 01:23:31
>>ineeda+D4
We had a whole discussion about it here a couple years ago: https://news.ycombinator.com/item?id=17350645
replies(1): >>ineeda+Lg
5. nimbiu+x8[view] [source] 2020-06-25 01:37:16
>>ibudia+(OP)
This shit right here. This is why I don't stop for the inventory control alarms at department store doorways if they go off. I know I've paid, and little sirens are just a nuisance at this point.

This is why I've never stopped for receipt checks, because it's my receipt, and I've paid. The security theatre is just bad comedy.

Just because the machine says I've done a no-no doesn't mean I can't come back and win a lawsuit later. It doesn't absolve cops of doing their jobs. I have a winning complexion, so I'll never enjoy a false positive, but if I do, I'll make sure it bankrupts whatever startup-incubator garbage decided to shill a replacement for real law enforcement.

replies(2): >>demado+Rn >>wilson+XN
6. m0zg+49[view] [source] 2020-06-25 01:42:19
>>ibudia+(OP)
> The trouble is not that the AI can be wrong

Exactly what I thought when I read about this. It's not like humans are great at matching faces either; in fact, machines have been better at facial recognition for over a decade now. I bet there are hundreds of people (of all races) in prison right now who are there simply because they were misidentified by a human. Human memory, even in the absence of bias and prejudice, is pretty fallible.

There is a notion of "mixture of experts" in machine learning. It's when you have two or more models that are not, by themselves, sufficiently good to make a robust prediction, but that make different kinds of mistakes, and you use the consensus estimate. The resulting estimate will be better than any model in isolation. The same should be done here - AI should be merely a signal, it is not a replacement for detective work, and what's described in the article is just bad policing. AI has very little to do with that.
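
To make the consensus idea concrete, here is a minimal sketch (the scores and their sources are made-up assumptions, and a real mixture of experts would also learn a gating function rather than use a flat average):

    # Toy "consensus of signals": average several independent, differently-wrong
    # estimates instead of trusting any single one. All numbers are illustrative.
    def consensus(scores):
        return sum(scores) / len(scores)

    face_match  = 0.52   # facial recognition similarity score
    alibi_check = 0.05   # independent evidence: suspect was elsewhere
    witness_id  = 0.30   # lineup identification, also fallible

    print(consensus([face_match, alibi_check, witness_id]))  # ~0.29, nowhere near grounds for arrest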

7. zamale+Da[view] [source] 2020-06-25 01:58:51
>>ibudia+(OP)
52% is little better than a coin flip. If you have a million individuals in your city, your confidence should be in the ballpark of 99.9999% (1 individual in 1 million). That has really been my concern with this: the software will report any facial match above 75% confidence. Apart from that being appallingly low confidence, no cop is going to pay attention to the percentage before arresting, or even killing, the individual.
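
To put rough numbers on the base-rate problem, here is a back-of-the-envelope Bayes sketch; the error rates are illustrative assumptions, not figures from the system in the article:

    # With 1 true suspect among 1,000,000 people, a detector that fires 52% of
    # the time on the suspect and 48% of the time on everyone else (nearly a
    # coin flip) barely moves the needle.
    prior = 1 / 1_000_000        # P(a randomly scanned face is the suspect)
    p_match_if_suspect  = 0.52
    p_match_if_innocent = 0.48

    posterior = (p_match_if_suspect * prior) / (
        p_match_if_suspect * prior + p_match_if_innocent * (1 - prior)
    )
    print(posterior)  # ~1.1e-06, still roughly one in a million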

Software can kill. This software can kill 50% of black people.

replies(7): >>ssss11+0c >>cortes+Ee >>dtwest+1g >>jml7c5+8z >>loup-v+aK >>threat+RT >>sokolo+f01
◧◩
8. ssss11+0c[view] [source] [discussion] 2020-06-25 02:10:47
>>zamale+Da
What a great comment. This encapsulates my concerns about the topic eloquently. The technology is not ready for use.
◧◩
9. cortes+Ee[view] [source] [discussion] 2020-06-25 02:37:33
>>zamale+Da
It would have to be even more accurate than that, because every person is going to be 'tested' multiple times... if everyone's face is scanned 100 times a day, the number of false positives is going to be even higher.
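
As a quick illustration of how repeated scanning compounds the problem (the per-scan rate is a made-up assumption, and real scans won't be fully independent, but the compounding effect is the point):

    # Even a small per-scan false-positive rate adds up fast when the same
    # innocent person is scanned many times a day.
    per_scan_false_positive = 0.001
    scans_per_day = 100

    p_false_hit_today = 1 - (1 - per_scan_false_positive) ** scans_per_day
    print(p_false_hit_today)  # ~0.095, roughly a 1-in-10 chance of a false flag per day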
replies(1): >>bigfud+sd1
◧◩
10. dtwest+1g[view] [source] [discussion] 2020-06-25 02:54:23
>>zamale+Da
Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Even if it was correct 99% of the time, we need to recognize that software can make mistakes. It is a tool, and people need to be responsible enough to use it correctly. I think I agree with your general idea here, but to put all of the blame on software strikes me as an incomplete assessment. Technically the software isn't killing anyone, irresponsible users of it are.

replies(9): >>ArtDev+ui >>toofy+vi >>czbond+el >>danans+ko >>bryanr+Yo >>numpad+Ot >>fivre+oB >>tikima+km1 >>BiteCo+sr1
◧◩◪
11. ineeda+Lg[view] [source] [discussion] 2020-06-25 03:01:09
>>ibudia+07
Thanks, that's insane. Sometimes I look at the HR group where I work and I'm astonished at how much (relatively easy to automate) work is still done manually or semi-manually. For example, today I had to send an email to a specific individual for what should be a simple form in the modern & best-of-breed HR system we use.

After reading your story, I am very glad that we probably have in aggregate 2 or 3 full-time employees doing things that might be automated away. It's not like that prevents mindless bureaucracy of all sorts, but something like your situation would certainly never happen.

◧◩◪
12. ArtDev+ui[view] [source] [discussion] 2020-06-25 03:21:05
>>dtwest+1g
Any software developer will tell you how the marketing and sales departments will say or spin ANYTHING they can get away with to sell the product.
◧◩◪
13. toofy+vi[view] [source] [discussion] 2020-06-25 03:21:07
>>dtwest+1g
> Technically the software isn't killing anyone, irresponsible users of it are.

Sure, but at this point, we know how irresponsible users often are; we know this to be an absolute fact. If users' irresponsibility isn't the centerpiece of our conversations, then we're being incredibly irresponsible ourselves.

The material manifestations of how these tools will be used have to remain at the center if researchers place any value whatsoever on our ethical responsibilities.

replies(2): >>jessta+Dn >>dzhiur+up
◧◩◪
14. czbond+el[view] [source] [discussion] 2020-06-25 03:51:53
>>dtwest+1g
You articulated very well what scares me about the next 15 years.

I have written great software, yet it sometimes had bugs or unintended consequences. I cannot imagine how I'd feel if it were to accidentally alter someone's life negatively like this.

◧◩◪◨
15. jessta+Dn[view] [source] [discussion] 2020-06-25 04:21:50
>>toofy+vi
Yep. There are so many psychology studies showing groupthink, people using statements from an authority as a way to remove individual responsibility, and people overriding their own perceptions to agree with an authority.

"I guess the computer got it wrong" is a terrifying thing for a police officer to say.

◧◩
16. demado+Rn[view] [source] [discussion] 2020-06-25 04:23:55
>>nimbiu+x8
So out of curiosity, you just roll out of Costco with a cart full of food and gear while the receipt checker tries to stop you?
replies(2): >>justin+To >>rainco+Vp
◧◩◪
17. danans+ko[view] [source] [discussion] 2020-06-25 04:29:28
>>dtwest+1g
> Technically the software isn't killing anyone, irresponsible users of it are.

It's beyond irresponsibility - it's actively malevolent. There unfortunately are police officers, as demonstrated by recent high-profile killings by police, who will use the thinnest of pretexts, like suspicion of paying with counterfeit bills, to justify the use of brutal and lethal force.

If such people are empowered by a facial recognition match, what's to stop them from similarly using that as a pretext for applying disproportionate brutality?

Even worse, an arrest triggered by a false positive match may be more likely to escalate to violence, because the person being apprehended would be rightfully upset at being wrongly targeted and could appear to be resisting arrest.

replies(2): >>harlan+l81 >>dtwest+lu1
◧◩◪
18. justin+To[view] [source] [discussion] 2020-06-25 04:36:56
>>demado+Rn
I believe in a regular store you can just roll by the security without letting them check your stuff. They can make a citizens arrest or call the cops if they think you stole something, but at great risk of a lawsuit if they are wrong. However, Costco is a private club. You agree to their terms and conditions as a member of that club, and you must abide by the receipt check or they can ask you to leave. That was my understanding of the situation a decade ago, things may have changed.
◧◩◪
19. bryanr+Yo[view] [source] [discussion] 2020-06-25 04:38:34
>>dtwest+1g
>Technically the software isn't killing anyone, irresponsible users of it are.

Irresponsible users, yes, but they are users who are using the software exactly as it was marketed to them.

◧◩◪◨
20. dzhiur+up[view] [source] [discussion] 2020-06-25 04:44:20
>>toofy+vi
Software flies rockets, planes, ships, cars, factories and just about everything else. Yet somehow LE shouldn't be using it because... they are dumb? Everyone else is smart tho.
replies(3): >>toofy+zr >>zumina+yv >>fivre+YB
◧◩◪
21. rainco+Vp[view] [source] [discussion] 2020-06-25 04:48:54
>>demado+Rn
At most Costco can cancel his membership. In other stores, if they find regular abusers, they can get restraining orders. Fry's Electronics used to do that against a few customers.
◧◩◪◨⬒
22. toofy+zr[view] [source] [discussion] 2020-06-25 05:09:19
>>dzhiur+up
Did you respond to the wrong comment? I don’t believe I implied anything close to what you just said.
◧◩◪
23. numpad+Ot[view] [source] [discussion] 2020-06-25 05:38:03
>>dtwest+1g
Maybe software is more like laws; judges generally aren't held guilty for issuing death sentences.
24. 99_00+Yt[view] [source] 2020-06-25 05:40:49
>>ibudia+(OP)
>But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

None of those selling points logically leads to the conclusion that it is the ultimate decision maker.

◧◩◪◨⬒
25. zumina+yv[view] [source] [discussion] 2020-06-25 06:04:44
>>dzhiur+up
If you fly a plane, drive a car, or operate a factory, your livelihood and often your life depend on constantly paying attention to the output of the software and making course-correcting adjustments when necessary. And the software itself often has the ability to avoid fatal errors built in. You rely on it in a narrow domain because it is highly reliable within that domain. For example, your vehicle's cruise control will generally not suddenly brake and swerve off the road, so you can relax your concentration to some extent. If it were only 52% likely to be maintaining your velocity and heading from moment to moment, you wouldn't trust it for a second.

Facial recognition software doesn't have the level of reliability that control software for mechanical systems has. And if a mistake is made, the consequences to the LEO have historically been minimal. Shoot first and ask questions later has been deemed acceptable conduct, so why not implicitly trust the software? If it's right and you kill a terrorist, you're a hero. If it's wrong and you kill a civilian, the US Supreme Court has stated, "Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force." The software provides probable cause; the subject's life is thereby forfeit. From the perspective of the officer, it seems like a no-brainer.

26. yread+Lv[view] [source] 2020-06-25 06:07:09
>>ibudia+(OP)
You see it everywhere with AI and other tools. We overly trust them. Even when doctors have high confidence in their diagnosis, they will accept a wrong AI-recommended conclusion that contradicts it.

https://www.nature.com/articles/s41591-020-0942-0

A bit like with self-driving cars: if it's not perfect, we don't know how to integrate it with people.

replies(1): >>koz_+PE
◧◩
27. jml7c5+8z[view] [source] [discussion] 2020-06-25 06:44:30
>>zamale+Da
Just to be clear, parent is describing fictional software, not the system in the article. You seem to be conflating the two.
replies(1): >>TheSpi+GF
◧◩◪
28. fivre+oB[view] [source] [discussion] 2020-06-25 07:10:59
>>dtwest+1g
> Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Do you work in a commercial software firm? Have you ever seen your salespeople talk with their customer contacts?

The salespeople and marketing departments at the firms that make this technology and target law enforcement markets are, 100%, full stop, absolutely making claims that you can trust the software to have full control over the situation, and you, the customer, should not worry about whether the software should or should not have that control.

Being able to use something "irresponsibly" and disclaim responsibility because AI made the decision is. a. selling. point. Prospective customers want. to. give. up. that. authority. and. that. responsibility.

Making the sort of decisions we ask this shit to make is hard, if you're a human, because it's emotionally weighty and fraught with doubt, and it should be, because the consequences of making the wrong decision are horrific. But if you're a machine, it's not so hard, because we didn't teach the machines to care about anything other than succeeding at clearly-defined tasks.

It's very easy to make the argument that the machines can't do much more, because that argument is correct given what tech we have currently. But that's not how the tech is sold--it becomes a miracle worker, a magician, because that's what it looks like to laypeople who don't understand that it's just a bunch of linear algebra cobbled together into something that can decide a well-defined question. Nobody's buying a lump of linear algebra, but many people are quite willing to buy a magical, infallible oracle that removes stressful, difficult decisions from their work, especially in the name of doing good.

tl;dr capitalism is a fuck. we can pontificate about the ethical use of Satan's toys as much as we like; all that banter doesn't matter much when they're successfully sold as God's righteous sword.

◧◩◪◨⬒
29. fivre+YB[view] [source] [discussion] 2020-06-25 07:18:58
>>dzhiur+up
Were you asleep for all coverage of the 737 MAX MCAS, or the technical failures that contributed to multiple warships casually driving into other ships?

https://features.propublica.org/navy-accidents/uss-fitzgeral...

https://features.propublica.org/navy-uss-mccain-crash/navy-i...

Software allows us to work very efficiently because it speeds work up. It speeds us up just as well when we're fucking things up.

replies(1): >>dzhiur+7R
◧◩
30. koz_+PE[view] [source] [discussion] 2020-06-25 07:48:23
>>yread+Lv
That's interesting. I can imagine in cases like this it's not necessarily that the doctor doubts their own diagnosis, but rather the AI is essentially offering to relieve them of responsibility for it either way.

It's like in human hierarchies - it's often not the person who is more likely to make the best decision who gets to decide, it's the one who is going to bear the consequences of being wrong.

◧◩◪
31. TheSpi+GF[view] [source] [discussion] 2020-06-25 07:54:55
>>jml7c5+8z
Amusing.

At this point facial recognition is fictional software.

◧◩
32. loup-v+aK[view] [source] [discussion] 2020-06-25 08:34:31
>>zamale+Da
I would hope that the software/ML engineers who wrote it know about probability theory, and why the prior probability should be set at 0.0001% or so.

So that if we print 52% on the screen, it means we've already gathered something like 20 bits of evidence (20 coin flips all coming up heads), at which point the suspicion would be real.
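
A minimal sketch of that arithmetic (the prior and the 52% figure come from the comments above; the "bits of evidence" framing is just log-odds):

    import math

    # How much evidence does it take to lift a 1-in-a-million prior to a 52%
    # posterior? Measure it in bits: log2 of the required likelihood ratio.
    prior = 1e-6
    posterior = 0.52

    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    print(math.log2(posterior_odds / prior_odds))  # ~20 bits, like 20 fair coins all landing heads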

◧◩
33. wilson+XN[view] [source] [discussion] 2020-06-25 09:10:47
>>nimbiu+x8
Is this an attitude that is safe for people of all races, though?

Also, can everyone afford to pursue lawsuits?

replies(1): >>strong+Zp1
◧◩◪◨⬒⬓
34. dzhiur+7R[view] [source] [discussion] 2020-06-25 09:40:29
>>fivre+YB
Airbus has been fly-by-wire for over three decades. They did have some issues, but those were solved. So will the 737's be.
◧◩
35. threat+RT[view] [source] [discussion] 2020-06-25 10:08:18
>>zamale+Da
> 99.9999%

Then the justice system would implode. Judicial policy is "software" too, and nobody holds the judiciary or police to that absurd level of excellence, even if we're talking about the death penalty.

replies(2): >>hrktb+5c1 >>filoel+8Z2
◧◩
36. sokolo+f01[view] [source] [discussion] 2020-06-25 11:12:42
>>zamale+Da
It’s not at all obvious to me that the accuracy threshold should scale with city size. Some small town shouldn’t use a system that is 1000x less accurate.
37. air7+T21[view] [source] 2020-06-25 11:36:00
>>ibudia+(OP)
The problem is not with the technology, but with how it's used. A medical test is also not 100% error-proof which is why a professional needs to interpret the results, sometimes conducting other tests or disregarding it completely.

A cop stopping someone that has a resemblance to a criminal for questioning seems like a good thing to me, as long as the cop knows that there's a reasonable chance it's the wrong guy.

◧◩◪◨
38. harlan+l81[view] [source] [discussion] 2020-06-25 12:18:34
>>danans+ko
A former employer recently got a fraudulent restraining order against me. I’m homeless and encounter the police all the time. I consider it a probable contributing factor to my death, which they are almost certainly pleased about. Nobody in any way has ever seen me as violent, but now I am in a national “workplace violence” protection order database, aka. violent and/or unstable. I am homeless and would rather continue my career than fight it. It seems like it could make people with less to lose turn violent. I feel anger and disappointment like never before. (OpenTable is the company, their engineering leadership are the drivers of this).
◧◩◪
39. hrktb+5c1[view] [source] [discussion] 2020-06-25 12:44:23
>>threat+RT
> even if we're talking about the death penalty.

And that's also the core argument for why some countries abolished the death penalty.

◧◩◪
40. bigfud+sd1[view] [source] [discussion] 2020-06-25 12:51:42
>>cortes+Ee
We shouldn't assume those tests and errors are independent (they probably aren't), but you are right that the overall error rate would be inflated.
41. sgt101+5i1[view] [source] 2020-06-25 13:19:51
>>ibudia+(OP)
Worse - an AI decision puts an obligation on the user to follow it. What do I mean? Well, imagine you are a cop: you get an auto flag to arrest someone and use your discretion to override it. The person then goes on to do something completely different; say they were flagged as a murderer but then kill someone while driving drunk. You will be flayed and pilloried. So, basically, safety first: just make the arrest. The secret is that these systems should not be making calls in this kind of context because they just aren't going to be good enough. It's like cancer diagnosis - the oncologist should have the first say, and the machine should be a safety net.
◧◩◪
42. tikima+km1[view] [source] [discussion] 2020-06-25 13:43:38
>>dtwest+1g
The major problem with any solution we have to contend with is the fact that the ratio of appropriate to inappropriate police interactions is unlikely to change regardless of the system or official procedure, so any system that increases the number of police interactions must therefore increase the number of inappropriate police interactions.

Consider that not everyone understands how machine learning, and specifically classification algorithms, work. When a police officer is told the confidence level is above 75%, he's going to think that means a low chance of being wrong. He does not have the background in math to realize that, given a large enough real population being classified via facial recognition, a 75% confidence level is utterly useless.

The reported 75% confidence level is only valid when scanning a population size that is at most as large as the training data set's. However, we have no way of decreasing that confidence level to be accurate when comparing against the real world population size of an area without simply making the entire real population the training set. And none of that takes circumstances like low light level or lens distortion into account. The real confidence of a match after accounting for those factors would put nearly all real world use cases below 10%.

Now imagine that the same cop you have to explain this to has already been sold this system by people who work in sales and marketing. Any expectation that ALL police officers will correctly assess the system's results and behave accordingly fails to recognize that cops are human, and above all, cops are not mathematicians or data scientists. Perhaps there are processes to give police officers actionable information and training that would normally avoid problems, but all it takes is one cop getting emotional about one possible match for any carefully designed system to fail.

Again, the frequency of cops getting emotional, or simply deciding that even a 10% possibility that someone they are about to question might be dangerous is too high a risk, is unlikely to change. So, providing them a system which increases their number of actionable leads, and therefore interactions with the public, can only increase the number of incidents where police end up brutalizing or even killing someone innocent.

◧◩◪
43. strong+Zp1[view] [source] [discussion] 2020-06-25 14:03:16
>>wilson+XN
> Is this an attitude that is safe for people of all races, though?

Yes, it is. Security cannot stop you for bypassing alarms and receipt checks. They have to have definitive proof that you stole something before they can lay a hand on you. Even in membership stores like Costco, the most they can do is cancel your membership. If they do touch you, there are plenty of lawyers who will take your case and only collect payment if you win.

replies(2): >>runako+vr1 >>solotr+gM1
◧◩◪
44. BiteCo+sr1[view] [source] [discussion] 2020-06-25 14:12:27
>>dtwest+1g
> But we shouldn't do that.

The average human sucks at understanding probabilities.

Until we can prove that most people handling this system are capable of smart decision making, which the latest police scandals do not lead us to believe right now, those systems should not be used.

◧◩◪◨
45. runako+vr1[view] [source] [discussion] 2020-06-25 14:12:35
>>strong+Zp1
Theory & Law != What Actually Happens

> If they do touch you, there are plenty of lawyers who will take your case and only collect payment if you win.

This falls squarely into the genre of "yes, you are technically right, but you may have spent a week in jail and thousands to tens of thousands of dollars of time and money to prove it, for which you will not be fully compensated."

replies(1): >>strong+3s1
◧◩◪◨⬒
46. strong+3s1[view] [source] [discussion] 2020-06-25 14:15:51
>>runako+vr1
My point is that anyone can walk by and nothing will likely happen, but if it does, there is some recourse, and the unlikely event is just as unlikely regardless of race.
replies(1): >>jacobu+1N3
◧◩◪◨
47. dtwest+lu1[view] [source] [discussion] 2020-06-25 14:28:13
>>danans+ko
My point was that this technology should not be used as evidence, and should not be grounds to take any forceful action against someone. If a cop abuses this, it is the cop's fault and we should hold them accountable. If the cop acted ignorantly because they were lied to by marketers, their boss, or a software company, those parties should be held accountable as well.

If your strategy is to get rid of all pretexts for police action, I don't think that is the right one. Instead we need to set a high standard of conduct and make sure it is upheld. If you don't understand a tool, don't use it. If you do something horrible while using a tool you don't understand, it is negligent/irresponsible/maybe even malevolent, because it was your responsibility to understand it before using it.

A weatherman saying there is a 90% chance of rain is not evidence that it rained. And I understand the fear that a prediction can be abused, and we need to make sure it isn't abused. But abolishing the weatherman isn't the way to do it.

replies(1): >>danans+DC1
◧◩◪◨⬒
48. danans+DC1[view] [source] [discussion] 2020-06-25 15:13:47
>>dtwest+lu1
> If your strategy is to get rid of all pretexts for police action, I don't think that is the right one.

Not at all.

> Instead we need to set a high standard of conduct and make sure it is upheld

Yes, but we should be real about what this means. The institution of law enforcement is rotten, which is why it protects bad actors to such a degree. It needs to be cleaved from its racist history and be rebuilt nearly from the ground up. Better training in interpreting results from an ML model won't be enough by a long shot.

◧◩◪◨
49. solotr+gM1[view] [source] [discussion] 2020-06-25 16:07:17
>>strong+Zp1
I am pretty sure in most places they can't touch you even if you do steal something.
◧◩◪
50. filoel+8Z2[view] [source] [discussion] 2020-06-25 23:19:25
>>threat+RT
The justice system would implode if half the innocent people strong-armed into taking plea deals (with threats of much harsher sentences if they go to court) chose not to take them. That “software” is already buggy AF and needs some fundamental fixes. Setting a high standard for some crazy new AI stuff is a smaller change than fixing what’s already broken.
◧◩◪◨⬒⬓
51. jacobu+1N3[view] [source] [discussion] 2020-06-26 08:45:14
>>strong+3s1
That’s just false. Brown people get shot in malls for much less.
replies(1): >>strong+e94
◧◩◪◨⬒⬓⬔
52. strong+e94[view] [source] [discussion] 2020-06-26 12:46:34
>>jacobu+1N3
> Brown people get shot in malls for much less

Not by security or police, so my point still stands.

replies(1): >>jacobu+bc4
◧◩◪◨⬒⬓⬔⧯
53. jacobu+bc4[view] [source] [discussion] 2020-06-26 13:08:50
>>strong+e94
https://en.m.wikipedia.org/wiki/Shooting_of_John_Crawford_II...
replies(1): >>strong+Uj4
◧◩◪◨⬒⬓⬔⧯▣
54. strong+Uj4[view] [source] [discussion] 2020-06-26 13:58:33
>>jacobu+bc4
It's not exclusive to brown people, nor does it happen more to brown people.
replies(1): >>jacobu+Hm4
◧◩◪◨⬒⬓⬔⧯▣▦
55. jacobu+Hm4[view] [source] [discussion] 2020-06-26 14:16:21
>>strong+Uj4
https://www.pnas.org/content/116/34/16793/tab-figures-data
replies(1): >>strong+Mr4
◧◩◪◨⬒⬓⬔⧯▣▦▧
56. strong+Mr4[view] [source] [discussion] 2020-06-26 14:43:47
>>jacobu+Hm4
That doesn't separate out the non-justified killings that we're talking about here. Also, there's no indication that race is the primary cause of the killings.
replies(1): >>jacobu+j46
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
57. jacobu+j46[view] [source] [discussion] 2020-06-27 01:17:24
>>strong+Mr4
How could you? Nobody separates out the non-justified killings. That's half the problem. Your world seems much safer and more just than mine.
[go to top]