In this context, why is it ethical for Microsoft to build the US military a "war planning and operations" cloud as part of the JEDI contract?
In what reality is selling facial-recognition technology to police somehow less ethical than making the US military, in their own terms, "more lethal"?
If Microsoft rejects the police for being human rights abusers, they should do the same for the US military, which has regularly violated human rights around the world.
One can easily argue that many of the US's current problems stem from getting that blend wrong, but that doesn't obviate the idea that some tech should go to one but not the other.
Killing people is really no different anywhere in the world. They're ok with their technology being used to kill, just so long as it's only people in certain places in the world and done by a certain group. What difference is it to me if I'm shot by a cop or a soldier? Either way I'm being shot by someone, likely far better armed and armoured than myself, at the behest of the government. Who cares whether their uniform is blue or green?
Legally speaking it most certainly is different, otherwise cops, soldiers, and criminals who kill would all be treated the same way. Morally... perhaps it's a grey area, but one thing I'm sure you'd agree with is that there's a massive difference between a military conflict between armed parties and a conflict between police and anyone from unarmed bystanders to criminals. There's a common-sense principle of proportionality that hasn't been observed for too long.
Jack the Ripper wasn't a surgeon because "cutting is really no different anywhere in the world".