In this context, why is it ethical for Microsoft to build the US military a "war planning and operations" cloud as part of the JEDI contract?
In what reality is selling facial-recognition technology to police somehow less ethical than making the US military, in its own words, "more lethal"?
If Microsoft rejects the police for being human rights abusers, they should do the same for the US military, which has regularly violated human rights around the world.
One can easily argue that many of the US's current problems stem from getting that blend wrong, but that doesn't obviate the idea that some tech should go to one but not the other.
Killing people is really no different anywhere in the world. They're OK with their technology being used to kill, just so long as it's only people in certain places in the world and done by a certain group. What difference does it make to me if I'm shot by a cop or a soldier? Either way I'm being shot by someone, likely far better armed and armoured than myself, at the behest of the government. Who cares whether their uniform is blue or green?