The fact that face recognition algorithms perform worse on darker-skinned faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian, i.e., someone more likely to seek legal recourse?
If so, it's incumbent on the software vendors to ensure that less-than-certain matches aren't shown to the user at all. American police can't generally be trusted to understand nuance or to do the right thing.