The important question, the only important question IMHO, is how they handle positives. Do they go in all guns blazing and arrest the person on the spot? Or do they take a restrained approach and first politely ask the person whether they have any ID, etc.? That's the important bit.
What if your child falls victim to a false identification? Given that children are far less likely than adults to carry any form of ID, they could be stuck for much longer.
Do you trust the British police to take good care of your child? Or will they strip-search her and threaten her with arrest like they did with the then-15-year-old Child Q because they decided that she "smelled of weed"?
Do you really want more unnecessary interactions with the police for yourself or those you care about when your "suspicious behaviour" was having an algorithm judge that your face looked like someone else's?
> This is nothing new. It is all about what is reasonable in the circumstances.
The Met have already lied about the scale of false positives[0], understating it by a factor of over 800, and it's not obvious how much better it will get. With the current tech, the rate will only get worse as more faces are looked for. If the system is only watching for (I'm guessing) a thousand high-risk targets now and the rate is already 1/40, then every face added to the watchlist raises the risk of feature collisions, and the chance that an innocent passer-by falsely matches someone on the list climbs toward certainty (rough sketch below).
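To make that scaling concrete, here's a back-of-the-envelope sketch. Everything in it is an assumption for illustration: the per-comparison false-match rate (1e-5 here is made up, not a published figure), the watchlist sizes, and the independence of comparisons. Real errors cluster around similar-looking faces, so treat the shape of the curve, not the exact numbers, as the point.

    # Toy model: each scanned face is compared against all N faces on the
    # watchlist; p is an assumed per-comparison false-match rate and the
    # comparisons are treated as independent (both simplifications).
    def p_false_alert(p: float, watchlist_size: int) -> float:
        # P(at least one false match) = 1 - P(every comparison rejects)
        return 1.0 - (1.0 - p) ** watchlist_size

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} faces on list -> {p_false_alert(1e-5, n):.4f}")
    # ~0.0100, ~0.0952, ~0.6321: roughly linear at first, then
    # saturating toward certainty as the watchlist grows.

The direction is what matters: a bigger watchlist means more chances for any one innocent face to collide with it, so per-person accuracy degrades even if the underlying model never changes.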
Of course, it'll also disproportionately affect ethnic groups that are overrepresented in that database, making life for honest members of those groups even more difficult than it already is.
The scale is what makes it different. The lack of accountability for the tech and the false confidence it gives police is what makes it different.
[0]: The Met claimed a false-positive rate of 1 in 33,000; the actual rate was 1 in 40, according to this BBC article from last year: https://www.bbc.com/news/technology-69055945