This is nothing new. It is all about what is reasonable in the circumstances.
The Met have already lied about the scale of false positives[0] by nearly 1000x, and it's not obvious how much better it will get. With the current tech, the false-positive rate will get worse as more faces are looked for. If it's only looking for (I'm guessing) a thousand high-risk targets now and the rate is 1/40, then as more and more faces get searched for, this problem compounds as the risk of feature collisions rises.
Of course, it'll also disproportionately affect ethnic groups that are over-represented in this database, making life for honest members of those groups even more difficult than it already is.
The scale is what makes it different. The lack of accountability for the tech and the false confidence it gives police is what makes it different.
[0]: The Met's claim was 1 in 33,000 false positives; the actual rate was 1 in 40, according to this article from last year: https://www.bbc.com/news/technology-69055945
The article does not claim this:
"The Metropolitan Police say that around one in every 33,000 people who walk by its cameras is misidentified.
But the error count is much higher once someone is actually flagged. One in 40 alerts so far this year has been a false positive"
These are two different metrics measuring two different things, so they can both be correct at the same time. But I must say I'm not clear on exactly what each means.
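As far as I can tell, the two figures differ only in their denominator: one counts misidentifications per person walking past the cameras, the other per alert raised. A toy calculation shows how both can hold at once. The passerby count below is invented for illustration; only the 1/33,000 and 1/40 ratios come from the quote.

```python
# Hypothetical illustration: the passerby count is assumed, not from the article.
passersby = 1_000_000

# Metric 1 (Met's figure): roughly 1 in 33,000 passersby is misidentified.
false_alerts = passersby / 33_000      # ~30 false alerts

# Metric 2 (article's figure): 1 in 40 alerts is a false positive.
# If that holds, total alerts must be 40x the false ones.
total_alerts = false_alerts * 40       # ~1,212 alerts in total

# Both statements describe the same ~30 false alerts, with different denominators:
print(f"per-passerby rate: 1 in {passersby / false_alerts:,.0f}")
print(f"per-alert rate:    1 in {total_alerts / false_alerts:,.0f}")
```

The per-passerby rate sounds tiny because almost nobody walking past triggers an alert at all; the per-alert rate is the one that matters once police act on a match.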