But Google (and Facebook, and probably some other companies) don't have reasonable processes for disputing or resolving these situations.
Some have argued that we should consider the scale of Google's challenge: an enormous volume of users and activity that needs to be monitored and policed. The assumption is that Google could not afford to handle this "reasonably" with humans instead of automated systems because the volume is too high.
But Google certainly could hire and train humans to follow a process for reviewing these cases and helping to resolve them. They don't. And it is doubtful that they cannot afford to; I haven't checked their annual report lately, but I'm guessing they still turn a healthy profit.
In the unlikely event that involving more humans really would be too expensive, Google should raise their prices (or stop giving so much away for free).
To summarize, there is no excuse for Google to operate this way. They do because they can, and because the damage still falls into the "acceptable losses" column.
It's almost like they could, I don't know, have some AI ethics researcher who could explain to them the pitfalls of letting a bunch of programmers act like their algos are infallible, and suggest ways to avoid them.
Nah, just kidding. Then they sack her for being an uppity black lady who won't just churn out reports saying Google is perfect, because her work hurts the feelings of the programmers and their managers.