My suspicion is that this is mostly happening because platforms as big as Google or Twitter rely very heavily on machine learning and other AI-related technology to ban people. Because honestly, the amount of spam and abuse likely happening on these platforms has to be mind-bogglingly high.
So I get why they would try to automate bans.
But after years and years of regular high-profile news of false positives, one would think they would eventually change something.
I mean the guy had direct business with Google going on....
Why would they continue like that? Isn't there a single PR person at Google?
The problem is less the automated bans themselves than the missing human support after you get automatically banned.
If you got banned, went through a reasonably fast human review process, were temporarily reinstated a day later and fully reinstated a few days later, it would be super annoying, comparable to all Google services being down for a day, but nowhere close to the degree of damage it causes now.
And let's be honest, Google could totally afford a human review process, even if they limit it to accounts of a certain age that have been used from time to time (to make it much harder to abuse).
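A minimal sketch of what such a gate might look like, purely hypothetical; the thresholds and field names are made up for illustration:

    from datetime import datetime, timedelta

    # Hypothetical thresholds, not anything Google actually uses.
    MIN_ACCOUNT_AGE = timedelta(days=365)
    MIN_ACTIVE_DAYS_LAST_YEAR = 30

    def eligible_for_human_review(created_at: datetime,
                                  active_days_last_year: int,
                                  now: datetime) -> bool:
        """Gate the (expensive) human appeal queue to accounts that are
        old enough and show some organic usage, so freshly created
        throwaway accounts can't flood it."""
        old_enough = (now - created_at) >= MIN_ACCOUNT_AGE
        actually_used = active_days_last_year >= MIN_ACTIVE_DAYS_LAST_YEAR
        return old_enough and actually_used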
But they are about as interested in this as they are in giving out reasons for why you were banned, because if they did, you might be able to sue them for arbitrary discrimination against people who fall into some arbitrary category, or similar.
What lawmakers should do is require proper reasons to be given for service termination of any kind, without allowing any kind of opt-out.
This is the part I find baffling. Why can’t they take 10 Google engineers’ worth of salaries and hire a small army of overseas customer reps to handle cases like this? I realize that no customer support has been in Google’s DNA since the beginning, but this is such a weird hill to die on.
My best guesses:
1. The number of automated scams/attacks and associated support requests is unbounded, while human labor is bounded, so it's a losing investment.
2. Machine learning is sufficient for attackers to undo the anti-abuse work given even a low false-positive rate from human intervention. Throw small behavioral variants of banned scam/attack accounts at support and optimize for the highest reinstatement rate. This abuse traffic will make up the bulk of what the human reviewers have to deal with (a rough back-of-the-envelope sketch follows the list).
3. They'd probably be hiring a non-negligible percentage of the same people who are running the scams. The risk of insider abuse is untenable.
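To make point 2 concrete: even a small reviewer false-acceptance rate turns into a lot of reinstated abusive accounts once appeals can be filed at scale. A back-of-the-envelope sketch, with all numbers invented for illustration:

    # Illustrative numbers only: appeal volumes, costs and error rates are invented.
    abusive_appeals_per_day = 100_000      # scripted variants of banned scam accounts
    legit_appeals_per_day = 1_000          # real false-positive victims
    reviewer_false_accept_rate = 0.01      # 1% of abusive appeals wrongly reinstated
    reviews_per_agent_per_day = 200
    cost_per_agent_per_day = 150           # USD, outsourced support

    agents_needed = (abusive_appeals_per_day + legit_appeals_per_day) / reviews_per_agent_per_day
    daily_cost = agents_needed * cost_per_agent_per_day
    abusive_reinstated = abusive_appeals_per_day * reviewer_false_accept_rate

    print(f"agents needed:                {agents_needed:,.0f}")
    print(f"daily support cost:           ${daily_cost:,.0f}")
    print(f"abusive accounts reinstated:  {abusive_reinstated:,.0f} per day")
    # With these made-up numbers: ~505 agents, ~$75,750/day,
    # and ~1,000 scam accounts put back in business every day.

The point is not that these specific numbers are right, only that the support queue itself becomes an attack surface.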
This is the first time I've heard someone make this claim. Is there prior evidence of this being a regular occurrence with outsourced customer support operations?