At that level, "percentage" is an insufficient measure. You want "permillionage", or maybe more colloquially "DPM" for "Defects Per Million", or even "DPB" for "Defects Per Billion".
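For concreteness, a minimal sketch of the conversion (all numbers here are made up for illustration):

    def defects_per_million(defects, total):
        # Convert a raw defect count into Defects Per Million (DPM).
        return defects / total * 1_000_000

    # Hypothetical: 20,000 wrongly-banned accounts out of 2 billion.
    print(defects_per_million(20_000, 2_000_000_000))  # 10.0 DPM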
You'll still get false positives though, so you provide an appeal process. But what's to prevent the bad actors from abusing the appeal process while leaving your more clueless legitimate users lost in the dust?
(As the joke goes: "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists" [1])
Can you build any vetting process, and associated appeal process, that successfully keeps all the bad actors out, and doesn't exclude your good users? What about those on the edge? Or those that switch? Or those who are busy, or wary?
There's a lot of money riding on that.
[1] https://www.schneier.com/blog/archives/2006/08/security_is_a...
Not that I disagree with your point, but even if we assume 50 billion accounts (6+ for every human on earth), 0.001% of that would still be 'just' 500k, not millions.
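Spelled out, using the hypothetical account count above:

    accounts = 50_000_000_000  # hypothetical 50 billion accounts
    rate = 0.001 / 100         # 0.001% expressed as a fraction
    print(accounts * rate)     # 500000.0: hundreds of thousands, not millions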
For those that don't know, phone companies are susceptible to SIM-swapping attacks, which can make it easy for an attacker to intercept SMS 2FA codes: https://news.ycombinator.com/item?id=22016212
Edit: looks like OP changed their entire comment while I was replying.
No; if you enforce your policies strictly via (machine learning) algorithms, a ban could just be a matter of the algorithm misinterpreting a different language, slang, irony, or something else. Which makes these bans even more infuriating.
Fixed
One thing I believe Microsoft gets right is that suspensions are isolated to the service whose TOS was violated; i.e., violating the Hotmail TOS doesn't suspend you from their other services. I think this makes the impact of a false positive less catastrophic, while still removing actual problematic users from the service. This may be an artifact of how teams work together at Microsoft.
It's largely what made Facebook forcing Oculus users to use Facebook accounts so ass-backwards.
If every action taken against an account by automation is appealed, then the automation becomes worthless.
In gaming forums that are run by the developer, such as the World of Warcraft or League of Legends forums, I have very frequently seen people whining and complaining that their accounts were banned for no reason until a GM or moderator finally pipes in and posts chat logs of the user spamming racial slurs or some other blatant violation of ToS.
At Google's scale and profitability, saying you can't build an appeals process that supports your paying users is just ridiculous. And at this point the collateral damage to Stadia's already tenuous reputation is going to be a lot more than paying someone to vet him manually.
Problem is: can we cultivate machine learning intelligence to be as good as some of the best human arbiters?
It may be an artifact of Microsoft actually being regulated for monopolistic practices.
Why do banks have heavy compliance costs? Doing proper AML and KYC costs money, and society decided that it was critical enough to bear that cost, even in lightly regulated countries.
A lot of the financial success of those companies is partly the result of not fully taking responsibility for the consequences of their business activity. Eventually they will, whether under the kind of social pressure that this post's success represents, or by law.
Dividing a problem by 10 should get notice. By 100 (e.g., Bloom filters), respect. By 1000, accolades. Dividing a problem by infinity should be recognized for what it is: a logic error, not an accomplishment.
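To make the "by 100" Bloom filter case concrete: a Bloom filter answers "definitely absent" or "possibly present", so only the rare maybes pay for the expensive authoritative check. A minimal sketch, with invented sizes (not anyone's production design):

    import hashlib

    class BloomFilter:
        # Tiny Bloom filter: no false negatives, tunable false-positive rate.
        def __init__(self, size_bits, num_hashes):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8 + 1)

        def _positions(self, item):
            # Derive num_hashes independent bit positions for the item.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item):
            # False means definitely absent; True means "check for real".
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter(size_bits=1_000_000, num_hashes=7)
    bf.add("known-bad-actor@example.com")
    print(bf.might_contain("known-bad-actor@example.com"))  # True
    print(bf.might_contain("innocent-user@example.com"))    # almost surely False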
Most times when I'm trying to learn someone else's process instead of dictating my own, I'm creating lists of situations where the outcomes are not good. When I have a 'class', I run it up the chain, with a counter-proposal of a different solution, which hopefully becomes the new policy. Usually, that new policy has a probationary period, and then it sticks. Unless it's unpopular, and then it gets stuck in permanent probation. I may have to formally justify my recommendation, repeatedly. In the meantime I have a lot of information queued up waiting for a tweak to the decision tree. We don't seem to be mimicking that model with automated systems, which I think is a huge mistake that is now verging on a self-inflicted wound.
Perhaps stated another way: classifying a piece of data should result in many more actions than are visible to the customer, and only a few classifications should result in a fully automated action. The rest should organize the data in a way that expedites human intervention, either by priority or bucket. I could have someone spend Tuesday afternoons granting final dispensations on credit card fraud, and every morning looking at threats of legal action (priority and bucket).
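A minimal sketch of that split; the class names, queue names, and threshold are all invented:

    # Hypothetical triage: only a few high-confidence classes act
    # automatically; everything else is bucketed for human review.
    AUTO_ACTION = {"confirmed_spam_botnet"}
    HUMAN_QUEUES = {
        "legal_threat": "every_morning_review",
        "card_fraud": "tuesday_afternoon_review",
    }

    def route(classification, confidence):
        if classification in AUTO_ACTION and confidence > 0.99:
            return "act_automatically"
        # Everything else is organized, by priority or bucket, for a human.
        return HUMAN_QUEUES.get(classification, "default_human_queue")

    print(route("card_fraud", 0.97))    # tuesday_afternoon_review
    print(route("legal_threat", 0.50))  # every_morning_review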
I worked there for more than a decade. The settlement changed behavior - you thought about how to avoid future trust-like behavior.
If a huge amount of wealth is created, 90% of it is captured, and the vast majority of that is distributed via share price/dividends, then increasing inequality can really fuck up society even while GDP rises.
End users don't want to run their own spam and moderation filters, but they definitely do want them to exist.
I'd use the total number of false positives as the proper measure.
Taken to its logical conclusion, when everything is automated, the people who own the automation don't actually need the rest of the population at all - it becomes redundant. Of course, the "redundant" population might have different ideas about itself...
Doesn't matter. If you're dealing with billions of accounts then you're earning billions of dollars. Just hire more people. Scale must never be an excuse for poor customer service.
From the user's perspective, it's still a pretty good deal. There's a 99.999% chance that you get to use gmail/youtube/etc for free. And a 0.001% chance that you'll end up a statistic, and need to pay a nominal fee for an appeal.
Unfortunately, I don't think the above will ever happen, because it would be a PR nightmare: "Google wants to charge you money, just to appeal a ban!" It's still better than the status quo, where people have almost no recourse when they are banned. But it sounds way better in the media if you just pretend these things never happen. Hence the status quo: use automated systems to cheaply get to a 99.999% success rate, and spend as little money as possible on the remaining 0.001%.
The Telcos never signed up to being a "secure verification code provider". Almost a decade ago, the local Telco industry group told us all:
"SMS is not designed to be a secure communications channel and should not be used by banks for electronic funds transfer authentication,"
https://www.itnews.com.au/news/telcos-declare-sms-unsafe-for...
Any company that uses SMS for 2FA is offloading risk and security to an industry that never expected it, and explicitly seeks to not provide it.
A Telco _desperately_ wants to be able to get you back up and running (making calls and spending money) on a new phone using your existing number before you walk out of the shop. And even more, they want to be able to transfer you across as a customer from a competitor - and have your existing number work on their network.
"Sim Swapping" is a valuable feature for Telcos. They have significant negative incentives to make it difficult. They don't want to secure your PayPal account, and nobody (least of all PayPal) should expect them to do a good job of it, certainly not for free...
And if companies don't want to do it, that should be easy to regulate: require a human-centric appeal process, even if it has a fee, and prohibit blanket account bans (so that getting banned on Gmail doesn't affect your Android and Play Store accounts, for example).
There are other provisions I consider important, like not being able to reuse email addresses, and requiring the forwarding of email for at least 6 months after any account termination (being banned from your email address can have disastrous consequences).
The answer is to force Google, through regulation, to be open and more transparent, and to scale up to deal with it, even if that eats into their profits.
The assumption up front should not be that we need to care about protecting their profits.
They probably made a TON of money off of that, and off the credit protection services they offer directly or through subsidiaries.
Google has billions of accounts because it is FREE to create them. Which could mean the cost of providing human support is actually too expensive on a per-unit basis. The only way to rectify these economics is to charge for the account.
I pay for Google One to store more photos... however, I have no clue if this improves my situation. Does the algorithm cut me more slack for being a long-time, paying user? Do I get real customer support in the event I do get flagged? No clue.
I've had a problem with my Amazon account for years now, after Amazon billed me (on my seller account) for something they shouldn't have.
After I complained, they agreed to refund it. Except the refund never arrived.
Asked many times over the years "WTF?", and someone always promises to look into it after agreeing they can see the problem.
Never to be heard from again. Same pattern has happened every single time (many times). Obviously, something about it puts it in the "too hard" basket... :/
Needless to say, I don't use Amazon's services much at all any more unless required for job purposes. And I steer people away from AWS for the same reason too.
Yes, it's pretty simple. Create and enforce some consumer protection laws which require, for example, that any company larger than a certain size establish support offices staffed by humans in every major town, and resolve every issue within X days, either by fixing the problem or clearly documenting why not. If not, no arbitration allowed, so they are subject to lawsuits if the reason doesn't hold up to scrutiny.
Problem solved. Companies like goog, facebook, et al. can easily afford this, and it'll stop this ridiculous behavior.
It also to some extent protects the companies. Spambots who create a million accounts can't replicate a million humans to show up at the support office, so it establishes a human:human relationship that's completely missing today.
It need not be, as long as the fee is less than the cost. It could be symbolic (say $1). But the problem is that it would be seen as a revenue generator whether it is or not.