As a dual British/Swedish citizen, I really do not trust the UK government. They have proven, over and over and over, that at every opportunity presented they will increase their own authority. I don’t believe I have personally witnessed any other advanced economy that so ardently marches towards authoritarianism.
So, whether it’s a good idea or not, I can’t in good faith advise giving the UK government more powers. Unfortunately the UK government can sort of just grant themselves more power. So…
[0]: https://e-estonia.com/card-security-risk/
[1]: https://therecord.media/estonia-says-a-hacker-downloaded-286...
I don't get the resistance to a digital/national ID in other countries. To us it is quite bizarre.
Some have explained it with a lack of trust between citizens and the country.
But without such a digital ID it is impossible to have the kind of digital government services we have here. The government services need to verify and authenticate the citizen, so that they only access their own data and not the data of someone who happens to have the same name and birth date.
I don't see how such a system gives the government more powers. It already has all the data on its citizens, but that data is spread out, fragmented, stored in multiple conflicting versions, and some of it probably sits in databases where no one cares about security.
It depends on the country and its relationship with the people. If the people trust that their government represents their interests, there is little push-back. In countries where citizens have reason to believe their government has been hijacked by parties that do not have their best interests at heart, every move is viewed with suspicion.
In this case people are tying Digital ID to CBDCs and social credit systems, which is a reasonable thing to do, given that this is exactly how China uses them to enforce 15-minute cities with checkpoints between them. All citizens' conversations are tracked, their movements are restricted as well [1], and their ability to purchase goods & services is tightly regulated based on their behavior via the social credit system. This is the world that the people pushing back against this are trying to avoid.
Not repaying loans and using credit cards to get cash -> you're probably bad with money -> lenders are unlikely to get their money back from you.
A lot of individuals saw their credit scores decline during the Great Recession, even if they weren’t involved in subprime lending.
This myth that credit scores are entirely due to your own financial decisions is up there with myths people believe about names or time zones.
Saying that a person’s credit score is entirely due to their own financial decisions is indeed overly simplistic, although the main factor is still that person’s behavior (whether that behavior is their fault or not is a different story). It can also depend on circumstances specific to the person but not directly related to their own actions (e.g. their credit provider revises credit limits across the board due to external factors, so their credit utilization changes too, without them having used any more or less of it).
In addition, and this is what you’re alluding to, these models are continuously revised. A set of behaviors and circumstances that leads to a higher score in one economic environment may not do the same in another.
Credit scores as implemented in, for instance, the US are not a direct reflection of a person’s moral character or intended as a reward for good behavior. They’re uncaring algorithms optimized solely for determining how risky it is to lend you money, so that financial institutions can more accurately spread that risk across their customers and maximize their profits. This also enables credit providers to give out more credit overall, based on less biased criteria (not unbiased, because models are never perfect and financial circumstances can be proxies for other attributes).
One can feel however one wants about whether this system is good or not. But it’s definitely different in kind to “social credit” systems like the one China has implemented, which directly takes into account far more non-financial factors and determines far more non-financial outcomes, effectively exerting much more control over many facets of people’s lives.
This is the whole crux of the situation, so burying it in a disclaimer misses the point.
Every lender and background investigator I’ve ever interacted with has treated the credit score as a social credit marker, but sure, your mileage might vary.
> They’re uncaring algorithms optimized solely for determining how risky it is to lend you money, so that financial institutions can more accurately spread that risk across their customers and maximize their profits.
This is a fallacy; algorithms are “uncaring” in an anthropomorphic sense, yes, in that they lack a psychological capacity to care, but their designers are very much not, as you admit in the very next sentence.
> But it’s definitely different in kind to “social credit” systems like the one China has implemented, which directly takes into account far more non-financial factors and determines far more non-financial outcomes, effectively exerting much more control over many facets of people’s lives.
We entirely disagree on this point. Probably because we have different definitions of “non-financial factors” and “non-financial outcomes.”
It maybe doesn’t address the point you’re interested in, but it doesn’t miss the point I was making: that the goals and mechanisms revolve around how well a person manages credit. For the credit provider everything else is secondary or irrelevant, including whether it’s because you’ve made poor decisions or because external factors have screwed you over.
> Every lender and background investigator I’ve ever interacted with has treated the credit score as a social credit marker, but sure, your mileage might vary.
This is probably the crux of why we’re not on the same page, because I don’t understand what this means. I’m genuinely asking: what do you mean when you say that they treated it as a social credit marker? What business did you have with them (or they with you) that didn’t involve whether or not to extend credit? What does the term “social credit marker” mean to you?
> This is a fallacy; algorithms are “uncaring” in an anthropomorphic sense, yes, in that they lack a psychological capacity to care, but their designers are very much not, as you admit in the very next sentence.
I don’t see how you’ve shown that it’s a fallacy, and I don’t think it is, but I concede that it was a confusing word choice; I should probably have just omitted the word “uncaring”. My point was, once again, that their sole goal is determining the risk of extending a person credit; whether that would be a nice or moral thing to do doesn’t factor into it.
> We entirely disagree on this point. Probably because we have different definitions of “non-financial factors” and “non-financial outcomes.”
I assume here that you mean that people’s financial status, including their access to credit, determines a lot of aspects of their lives, too (correct me if I’m wrong). I don’t think any reasonable person disagrees with that. I do however think that you underestimate how constraining it can be when additional variables are factored in to more directly control what you are and aren’t allowed to do, and how.