Allowing apps to say "we only run on Google's officially certified unmodified Android devices" and tightly restricting which devices are certified is the part that makes changes like this deeply problematic. Without that, non-Google Android versions are on a fair playing field; if you don't like their rules, you can install Graphene or other alternatives with no downside. With Play Integrity & attestation though you're always living with the risk of being cut off from some essential app (like your bank) that suddenly becomes "Google-Android-Only".
If Play Integrity went away, I'd be much more OK with Google adding restrictions like this - opt in if you like, use alternatives if you don't, and let's see what the market actually wants.
I think this is mainly just an attempt to kill things like NewPipe.
Play Integrity is not compliant with any antitrust legislation; that much is painfully obvious. The sole purpose of this system is to remove non-Google Android forks.
There are a lot of scams targeting vulnerable people and these days attacking the phone is a very "easy" way of doing this.
Now perhaps there is a more forgiving way of implementing it though. So your phone can switch between trusted and "open" mode. But realistically I don't think the demand is big enough for that to actually matter.
Even with Play Integrity, you should not trust the client. Devices can still be compromised, there are still phony bank apps, there are still keyloggers, etc.
With the Web, things like banks are sort of forced to design apps that do not rely on client trust. With something like play integrity, they might not be. That's a big problem.
It’s funny to see the volume of comments on HN from folks who are outraged at how AI companies ferociously scrape websites, and the comments disliking device attestation, and few comments recognizing those are two sides of the same coin.
Play Integrity (and Apple’s PAT) are what allow mobile users to have fewer headaches than desktop users. Not saying it’s a morally good thing (tech is rarely moral one way or the other), just that it’s a capability with both upsides and downsides for both typical and power users.
Play integrity hugely reduces brute force and compromised device attacks. Yes, it does not eliminate either, but security is a game of statistics because there is rarely a verifiably perfect solution in complex systems.
For most large public apps, the vast majority of signin attempts are malicious. And the vast majority of successful attacks come from non-attested platforms like desktop web. Attestation is a valuable tool here.
The benefits may not be sufficient to offset the harms you see, but if you don’t understand how and why these capabilities are used by services, I’m also suspicious you understand the harms accurately.
Betting on Play Integrity to solve that is betting that devices will become more expensive in the future, but it's quite obvious that the opposite is happening: they are getting cheaper and cheaper.
I really wish I wouldn't need to have my money managed by some corporate drones in suits but it's really hard these days to do without a bank account.
This is why I was really into crypto at the beginning; it envisioned giving us back control over what's ours. But all the KYC crap and the speculators' wishes for more oversight basically made crypto the same nasty deal as the public banking sector.
As for compromised devices, assuming you mean an evil maid, Android already implements secure boot, forcing a complete data wipe when the chain of trust is broken. I think the number of scary warnings is already more than enough to deter a clueless "average user", and there are easier ways to phish the user.
This reminds me of providers like Xiaomi making it harder to unlock the bootloader due to phones being sold as new but flashed with a compromised image.
Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient. The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts, against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.
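To make the "necessary but not sufficient" point concrete, here is a minimal Kotlin sketch (thresholds invented purely for illustration) of the per-IP rate limiting most login systems lean on, and why a botnet that sends only two or three attempts per IP sails straight under it:

```kotlin
import java.time.Instant
import java.util.concurrent.ConcurrentHashMap

// Minimal per-IP rate limiter sketch. Thresholds are illustrative, not a recommendation.
class PerIpRateLimiter(
    private val maxAttempts: Int = 20,          // allowed attempts per window
    private val windowSeconds: Long = 3600      // one-hour window
) {
    private data class Window(val start: Instant, val count: Int)
    private val windows = ConcurrentHashMap<String, Window>()

    /** Returns true if this login attempt should be allowed through. */
    fun allow(ip: String, now: Instant = Instant.now()): Boolean {
        val updated = windows.merge(ip, Window(now, 1)) { old, _ ->
            if (now.epochSecond - old.start.epochSecond > windowSeconds)
                Window(now, 1)                  // window expired: start counting again
            else
                old.copy(count = old.count + 1)
        }!!
        return updated.count <= maxAttempts
    }
}

fun main() {
    val limiter = PerIpRateLimiter()
    // A slice of a credential-stuffing botnet: each node sends only 2-3 attempts
    // with pre-phished passwords, so no single IP ever exceeds the threshold.
    val botnet = (1..100_000).map { "10.${it / 65536}.${(it / 256) % 256}.${it % 256}" }
    val blocked = botnet.count { ip -> (1..3).any { !limiter.allow(ip) } }
    println("IPs blocked out of 100k: $blocked")   // prints 0: the limiter never fires
}
```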
What does work is investigating the app doing the logging in. Attackers are often CPU and RAM constrained because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices. The actual compute is happening elsewhere. The ideal situation from an attacker's perspective is a site that is only using server side rate limiting. They write a nice async bot that can have tens of thousands of HTTP requests in flight simultaneously on the developer's desktop which just POSTs some strings to the server to get what they want (money, sending emails, whatever).
Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.
The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.
Really? Because they've been fine without this feature on desktop for literally decades.
In other words, there aren't many banks that let you take sensitive actions with just a browser and that's been true since the start of online banking.
These days they also apply differential risk analysis based on the device used to submit a transaction and do things to push people towards mobile. For instance in Switzerland there's now a whole standard for encoding invoices in QR codes. To pay those you must use the mobile apps.
Edit: people are getting hung up on the "never accepted browsers" part. It means they only use the browser for unimportant interactions. For important stuff like login or tx auth, they expect the use of separate hardware that's more controlled like a SIM card/mobile radio, smartcard or smartphone app. Yes some banks are more lax than others but in large parts of the world this was always true since the start of online banking.
I guess the smartcard reader is equivalent. But my point is that locking down the OS of the phone is sufficient to establish client trust but not necessary. You should always be allowed to run the app without strong Play Integrity verification but then just be required to scan your hardware token with NFC in every authentication and authorization flow.
You need to attest at least the kernel, firmware, graphics/input drivers, window management system etc because otherwise actions you think are being taken by the user might be issued by malware. You have to know that the app's onPayClicked() event handler is running because the human owner genuinely clicked it (or an app they authorized to automate for them like an a11y app). To get that assurance requires the OS to enforce app communication and isolation via secure boundaries.
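For reference, this is roughly what the app-side half of that looks like with the classic Play Integrity request: the client just fetches an opaque token bound to a server nonce and forwards it, and all trust decisions happen on the backend after Google decodes the token. A hedged Kotlin sketch, where the helper functions and nonce handling are placeholders and not part of the real API; `IntegrityManagerFactory` and `IntegrityTokenRequest` are the actual Play Core classes:

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Sketch: attach a Play Integrity token to a sensitive request (e.g. from onPayClicked).
fun onPayClicked(context: Context) {
    val nonce = fetchNonceFromServer()          // must be a single-use, server-issued value
    val integrityManager = IntegrityManagerFactory.create(context)

    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce)                // binds the verdict to this request
                .build()
        )
        .addOnSuccessListener { response ->
            // Opaque, encrypted token; only the backend (via Google's API) can decode it.
            sendPaymentWithToken(response.token())
        }
        .addOnFailureListener { e ->
            // No token: the backend should treat this attempt as unattested.
            showError(e)
        }
}

// Hypothetical helpers standing in for your own networking/UI code:
fun fetchNonceFromServer(): String = "placeholder-nonce"   // not a valid nonce; fetch a real one from your backend
fun sendPaymentWithToken(token: String) { /* POST the payment together with the token */ }
fun showError(e: Exception) { /* surface the failure to the user */ }
```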
My bank does still allow login and txns to be authorized with a smart card reader. You have to type in fragments of the account number to authorize a new recipient. After that you can send additional transactions to that account without hardware auth.
Pure NFC tokens don't work because you need trusted IO.
When I started online banking I used a browser and a TAN list for years. No apps required.
What are you talking about? My bank accepts browsers and is a major one.
Imagine if this was done for desktop computers before we had smartphones. That's just crazy.
Relying on hardware-bound keys is fine, but then the scope of the hardware and software stack that needs to be locked down should be severely limited to dedicated, external hardware tokens. Having to lock down the whole OS and service stack is just bad design, plain and simple, since it prioritizes control over freedom.
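Even staying on the phone, the locked-down scope can be far narrower than attesting the whole OS: Android's hardware-backed keystore can bind a key to the secure element and attest just that key. A hedged Kotlin sketch follows; the alias and challenge handling are placeholders, whether a bank would accept this in place of Play Integrity is an assumption, and note that the attestation chain does also report boot state, so it is not fully decoupled from the OS either.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore

// Sketch: generate a hardware-backed EC signing key whose attestation certificate
// chain a server can verify, instead of attesting the whole OS image.
// "bank_login_key" and serverChallenge are placeholders.
fun createAttestedKey(serverChallenge: ByteArray): List<ByteArray> {
    val spec = KeyGenParameterSpec.Builder(
        "bank_login_key",
        KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
    )
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAttestationChallenge(serverChallenge)   // server-provided, anti-replay
        .setUserAuthenticationRequired(true)        // key unusable without PIN/biometric
        .build()

    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").apply {
        initialize(spec)
        generateKeyPair()                           // private key never leaves the hardware
    }

    // The cert chain (rooted in a hardware/Google CA) is what the server inspects.
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    return keyStore.getCertificateChain("bank_login_key").map { it.encoded }
}
```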
But just the fact that there are options which have the side effect of making you choose between convenience and digital autonomy is wrong, and I don't think remote attestation should even exist in the toolbox. We should make dedicated hardware solutions work better instead.
There is even government regulator pressure now for financial services to be liable for cases where the user legitimately authorizes a transaction to a party that turns out to be a scammer. Of course the banks want to watch your every move and control your devices. They would be stupid not to given the incentives.
- I can't transfer a single cent if I haven't had my face and documents scanned after installing the bank app.
- I can't have the same bank account logged in two of my devices at the same time, all banks require you to use an account on a "verified" device (previous point).
- If I want to use a desktop to access my bank account, I have to either install a desktop client provided by the bank or be limited to just checking my balance. Some banks don't even allow you to log in if you don't have a "verified" device for doing 2FA.
I am very sure my higher-ups are cheering at this news, even though it solves none of the problems.
Yeah, I see this mentality a lot on HN (and kinda everywhere for that matter). "Anyone who disagrees with me is evil, and must therefore have evil motives for everything they're doing. The reasonable/innocent explanation they give for why they're doing this must actually be a front for this other shadowy, nefarious motivation that I just made up on the spot, because surely nobody ever does bad things for good reasons. Certainly not those evil people who disagree with me!"
I hate having to defend Google here, because I think this is genuinely a terrible, freedom-destroying move, but malware on Android is a real problem (especially in Brazil, Indonesia, Singapore, and Thailand, where they're rolling this out initially) and this probably will do a lot to solve it. I'm just categorically against the whole idea of taking away the freedom of mentally sound adults "for their own good" regardless of whether it works or not, and this particular case is especially maddening because I'm one of those adults whose freedom is being destroyed.
Play Integrity's highest level of attestation requires devices to be running a security update that falls within a sliding one-year window.
LOTS of Android devices have not received a security update in many, many years. This forces users to unnecessarily upgrade to higher-end OEMs.
Google is effectively pushing out Xiaomi, Huawei, and many others that offer excellent budget options. Google is not just offering you the comfort of not having to fill out CAPTCHAs on your phone, most importantly they are playing monopoly.
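For context on where that one-year window bites, the decoded Play Integrity verdict exposes tiered values under deviceRecognitionVerdict, and backends gate on them. A hedged Kotlin sketch; the policy choices are invented, and the verdict strings follow the documented tiers as I understand them:

```kotlin
// Sketch of a backend policy over a decoded Play Integrity verdict.
// The decoded payload (obtained via Google's decodeIntegrityToken endpoint) contains,
// among other things, deviceIntegrity.deviceRecognitionVerdict with values like
// MEETS_BASIC_INTEGRITY, MEETS_DEVICE_INTEGRITY, MEETS_STRONG_INTEGRITY.
// The policy below (what to allow at each tier) is invented for illustration.

enum class LoginPolicy { ALLOW, STEP_UP, BLOCK }

fun policyFor(deviceRecognitionVerdict: Set<String>): LoginPolicy = when {
    // STRONG requires hardware-backed attestation AND a recent security patch,
    // which is exactly what pushes older-but-otherwise-fine devices out.
    "MEETS_STRONG_INTEGRITY" in deviceRecognitionVerdict -> LoginPolicy.ALLOW
    "MEETS_DEVICE_INTEGRITY" in deviceRecognitionVerdict -> LoginPolicy.STEP_UP
    "MEETS_BASIC_INTEGRITY" in deviceRecognitionVerdict -> LoginPolicy.STEP_UP
    else -> LoginPolicy.BLOCK
}

fun main() {
    // A perfectly healthy device whose OEM stopped shipping patches a year ago:
    println(policyFor(setOf("MEETS_BASIC_INTEGRITY", "MEETS_DEVICE_INTEGRITY")))   // STEP_UP
    // A recent flagship:
    println(policyFor(setOf("MEETS_BASIC_INTEGRITY", "MEETS_DEVICE_INTEGRITY",
                            "MEETS_STRONG_INTEGRITY")))                            // ALLOW
}
```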
Except it's not a seatbelt, it's a straitjacket with a seatbelt pattern drawn on it: it restrains the user's freedom in exchange for the illusion of security.
And like a straitjacket, it's imposed without user consent.
The difference from a straitjacket is that there's no doctor involved to determine who really needs it as protection against their own weakness, and no due process to put boundaries on its use; it's applied to everyone by default.
2. It does not eliminate any meaningful types of fraud. Phishing still works, social engineering still works, stealing TOTP codes still works.
Ultimately I don't need to install a fake app on your phone to steal your money. The vast, vast majority of digital bank fraud is not done this way. The vast majority of fraud happens within real bank apps and real bank websites, in which an unauthorized user has gained account access.
I just steal your password or social engineer your funds or account information.
This also doesn't stop check fraud, wire fraud, or credit card fraud. Again - I don't need a fake bank app to steal your CC. I just send you an email linking to a bad website, you put in your CC - phishing.
How is the attacker supposed to bruteforce anything with 2-3 login attempts?
Even if 1M nodes submitted 10 login attempts per hour each, they would only be able to try about 7 billion passwords per month against a single account, which is ridiculously low for brute-forcing even moderately secure passwords (let alone that there's definitely something to be done on the back-end side of things if you see one particular account getting millions of login attempts per hour from different IPs…).
So I must have misunderstood the threat model…
Play integrity is just DRM. DRM does not prevent the most common types of attack.
If I have your password, I can steal your money. If I have your CC, I can post unauthorized transactions.
Attestation does not prevent anything. How would attestation prevent malicious login attempts? Have you actually sat down and thought this through? It does not, because that is impossible.
The vast, vast VAST majority of exploits and fraud DO NOT come from compromised devices. They come from unauthorized access, which is only surface level naively prevented by DRM solutions.
For example, HBO Max will prevent unauthorized access for DRM purposes in the sense that I cannot watch a movie without logging in. It WILL NOT prevent access if I log in, or anyone else on Earth logs in. Are you seeing the problem?
If you evolve the smartcard based systems with better I/O capabilities, then you end up with a modern smartphone. At which point you may as well let the user supply their own rather than charging them lots of money for a dedicated device that's not much different.
If they really cared about scams, they could remove all these casino-like games from the Play Store. But they aren't going to do that, because a huge chunk of Play Store revenue comes from those scam games.
That said, the legal obligations around how this works are very different. One of the reasons the common advice is to use a credit card for online purchases instead of a debit card or checking account link is that they carry different liability expectations around fraud[0].
[0]: there are of course a multitude of good reasons for this advice generally speaking, but this one is cited a lot
Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved". Fraud and abuse have ground-truth signals in the form of customers getting upset at you because their account got hacked and something bad happened to them.
2. This stuff is also used to block phishing and it works well for that too. I'd explain how, but you wouldn't believe me.
You mention check fraud so maybe you're banking with some US bank that has terrible security. Anywhere outside the USA, using a minimally competent bank means:
• A password isn't enough to get into someone's bank account. Banks don't even use passwords at all. Users must auth by answering a smartcard challenge, or using a keypair stored in a secure element in a smartphone that's been paired with the account via a mailed setup code (usually either PIN or biometric protected).
• There is no such thing as check fraud.
• There is no such thing as credit card phishing either. All CC transactions are authorized in real time using push messaging to the paired mobile apps. To steal money from a credit card you have to confuse the user into authorizing the transaction on their phone, which is possible if they don't pay attention to the name of the merchant displayed on screen, but it's not phishing or credential theft. (A sketch of what that signing step can look like follows below.)
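To make the mechanics concrete, here is a minimal Kotlin sketch of the authorization step under some assumptions: the push payload has already arrived, the phone holds a device-resident key from an earlier pairing step (the alias "bank_login_key" and the payload layout are made up), and the app has just displayed the merchant and amount to the user. The point is that the signature covers exactly what the user saw.

```kotlin
import java.security.KeyStore
import java.security.PrivateKey
import java.security.Signature

// Sketch: authorize one specific transaction by signing the bank's challenge plus the
// transaction details with a device-resident key. Field layout is invented.
data class TxChallenge(val challenge: ByteArray, val merchant: String, val amountCents: Long)

fun authorize(tx: TxChallenge): ByteArray {
    // Assumes a key created during pairing, e.g. under alias "bank_login_key".
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    val privateKey = keyStore.getKey("bank_login_key", null) as PrivateKey

    // What gets signed is exactly what was displayed to the user.
    val payload = tx.challenge + "|${tx.merchant}|${tx.amountCents}".toByteArray()

    return Signature.getInstance("SHA256withECDSA").run {
        initSign(privateKey)     // with auth-bound keys this step requires PIN/biometric
        update(payload)
        sign()
    }
}
```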
If it's really a problem they care about, here are some priorities. (And I'd personally be happy if they cared, as I have some family members who got scammed by those.)
I guess I'm unusual in that I've been using an "online" only bank for 20 years (back then it wasn't so online... I had a stack of UPS overnight envelopes for check deposits), but I cannot imagine patronizing a bank that won't let me log in and do basically anything from a browser.
The reason they used SMS codes for a while is because phones have always tried to block malware from reading your screen or SMS storage whereas PCs don't, and because phones can do remote attestation protocols to the network as part of their login sequence. The SIM card contains keys used to sign challenges, and the network only allows authorized radio firmwares to log on. So by sending a code to a phone you have some cryptographic assurance that it was received by the right user and viewed only by them.
2FA and RA are closely related for that reason. The second factor is dedicated hardware which enforces that only a human can interact with it, and which can prove its identity cryptographically to a remote server. The mobile switching center, in the case of SMS codes.
Obviously, this was a very crude system because malware on the PC could intercept the login after the user authorized, but at least it stopped usage of the account when the user wasn't around. Modern app based systems are much more secure.
This might be the case for a couple of banks - or maybe in one or two specific countries, but broadly, none of what you've said here applies to banks anywhere else in the world.
I am fine with locking down devices that have very limited security purposes. I am fine with my passport containing locked down hardware if it makes it harder to forge. But I am also not browsing the web on my passport, and therefore its security requirements cannot prevent me from removing ads.
There is an entire name for this: dark pattern.
People make this mistake all the time. It's a very common measurement problem, because measuring is actually very hard.
Are we measuring the right thing? Does it mean what we think it means? Companies spend hundreds of billions trying to answer those questions.
2. No, it cannot block phishing, because if I get your password, I can get in.
To your points:
- yes, banks in the US use one time codes too. Very smart of you, unfortunately not very creative. Trivial to circumvent in most cases. Email is the worst, SMS better, TOTP best.
TOTP doesn't matter if the user just takes their code and inputs it into whatever field is put in front of them (see the sketch after this list).
- yes there is such a thing as check fraud, you not knowing what it is doesn't matter.
- if I had to authorize each CC transaction on my phone, I'd put a bullet in my head. That's shit.
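On the TOTP point above: a code is just an HMAC of a shared secret and the current 30-second window, truncated to six digits, so whoever holds the code inside that window can replay it; possession of the phone proves nothing to the server. A minimal RFC 6238-style sketch in Kotlin (the secret is a made-up example; real enrollment uses the secret from the QR code):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Minimal TOTP (RFC 6238 style, HMAC-SHA1, 6 digits, 30 s step).
// The shared secret below is illustrative only.
fun totp(secret: ByteArray, unixTimeSeconds: Long, stepSeconds: Long = 30): String {
    val counter = unixTimeSeconds / stepSeconds
    val msg = ByteArray(8) { i -> ((counter ushr (56 - 8 * i)) and 0xFFL).toByte() }

    val mac = Mac.getInstance("HmacSHA1")
    mac.init(SecretKeySpec(secret, "HmacSHA1"))
    val hash = mac.doFinal(msg)

    // Dynamic truncation (RFC 4226): take 4 bytes at an offset derived from the hash.
    val offset = hash[hash.size - 1].toInt() and 0x0F
    val binary = ((hash[offset].toInt() and 0x7F) shl 24) or
                 ((hash[offset + 1].toInt() and 0xFF) shl 16) or
                 ((hash[offset + 2].toInt() and 0xFF) shl 8) or
                 (hash[offset + 3].toInt() and 0xFF)
    return "%06d".format(binary % 1_000_000)
}

fun main() {
    val secret = "example-shared-secret".toByteArray()
    val windowStart = (System.currentTimeMillis() / 1000 / 30) * 30
    // The same code is valid for the whole window, no matter who types it in where:
    println(totp(secret, windowStart))
    println(totp(secret, windowStart + 29))   // still the same 30 s step -> same code
}
```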
Meanwhile, even if attestation does reduce fraud, the user's ownership of the device is forfeited in the process, all in chasing a dragon's tail.
Yes, I can do it now, but this is only because Google allows me to do that on their approved Android distribution, not because they are unable to prevent me from doing it. I don't trust them to not take away that freedom from me as soon as they can be sure that they can afford the anti-trust lawsuit since their core business model is to show me ads.
I know that my bank doesn't care about my browser, but by relying on Play Integrity they are indirectly forcing me to operate in Google's control regime in every other aspect on my device.
I don't want them to control my software stack, period. I don't care if they act as the good guys right now; they have been steadily going downhill in the moral department and I expect them to continue to do so.
I don't understand how you can act like there is no problem at all with technology like this.
That said, there is one major bank I use that still allows password only.
The losses due to fraudulent CC activity are governed by the FCBA.
It’s shocking how people think companies do this kind of stuff out of good will rather than being forced by law.
[0]: https://en.wikipedia.org/wiki/Web_Environment_Integrity
What could possibly go wrong. It's not only morally questionable no matter what "advantages" it provides Google, but it's also technically ridiculous, because _even if every single computing device was attested_, by construction I can still trivially find ways to use them to "brute force" Google logins. The technical "advantage" of attestation immediately drops to 0 once it is actually enforced (this is where the seatbelt analogy falls apart).
Next thing I suggest after forcing remote attestation on all devices is tying these device IDs to government-issued personal ID. Let's see how that goes over. And then for the government to send the killing squad once one of these devices is used to attack Google services. That should also improve security.
Here's the dystopian future we're building, folks. Take it or leave it. After all, it statistically improves security!
If 3 attempts per hour is enough to gain access, then it doesn't seem attestation can save you. I imagine a physical phone farm will still be economically viable in such case.
"Your device is loading a different operating system."
It was very effective when this problem was new. Don't know about the current state of things.
I see creating a mechanism for remote attestation of consumer devices as morally bad because it's a massive transfer of power away from end users to corporations and governments. A scheme where only computers blessed by a handful of megacorporations can be used to interact with the wider world will be used for evil even if current applications are fairly benign.
But how else should Google and their users react? Insist on offering a platform with far more abuse while subjecting users to worse user experiences and websites to more attacks… in the name of abstract freedom?
Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.
This is not the anti-attestation / anti-seatbelt argument many think it is.
All security is mitigation. There is no perfection.
But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.
I think your enthusiasm for melodrama and snark may be clouding your judgment of the actual topic.
Now, you have a bucket of mobile users coming to you with attestation signals saying they’ve come from secure boot, and they are using the right credentials.
And you’ve got another bucket saying they’re Android but with no attestation, and also using the right credentials.
You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.
Do you treat the logins the same? Real customers HATE intrusive security like captchas.
Are you understanding the tech better now? The entire problem and solution space are different from what you think they are.
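For the flavor of how a service might actually consume those buckets, here is a hedged Kotlin sketch of differential risk treatment. The thresholds and the attested-vs-unattested multiplier are invented for illustration (loosely echoing the 10,000x claim above), not taken from any real fraud model:

```kotlin
// Sketch of differential risk treatment based on attestation signals.
// All numbers are illustrative only.
enum class Action { ALLOW, CAPTCHA_OR_2FA, MANUAL_REVIEW }

data class LoginSignal(
    val credentialsValid: Boolean,
    val attested: Boolean,          // e.g. MEETS_DEVICE_INTEGRITY from Play Integrity
    val priorFraudScore: Double     // 0.0 .. 1.0 from other heuristics
)

fun decide(s: LoginSignal): Action {
    if (!s.credentialsValid) return Action.CAPTCHA_OR_2FA
    // Hypothetical base rates: fraud is assumed far rarer on attested devices,
    // so the same prior score gets treated very differently.
    val risk = if (s.attested) s.priorFraudScore * 0.0001 else s.priorFraudScore
    return when {
        risk < 0.001 -> Action.ALLOW            // no friction for the typical user
        risk < 0.05  -> Action.CAPTCHA_OR_2FA   // step-up challenge
        else         -> Action.MANUAL_REVIEW
    }
}

fun main() {
    val score = 0.04
    println(decide(LoginSignal(credentialsValid = true, attested = true, priorFraudScore = score)))  // ALLOW
    println(decide(LoginSignal(credentialsValid = true, attested = false, priorFraudScore = score))) // CAPTCHA_OR_2FA
}
```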
It won't solve the problem for _anyone_ once it is required, because it is trivial to bypass once the incentive is there. That is what kills it technically, and that's before even getting into the other cons (which really should not be ignored). Seatbelts absolutely do not have this problem.
> All security is mitigation. There is no perfection.
This is an absolutely meaningless tautology. It is a perfectly true statement. It adds absolutely nothing to the discussion.
Say I argue in favor of "putting a human to verify each and every banking transaction with a phone call to the source and the destination". And then you disagree, saying that there will be costs, a waste of time for everyone, and that the security improvement will be minimal at best. And then I counter with "All security is mitigation, there is no perfection!".
Can you see what you're doing here? This is another textbook example of the politician's fallacy (something must be done; this is something; therefore we must do this).
It is trying to bypass the discussion of the actual merits of the proposal, as well as its cons, by saying "well, it does something!". True, it does something. So what? If the cons are bad enough, or the benefit too small, maybe it's best NOT to do it anyway!
> But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.
Not long ago we had, right here on HN, a discussion about the merits of remote attestation for anti-cheating: it turns out the "lot of money" is a custom USB mouse (or an add-on to one) that costs cents to make. Sure, it's not zero. You have to go more and more draconian to actually make it "a lot of money", but then you'll tell me I'm being melodramatic.
Probably not even that, but it limits liability, and that's the only purpose. Just like the manual in your car: nobody will ever read it, but it contains a warning for every single thing that could happen.
There are banking systems in some countries that do not even require an ATM/Debit card for automated withdrawals, just an account number and grouping code.
I never had to wait on Dell to type apt update and apt upgrade.
In my entire life, I have never banked anywhere that would let you transact or log in with just a desktop browser. You seem to be convinced this is an edge case but every bank in Europe works this way, as far as I know. There are US financial institutions that would do this, but the US financial system is uniquely fraud prone to a level just not tolerated elsewhere. It lagged years behind on chip-and-PIN cards for instance, and largely never managed to roll it out. The US treats bank account numbers as credentials and other stuff that doesn't apply elsewhere.
Just look at this thread: plenty of people saying what I'm saying. If you bank somewhere that lets people use just a browser to do transactions, you're either in an environment where fraud doesn't matter at all, or you're with a bad bank and should leave them.
TOTP, which you say is best, is considered weak sauce outside the US. I don't know any banks that have used it for a very long time. It's not secure enough. Cheques were phased out decades ago. There are entire generations in Europe who have never even seen a cheque, let alone written one. I think the last time I had a chequebook issued it was in 2004.
IIRC the differences arise because in the US consumer legislation makes merchants liable for refunding fraudulent transactions, so banks and consumers have no incentive to improve security and merchants can't do it except via convoluted and hardly working risk analysis. It's just so easy to do chargebacks there that nobody bothers fixing the infrastructure. This pushes everyone into the arms of Amazon and the like because they have the most data for ML.
Outside the US and especially in Europe, merchants aren't liable for fraudulent transactions if they verified the credentials correctly. It's much harder to do chargebacks as a consequence. Even if a merchant delivered subpar stuff or there was some other commercial dispute, chargebacks are very hard (I tried once and the bank just refused). So liability shifts to banks, unless they can show that the transaction was authorized by the account holder and they had correct information. That means banks and merchants are incentivized to improve security, and they do.
The threat model is entirely different from what your brute force phrase implies, and it is also a threat model that isn't relevant to banking, which was the topic of the discussion in the first place. And more importantly, it doesn't affect the security of the user.
You've never had to wait for Dell to type apt update and apt upgrade, but macOS users have to wait for Apple to update their computer.
You mention the US as lagging behind Europe, which is true - but I assure you from my experience working in international fintech from the US, there are more people in the world than the entire population of my country with even worse banking security controls by default.
The OEM phones are cheap because the manufacturer sells them at a loss, recouping money by locking them down and pre-installing certain software.
The alternative is that Google is properly regulated, or cheap smartphones don't exist.
1. I don't believe this research - measurement is hard. If you just count any use of an unattested device as malicious, as the Play Integrity API effectively does now, then you've fudged the numbers.
2. Even IF the research is true, relative probability is doing the heavy lifting here.
There are still going to be more malicious attempts from attested devices than from unattested ones. Why? Because almost everyone is running attested devices. Duh.
Grandma isn't going to load an unsigned binary on her phone. Let's just be fucking real for one second here.
No, she's gonna take a phone call and write a check, or get an email and go to a sketchy website and enter her login credentials, and then open the inevitable 2FA email and enter the code she got into the website. Guess what - you don't need a rooted device for that. You just don't.
There are extremely high effort malicious attempts, like trying to remotely rootkit someone's phone, and then low effort ones - like email spam and primitive social engineering.
Guess which ones you actually see in the wild.
Is there a real threat here? Sure. But threat modeling matters. For 99.99% of people, their threat model just does not involve unsigned binaries they manually loaded.
Why are we sacrificing everything to optimize for the 0.01%, when we haven't even gotten CLOSE to optimizing for the other 99.99%?
Isn't that fucking stupid? Why yes, yes it is.
They did ask me to make a statement to the police, which I did.
Funnily enough when I talked to the police, they said, "Oh, $7k, is that all? Just today we had someone lose over $140k".
How do you even spend $140k on a credit card? Must have been a platinum card or whatever.
I'm in Australia, not sure how different things are here.
People seem to be getting really hung up on this point. Accepting a browser means letting you do everything with nothing but whatever program you want that speaks HTTP. No special apps or authenticators or extra tokens. You should be able to write a plain Python script that sends money whenever it wants, on its own.
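For concreteness, "nothing but whatever program speaks HTTP" means something like the following Kotlin sketch; the host, paths, and form fields are entirely made up, and the point is only that a browser-only bank imposes nothing a plain HTTP client cannot replay:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch only: "example-bank.test", the paths and the form fields are invented.
fun main() {
    val client = HttpClient.newBuilder()
        .cookieHandler(java.net.CookieManager())   // keep the session cookie like a browser would
        .build()

    fun post(path: String, form: String): HttpResponse<String> =
        client.send(
            HttpRequest.newBuilder(URI.create("https://example-bank.test$path"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build(),
            HttpResponse.BodyHandlers.ofString()
        )

    post("/login", "user=alice&password=hunter2")                 // the password is all it takes
    post("/transfer", "to=IBAN123&amount=100.00&currency=EUR")    // ...and money moves
}
```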
European banks do not allow this in my experience, and nothing being posted to this thread indicates otherwise. Apparently there are some banks especially in the USA who just don't care about security at all because they can push fraud costs onto merchants, so they do accept browsers for everything, or they make some trivial effort and if users undermine it using Google Voice or whatever they don't care - that's fine, I overgeneralized by saying "banks" instead of geographically qualifying it. Mea culpa.
But in your case, you need the assistance of something that's not a browser.
I thought that was what you meant too? If you mean TOTP via a QR code exposing the secret, then of course I agree, no banks allow that. But your comment read as a claim that all TOTP solutions were inherently deemed insecure and wouldn't work, and that smartphone based solutions were the only viable alternative outside the US. The code display is of course vulnerable to man-in-the-middle attacks where you trick users into authorizing transactions via fake web pages, but it is not a threat that is deemed serious enough to prevent our whole country from basing our digital infrastructure on code displays.
I think people get hung up on your point about banks not accepting browsers because you don't formulate your point very clearly, and it reads like you claim that they don't accept browsers at all when what you mean is just a browser and nothing else. Most European banks do in fact allow you to do business using a browser - you just have to prove your identity via other means as well. And there are no good security arguments why those means must be in the form of a smartphone app whose security requirements have the side effect of locking you into a business relationship with one of two American tech giants. As you can see, a whole country of almost six million people authenticates everything from bank transactions to naming their kids and buying houses using a system which allows you to use just a code display.
I think the strategy of remote attestation of the whole OS stack up to and including the window manager is a clunky and inelegant approach from an engineering perspective, and from a freedom perspective I think it is immoral and should be illegal. What I could accept would be an on-phone security module with locked down firmware which can simply take control of the whole screen regardless of what the OS is doing, with a clear indicator of when it is active. This allows you to authorize transactions and inspect their contents, and only needs remote attestation of the security module, not the whole OS.
https://www.dr.dk/nyheder/seneste/mitid-kan-digitalt-udelukk...
So my guess is that this is not because they think TOTP is secure enough but rather due to the political aspects of it being centrally run by the government.
The security argument is pretty straightforward and I guess you know it already, because as you say, TOTP is vulnerable to phishing (unless you use some of the anti-bot tech I mentioned elsewhere but it's heuristic and not really robust over the long term). Whereas if you do stuff via an app, not only can malware not authorize transactions, but it can't view your financial details either - privacy being a major plank of financial security that can't be reliably offered via desktop browsers at all, but can via phones.
The alternative you propose is basically a secure hypervisor. Such schemes have been implemented in the past, but it's not ideal technically. For fast payment authorization via NFC, this is actually how it works, which is why when you touch a phone to a terminal to pay for something you don't see any details of the transaction on the display itself, just an animation. The OS doesn't get involved in the transaction at all, it's all handled by the embedded credit card smartcard which is hard-wired to the NFC radio. The OS gets notified and can send configuration messages, but that's about it.
For anything more complex, the parallel world still needs to be a full OS that boots up, with display drivers, touchscreen drivers, text rendering, a network stack, a way to update that software, etc. You end up with a second copy of Android and dual booting, which makes memory pressure intolerable and the devices more expensive. But it's hard to justify that when the base phone OS has become secure enough! It's already multi-tasking and isolating worlds from each other. There are no users outside of HN/Slashdot who would find this arrangement preferable. And as your concern is not fully technical, it's not clear why moving the hardware enforcement around a bit from kernel supervisor to hypervisor would make any difference. This isn't something that can be analyzed technically, as it all seems to boil down to fear over the loss of ad blocking.
There are two discussions here, the technical and the one concerned with freedom. I am concerned with both, and I think we need a compromise which doesn't throw out the latter in order to obtain a perfectly secure model.
My concern is not only with ad removal, that was just an example. My concern is digital autonomy in general, and the issue of giving an American company the power to decide what software users around the world are allowed to execute. They can censor software they don't like, and rogue governments can pressure them to censor software that THEY don't like. E.g. the EU who might want to prevent people from installing E2EE apps soon when Chat Control is rolled out.
There are good technical security arguments for phone based solutions over the alternatives, but it doesn't mean that the alternatives are worthless, just that the users have to be a bit more vigilant. I think that is a better compromise in the interest of protecting freedom and democracy.
We are some of the few people who can understand the long-term implications of the different technical solutions and the potential tools it will give private companies and governments to suppress people. If we are not advocating for freedom over convenience, then who will?
https://grapheneos.org/articles/attestation-compatibility-gu...
Most devices are simply locked and won't let you unlock the bootloader; you're stuck with the stock OS.
You can't even try alternatives.
There needs to be a point where enough is enough, and locking down devices so that you cannot install programs nor practically use custom operating systems on them anymore is way past that line.
[1]: https://palant.info/2023/01/02/south-koreas-online-security-... [2]: https://ee.kaist.ac.kr/en/research-achieve/in-south-korea-ma...
That is to say, banks are not the only entities in existence.
If they really need such high security to avoid scams and losing such large sums of money they should just issue bank customers with a locked down device that can only be used for banking (maybe banks can collaborate on a standard for it so you can have one device for multiple banks). To be clear, I would still probably be strongly against such a proposal but at least we would be talking about a somewhat understandable approach.