zlacker

[return to "Google Web Environment Integrity Is the New Microsoft Trusted Computing"]
1. Knee_P+lp[view] [source] 2023-07-27 06:31:08
>>neelc+(OP)
There is a freedom problem, there is a hardware problem and there is a social problem.

The freedom problem is this: you will not be able to roll your own keys.

This is probably the biggest nail in the coffin for a ton of computers out there. In theory you could simulate the workings of a TPM in software: if you built a kernel module, the browser would have no real way of knowing whether its requests went to a piece of hardware or a piece of software. But the fact that you would have to use Microsoft's or Apple's keys makes this completely impossible.
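For concreteness, here is a toy sketch of the check a verifier runs (made-up names, not any real API); it shows where a faithful software emulation still gets rejected, as long as the trusted roots belong to the vendors:

    from dataclasses import dataclass

    # Roots the verifier trusts: the hardware/OS vendors' attestation CAs.
    # Nothing the user generates ever appears in this set.
    TRUSTED_VENDOR_ROOTS = {"vendor-root-A", "vendor-root-B"}

    @dataclass
    class Attestation:
        measurements_ok: bool   # signature over the attested measurements checks out
        signing_root: str       # root of the cert chain behind the signing key

    def verify(att: Attestation) -> bool:
        # A software TPM (e.g. a kernel module) can pass this check just fine:
        # it can speak the protocol and sign measurements perfectly well.
        if not att.measurements_ok:
            return False
        # ...but it fails here, because "rolling your own keys" means your
        # root simply isn't on the vendor list.
        return att.signing_root in TRUSTED_VENDOR_ROOTS

    print(verify(Attestation(True, "vendor-root-A")))  # genuine device -> True
    print(verify(Attestation(True, "my-own-root")))    # software TPM  -> False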

The hardware problem is this: you will not be able to use older or niche/independent hardware.

Since we established that software simulation is off the table, a ton of older devices become utter e-waste in the near future. Most Chromebooks themselves don't have a TPM, so even though they are guaranteed updates for 10 years, how are they going to browse the web? (Maybe in that case Google could actually deploy a software TPM with their keys, since it's closed source.) I have a few old business laptops at home that have a 1.x version of the TPM. In theory they perform just as well as TPM 2.x, but they will not be supported because, again, I will not be able to use my own keys.

Lastly there is the social problem: is DRM the future of the web?

Maybe this trusted computing stuff really is what the web is bound to become, using your certified TPM keys, or maybe your electronic national ID card, or maybe both, to attest to the genuineness of the device making the requests. Maybe the Wild West era of the web was a silly dream fueled by novelty and inexperience, and in the future we will look back and clearly see that we needed more guarantees around web browsing, just as we need a central authority to guarantee and regulate SSL certificates or domain names.

2. raxxor+GM[view] [source] 2023-07-27 09:46:02
>>Knee_P+lp
The wild-west internet performed perfectly well. There are some problems here and there that could be improved, but none of them are addressed by suggestions like this. This is about control and market reach, nothing else. Secure Boot was as well. The evil maid problem is at least believable in a corporate context. These suggestions are just fluffy crap.
3. kahncl+Xg1[view] [source] 2023-07-27 13:21:25
>>raxxor+GM
Really? Spam, scams, SEO trash, bots, and AIs are utterly rampant.

I don’t want Google and Microsoft to have the keys to the kingdom, but on the other hand, I really want a way to know that I’m having genuine interactions with real people.

I wish government was getting more involved here.

4. vetina+oy1[view] [source] 2023-07-27 14:28:55
>>kahncl+Xg1
It won't solve any of these problems.

But you will have to use hardware and software from approved vendors.

5. mike_h+252[view] [source] 2023-07-27 16:34:50
>>vetina+oy1
It can (that's why it's being pursued) and that, ironically enough, could even empower decentralized and P2P networks. Hear me out.

If you look at the history of the internet it's basically a story of decentralized protocols with a choice of clients being outcompeted by centralized services with a single client, usually because centralized services can control spam better (+have incentives to innovate etc, it's not just one issue).

Examples: USENET -> phpBB -> reddit, IRC -> Slack, ISP hosted email -> Gmail -> Facebook Messenger, SMS -> WhatsApp/iMessage, self-hosted git -> GitHub.

The reason spam kills decentralized systems is that all the techniques for fighting it are totally ad-hoc security-through-obscurity tricks combined with large dollops of expensive Big Data and ML processing, all handled by full-time teams. It's stuff that's totally out of reach for indie server hosts. Even for the big guys it frequently fails!

Decentralized networks suffer other problems beyond spam due to their reliance on peers being trusted. They're fully open to attack at all times, making it risky and high effort to run nodes. They're open to obscure app-specific DoS attacks. They are riddled with Sybil attacks. They leak private data like sieves. Many features can't be implemented at all. Given all these problems, most users just give up and either outsource hosting or switch to entirely centralized services.

I used to work on the Gmail spam team, and also Bitcoin, so I have direct experience of the problems in both contexts.

Remote attestation (RA) isn't by itself enough to fix these problems, but it's a tool that can solve some of them. Consider that if USENET operators had the ability to reliably identify clients, then USENET would probably have lasted a fair bit longer. Servers wouldn't have needed to make block/allow decisions themselves, they could have simply propagated app identity through the messages. Then you could have killfiled programs as well as people. If SpamBot2000 shows up and starts flooding groups, one command is all it takes to wipe out the spam. Where it gets trickier is if someone releases an NNTP client that has legit users but which can be turned into a spambot, like via scripting features. At that point users would have to make the call themselves, or the client devs would need to find a way to limit how much damage a scripted client can do. So the decision on what is or is not "approved" would be in the hands of the users themselves, in that design.
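As a rough sketch of what that "killfile programs as well as people" idea could look like (the message fields and names here are invented for illustration, not taken from NNTP or any real client):

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        client_id: str   # e.g. "SpamBot2000/1.0", attested and propagated by servers
        body: str

    # Classic killfile: people you never want to hear from again.
    killfiled_senders = {"spammer@example.com"}
    # The new option: whole client programs, identified via attestation.
    killfiled_clients = {"SpamBot2000"}

    def visible(msg: Message) -> bool:
        if msg.sender in killfiled_senders:
            return False
        # One entry here wipes out every message the flooding program posted,
        # no matter how many throwaway accounts it posts from.
        return msg.client_id.split("/")[0] not in killfiled_clients

    inbox = [
        Message("alice@example.org", "tin/2.6.2", "hello"),
        Message("bot1@example.net", "SpamBot2000/1.0", "BUY NOW"),
        Message("bot2@example.net", "SpamBot2000/1.0", "BUY NOW"),
    ]
    print([m.body for m in inbox if visible(m)])  # -> ['hello']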

The above may sound weird, but it's a technique that allows P2P networks with client choice to be competitive against centralised alternatives. And it's worth remembering that for all the talk of the open web and what the EU might do, Facebook just did the most successful social network launch in history as a mobile/tablet-only app that blocks the EU. A really good reason not to offer a web version is that mobile-only services are much easier to defend against spam, again because mobiles can do RA and browsers cannot. So the web is already losing in this space due to the lack of these tools. Denying the web this sort of tech may seem like a short-term win, but it just means that stuff won't be served to browsers at all, and P2P apps that want to be accessible from desktops won't be able to use it either.

Anyway it's all very theoretical, because at this time Windows doesn't have a workable app-level RA implementation, so it's mobile-only for now anyway (Linux can do it between servers in theory, but not really on the desktop).

6. mwcamp+UX2[view] [source] 2023-07-27 20:18:24
>>mike_h+252
> at this time Windows doesn't have a workable app-level RA implementation

To make this work, I suppose it will finally be necessary for Windows to disallow all user-space code injection (e.g. in-process hook DLLs), including from assistive technologies. I guess this tightened security could be a per-app opt-in feature, at least initially. UI Automation on Windows 11 may finally be ready to take over the work that in-process injected DLLs (particularly from screen readers) previously did without performance regressions, though as far as I know, this hypothesis hasn't really been tested yet (or if it has, that happened inside the Windows accessibility team at Microsoft after I left). The trick will be to give the third-party screen reader developers a strong incentive to prioritize moving away from third-party code injection, without harming end-users in the process (i.e. not suddenly releasing a browser or OS update that breaks web browsing with screen readers).
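For what it's worth, Windows already has a per-process opt-in that points in this direction: the documented mitigation-policy API lets a process refuse to load images that aren't Microsoft-signed and turn off legacy extension points like AppInit DLLs. Whether that is strict enough to back an attestation claim is exactly the open question, but as a sketch (Python calling the Win32 API via ctypes):

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.SetProcessMitigationPolicy.argtypes = (
        ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t)
    kernel32.SetProcessMitigationPolicy.restype = wintypes.BOOL

    # Enum values from winnt.h
    ProcessExtensionPointDisablePolicy = 6
    ProcessSignaturePolicy = 8

    def enable(policy_class: int, flags: int) -> bool:
        # Both policies are a DWORD of bit flags; bit 0 turns the mitigation on.
        value = wintypes.DWORD(flags)
        return bool(kernel32.SetProcessMitigationPolicy(
            policy_class, ctypes.byref(value), ctypes.sizeof(value)))

    # Refuse to load any image that isn't Microsoft-signed from here on.
    print("signature policy:", enable(ProcessSignaturePolicy, 0x1))
    # Disable AppInit DLLs, Winsock LSPs and other legacy injection points.
    print("extension points:", enable(ProcessExtensionPointDisablePolicy, 0x1))

These two policies only cover DLL loading and the legacy extension points, though; they don't stop a cross-process attacker from writing code directly into the process, so an RA scheme would still need more than this.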

What other changes or API additions do you think will be necessary to enable workable app-level RA on Windows?

7. mike_h+l43[view] [source] 2023-07-27 20:48:52
>>mwcamp+UX2
Yes, it's harder for Windows. Desktop operating systems don't have all the details figured out, especially around detecting and controlling automation. RA has been around as a concept for decades, and implementations on consoles/phones/servers have pretty much worked for a while, but RA that works on general-purpose desktop computers is very new, and really only Apple has it.

The Windows team would need to at least:

- Get apps using MSIX (package identity)

- Design an API to get an RA for an app that has package identity. Make a proper keychain API (or better) whilst they're at it.

- You don't have to block debuggers or code injection, but if those things occur, that has to leave a trace that shows up in the RA data structure.

- Expose to apps where input events come from (hardware vs. synthesized).

- Compile databases of machine-level PCRs that reflect known-good configurations on different boards. Individuals can't do that work; it's too much effort to keep up with all the different manufacturers and Windows versions out there. MS would need to offer an attestation service like Apple does.

Some of that stuff is already there because they pushed RA in an enterprise context for a long time. I don't know how widely adopted it is though.
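To pull those bullets together, here is a purely hypothetical sketch of the shape such an app-level attestation result might take. None of these names exist anywhere; it's just the data a relying party would need:

    from dataclasses import dataclass, field

    @dataclass
    class AppAttestation:
        # MSIX package identity of the app asking for the attestation.
        package_family_name: str
        package_signer_hash: str
        # Debuggers and code injection aren't blocked, but their occurrence
        # is recorded so the relying party can decide what to make of it.
        debugger_attached: bool
        foreign_code_injected: bool
        # Where input events came from: hardware, or synthesized by software.
        synthetic_input_seen: bool
        # Machine-level PCR values, matched by a vendor attestation service
        # against its database of known-good board/firmware/OS configurations.
        pcrs: dict[int, bytes] = field(default_factory=dict)
        # Relying-party nonce plus a signature chaining back to the TPM and
        # the vendor root, covering everything above.
        nonce: bytes = b""
        signature: bytes = b""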
