zlacker

[return to "Google Web Environment Integrity Is the New Microsoft Trusted Computing"]
1. Knee_P+lp[view] [source] 2023-07-27 06:31:08
>>neelc+(OP)
There is a freedom problem, there is a hardware problem and there is a social problem.

The freedom problem is this: you will not be able to roll your own keys.

This is probably the biggest nail in the coffin for a ton of computers out there. In theory you could simulate the workings of a TPM in software. If you built a kernel module, the browser would have no real way of knowing whether it was sending requests to a piece of hardware or a piece of software. But the fact that you would have to use Microsoft's or Apple's keys makes this completely impossible.

The hardware problem is this: you will not be able to use older or niche/independent hardware.

As we established that software simulation is impossible, this makes a ton of older devices utter e-waste for the near future. Most Chromebooks themselves don't have a TPM, so even though they are guaranteed updates for 10 years, how are they going to browse the web? (Maybe in that case Google could actually deploy a software TPM with their keys, since it's closed source.) I have a few old business laptops at home that have a 1.x version of the TPM. In theory it performs just as well as TPM 2.x, but they will not be supported because, again, I will not be able to use my own keys.

Lastly there is the social problem: is DRM the future of the web?

Maybe this trusted computing stuff really is what the web is bound to become, either using your certified TPM keys or maybe your Electronic National ID card or maybe both in order to attest the genuineness of the device that is making the requests. Maybe the Wild West era of the web was a silly dream fueled by novelty and inexperience and in the future we will look back and clearly see we needed more guarantees regarding web browsing, just like we need a central authority to guarantee and regulate SSL certificates or domain names.

2. raxxor+GM[view] [source] 2023-07-27 09:46:02
>>Knee_P+lp
The wild west internet performed perfectly well. There are some problems here and there that could be improved, but none of them are addressed by suggestions like this. This is about control and market reach, nothing else. Secure Boot was as well. The evil maid problem is at least believable in a corporate context. These suggestions are just fluffy crap.
3. kahncl+Xg1[view] [source] 2023-07-27 13:21:25
>>raxxor+GM
Really? Spam, scams, SEO trash, bots and AIs are utterly rampant.

I don’t want Google and Microsoft to have the keys to the kingdom, but on the other hand, I really want a way to know that I’m having genuine interactions with real people.

I wish government was getting more involved here.

4. vetina+oy1[view] [source] 2023-07-27 14:28:55
>>kahncl+Xg1
It won't solve any of these problems.

But you will have to use hardware and software from approved vendors.

5. mike_h+252[view] [source] 2023-07-27 16:34:50
>>vetina+oy1
It can (that's why it's being pursued) and that, ironically enough, could even empower decentralized and P2P networks. Hear me out.

If you look at the history of the internet it's basically a story of decentralized protocols with a choice of clients being outcompeted by centralized services with a single client, usually because centralized services can control spam better (+have incentives to innovate etc, it's not just one issue).

Examples: USENET -> phpBB -> reddit, IRC -> Slack, ISP hosted email -> Gmail -> Facebook Messenger, SMS -> WhatsApp/iMessage, self-hosted git -> GitHub.

The reason spam kills decentralized systems is that all the techniques for fighting it are totally ad-hoc security-through-obscurity tricks combined with large dollops of expensive Big Data and ML processing, all handled by full-time teams. It's stuff that's totally out of reach for indie server hosts. Even for the big guys it frequently fails!

Decentralized networks suffer other problems beyond spam due to their reliance on peers being trusted. They're fully open to attack at all times, making it risky and high effort to run nodes. They're open to obscure app-specific DoS attacks. They are riddled with Sybil attacks. They leak private data like sieves. Many features can't be implemented at all. Given all these problems, most users just give up and either outsource hosting or switch to entirely centralized services.

I used to work on the Gmail spam team, and also Bitcoin, so I have direct experience of the problems in both contexts.

Remote attestation (RA) isn't by itself enough to fix these problems, but it's a tool that can solve some of them. Consider that if USENET operators had the ability to reliably identify clients, then USENET would probably have lasted a fair bit longer. Servers wouldn't have needed to make block/allow decisions themselves, they could have simply propagated app identity through the messages. Then you could have killfiled programs as well as people. If SpamBot2000 shows up and starts flooding groups, one command is all it takes to wipe out the spam. Where it gets trickier is if someone releases an NNTP client that has legit users but which can be turned into a spambot, like via scripting features. At that point users would have to make the call themselves, or the client devs would need to find a way to limit how much damage a scripted client can do. So the decision on what is or is not "approved" would be in the hands of the users themselves, in that design.
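To make the killfile idea concrete, here is a minimal sketch, assuming a hypothetical message format in which an attested client identity travels with each post (the `Message` type, killfile names, and addresses are all invented for illustration; no real NNTP software works this way):

```python
from dataclasses import dataclass

@dataclass
class Message:
    poster: str
    body: str
    client: str  # hypothetical attested app identity, propagated with the message

# One command to wipe out a flooding spambot: killfile its client identity,
# the same way killfiles have always listed people.
killfiled_clients = {"SpamBot2000"}
killfiled_posters = {"spammer@example.net"}

def visible(msg: Message) -> bool:
    """Drop messages from killfiled programs as well as killfiled people."""
    return (msg.client not in killfiled_clients
            and msg.poster not in killfiled_posters)

msgs = [
    Message("alice@example.org", "hello", "tin/2.6"),
    Message("bot7@example.net", "BUY NOW", "SpamBot2000"),
]
kept = [m.body for m in msgs if visible(m)]
print(kept)  # ['hello']
```

The point of the design is that the server never decides what is "approved"; it just propagates the attested identity, and each reader applies their own killfile.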

The above may sound weird, but it's a technique that allows P2P networks with client choice to be competitive against centralised alternatives. And it's worth remembering that for all the talk of the open web and maybe the EU can do this or that, Facebook just did the most successful social network launch in history as a mobile/tablet-only app that blocks the EU. A really good reason not to offer a web version is that mobile-only services are much easier to defend against spam, again, because mobiles can do RA and browsers cannot. So the web is already losing in this space due to lack of these tools. Denying the web this sort of tech may seem like a short-term win, but it just means that stuff won't be served to browsers at all, and P2P apps that want to be accessible from desktops won't be able to use it either.

Anyway it's all very theoretical, because at this time Windows doesn't have a workable app-level RA implementation, so it's mobile-only for now anyway (Linux can do it between servers in theory, but not really on the desktop).

6. yjftsj+iA2[view] [source] 2023-07-27 18:27:26
>>mike_h+252
> If SpamBot2000 shows up and starts flooding groups, one command is all it takes to wipe out the spam. Where it gets trickier is if someone releases an NNTP client that has legit users but which can be turned into a spambot, like via scripting features. At that point users would have to make the call themselves, or the client devs would need to find a way to limit how much damage a scripted client can do

At which point it comes back to not allowing anything but the most locked-down clients, and disempowering users... and still failing, because all clients can be turned into spam bots with the most trivial application of AutoHotkey et al.

7. mike_h+SK2[view] [source] 2023-07-27 19:18:42
>>yjftsj+iA2
It's all built in chains:

- The OS can trivially expose to the app whether events are coming from real hardware or another app, information the app can then either report or not report.

- The attested user-agent string given can be extended to include information about any scripts that are driving it, e.g. script hashes.

And so on. Then these things can have reputations computed over them. If there's a script hash that shows up reliably in spam, and never shows up in ham, then you can auto-mark those posts as spam. If the scripts aren't known then messages can be throttled until enough users have voted on whether the messages are spam or not. All this is fairly straightforward to code up, again, in a theoretical world in which operating systems expose information like whether events are emulated or not (today they don't).
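The reputation part of this is straightforward to sketch. The following is a minimal, hypothetical version, assuming attested script hashes arrive with each post and users vote spam/ham; the function names, vote threshold, and hashes are all invented for illustration:

```python
from collections import Counter

# Community-maintained vote tallies, keyed by attested script hash.
spam_votes: Counter = Counter()
ham_votes: Counter = Counter()

def record_vote(script_hash: str, is_spam: bool) -> None:
    """A user marks a post driven by this script as spam or ham."""
    (spam_votes if is_spam else ham_votes)[script_hash] += 1

def classify(script_hash: str, min_votes: int = 10) -> str:
    """Auto-mark posts by script reputation; throttle unknown scripts."""
    spam, ham = spam_votes[script_hash], ham_votes[script_hash]
    if spam + ham < min_votes:
        return "throttle"  # not enough votes yet: rate-limit until users decide
    if spam > 0 and ham == 0:
        return "spam"      # shows up reliably in spam, never in ham
    return "ok"

for _ in range(12):
    record_vote("deadbeef", is_spam=True)
print(classify("deadbeef"))  # spam
print(classify("cafebabe"))  # throttle
```

Because the tallies are just counters keyed by a hash, this is the kind of small script users could run and share themselves, rather than something needing a full-time ML team.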

The trick is that clients don't have to be locked down. The tech is fundamentally about letting you prove true statements. Those statements can be as complex as needed to allow whatever level of customization and control is desired. The more malleable clients are the more complex it becomes to determine what is and isn't considered OK, but in a decentralized system that policy complexity is up to the end users themselves to decide. They can share logic in the same way USENET users used to share killfiles.

Anyway, my point isn't to try and design a full system here. It's research level stuff. Only to point out that this stuff brings spam/abuse control out of BigTech-only world back into the realm of small scripts that can be written and shared by users in a decentralized way.

8. yjftsj+BM2[view] [source] 2023-07-27 19:28:35
>>mike_h+SK2
> If there's a script hash that shows up reliably in spam, and never shows up in ham, then you can auto-mark those posts as spam. [...] All this is fairly straightforward to code up, again, in a theoretical world in which operating systems expose information like whether events are emulated or not (today they don't).

And in a world that has zero outliers or unusual users. In reality, I guarantee my accessibility software would get flagged as emulated input (because it is) and marked as spam.

9. mike_h+Q53[view] [source] 2023-07-27 20:55:53
>>yjftsj+BM2
Again, it's all chainable. If an app is being controlled by accessibility software, the identity of that software can show up in the RA, so readers can say "it's OK if this app is automated as long as it's by something on this community maintained list of genuine accessibility tools".
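A reader-side policy for that could look like the following sketch, assuming a hypothetical attestation chain that lists, innermost first, whatever is driving the input (the chain format, tool names, and allowlist are all invented for illustration):

```python
# Community-maintained list of genuine accessibility tools (hypothetical).
community_allowlist = {"NVDA", "JAWS", "Dragon"}

def accept(attestation_chain: list[str]) -> bool:
    """attestation_chain lists what drives the input, innermost first:
    e.g. ["NVDA", "NewsReaderApp"] means the app reports it is being
    automated by NVDA. Accept only if every automator is allowlisted."""
    automators = attestation_chain[:-1]  # everything driving the final app
    return all(tool in community_allowlist for tool in automators)

print(accept(["NVDA", "NewsReaderApp"]))         # True: known accessibility tool
print(accept(["SpamBot2000", "NewsReaderApp"]))  # False: not on the list
```

The key property is that the app only reports the truth ("I am being automated by X"); whether X is acceptable is a policy the community decides, not the vendor.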