zlacker

[parent] [thread] 12 comments
1. codeth+(OP)[view] [source] 2021-04-07 19:25:24
> Whether or not Signal's server is open source has nothing to do with security

This is true only if you are exclusively concerned about your messages' content and not about the metadata. As we all know, though, the metadata is the valuable stuff.

There is a second reason it is wrong, though: These days, lots of actual user data (i.e. != metadata) gets uploaded to the Signal servers[0] and encrypted with the user's Signal PIN (modulo some key derivation function). Unfortunately, many users choose an insecure PIN, not a passphrase with lots of entropy, so the derived encryption key isn't particularly strong. (IMO it doesn't help that it's called a PIN. They should rather call it "ultra-secure master passphrase".) This is where a technology called Intel SGX comes into play: It provides remote attestation that the code running on the servers is the real deal, i.e. the trusted and verified code, and not the code with the added backdoor. So yes, the server code does need to be published and verified.
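
To make the weak-PIN problem concrete, here's a toy sketch (stdlib Python; Signal's Secure Value Recovery actually uses Argon2, PBKDF2 is just a stand-in, and the salt/PIN are made up) of how cheaply a 4-digit PIN falls to offline brute force once an attacker has the derived key, or anything that lets them test a guess:

```python
import hashlib

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stand-in KDF (Signal uses Argon2); any deterministic KDF shares
    # the property exploited below. Iteration count kept low so the
    # demo runs quickly.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 1_000)

salt = b"per-user-salt"
key = derive_key("4821", salt)  # hypothetical user PIN

# The attacker enumerates the entire 4-digit space offline:
recovered = next(
    pin for pin in (f"{i:04d}" for i in range(10_000))
    if derive_key(pin, salt) == key
)
print(recovered)  # 4821
```

Even at production-grade work factors this only slows the loop down by a constant; it doesn't change the fact that there are just 10,000 candidates.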

Finally, let's not forget the fact that SGX doesn't seem particularly secure, either[1], so it's even more important that the Signal developers be open about the server code.

[0]: https://signal.org/blog/secure-value-recovery/

[1]: https://blog.cryptographyengineering.com/2020/07/10/a-few-th...

replies(3): >>codeth+r3 >>wolver+Dc >>im3w1l+JB
2. codeth+r3[view] [source] 2021-04-07 19:38:05
>>codeth+(OP)
Addendum: Out of pure interest I just did a deep dive into the Signal-Android repository and tried to figure out where exactly the SGX remote attestation happens. I figured that somewhere in the app there should be a hash or something of the code running on the servers.

Unfortunately, `rg -i SGX` only yielded the following two pieces of code:

https://github.com/signalapp/Signal-Android/blob/master/libs...

https://github.com/signalapp/Signal-Android/blob/master/libs...

No immediate sign of a fixed hash. Instead, it looks like the code only verifies the certificate chain of some signature? How does this help if we want to verify the server is running a specific version of the code and we cannot trust the certificate issuer (whether it's Intel or Signal)?

I'm probably (hopefully) wrong here, so maybe someone else who's more familiar with the code could chime in here and explain this to me? :)

replies(2): >>corbiq+iq3 >>codeth+Uw3
3. wolver+Dc[view] [source] 2021-04-07 20:16:20
>>codeth+(OP)
> These days, lots of actual user data (i.e. != metadata) gets uploaded to the Signal servers[0] and encrypted with the user's Signal PIN (modulo some key derivation function). Unfortunately, many users choose an insecure PIN, not a passphrase with lots of entropy, so the derived encryption key isn't particularly strong.

If I understand what you are saying and what Signal says, Signal anticipates this problem and provides a solution that is arguably optimal:

https://signal.org/blog/secure-value-recovery/

My (limited) understanding is that the master key consists of the user PIN plus c2, a 256-bit code generated by a secure RNG, and that the Signal client uses a key derivation function to maximize the master key's entropy. c2 is stored in SGX on Signal's servers. If the user PIN is sufficiently secure, c2's security won't matter - an attacker with c2 still can't bypass the PIN. If the PIN is not sufficiently secure, as often happens, c2 stored in SGX might be the most secure way to augment it while still keeping the data recoverable.

I'd love to hear from a security specialist regarding this scheme. I'm not one and I had only limited time to study the link above.
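
If I read the blog post right, the combination looks roughly like this (a much-simplified sketch: the real scheme derives several subkeys, and the HMAC construction and names here are my own stand-ins, not Signal's exact KDF chain):

```python
import hashlib, hmac, secrets

def stretch_pin(pin: str, salt: bytes) -> bytes:
    # Stand-in for the client-side KDF (Signal uses Argon2).
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 1_000)

# c2: 256 random bits, generated client-side; a copy lives only
# inside the SGX enclave on Signal's servers.
c2 = secrets.token_bytes(32)

stretched = stretch_pin("4821", b"per-user-salt")

# Simplified combination step: the master key is strong even if the
# PIN is weak, *provided* c2 stays secret inside the enclave.
master_key = hmac.new(c2, stretched, hashlib.sha256).digest()

# Attacker with c2: back to brute-forcing the PIN space.
# Attacker without c2: faces a full 256 bits of entropy.
```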

replies(1): >>codeth+yr
4. codeth+yr[view] [source] [discussion] 2021-04-07 21:20:58
>>wolver+Dc
> If I understand what you are saying and what Signal says, Signal anticipates this problem and provides a solution that is arguably optimal

Yep, this is what I meant when I said "This is where a technology called Intel SGX comes into play". :)

And you're right, SGX is better than nothing if you accept that people use insecure PINs. My argument mainly was that

- the UI is designed in the worst possible way and actually encourages people to choose a short insecure PIN instead of recommending a longer one. This means that security guarantees suddenly rest entirely on SGX.

- SGX requires the server code to be verified and published (which it wasn't until yesterday). Without verification, it's all pointless.

> uses a key derivation function to maximize the master key's entropy

Nitpick: Technically, the KDF is deterministic, so it cannot change the entropy and – as the article says – you could still brute-force short PINs (if it weren't for SGX).

> I'd love to hear from a security specialist regarding this scheme. I'm not one and I had only limited time to study the link above.

Have a look at link [1] in my previous comment. :)
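
To illustrate the nitpick: a deterministic KDF can only ever reach as many distinct keys as there are PINs, no matter how wide its output is. Toy sketch (stand-in KDF with a deliberately low iteration count):

```python
import hashlib

def kdf(pin: str) -> bytes:
    # Deterministic stand-in KDF; low iteration count for speed.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), b"salt", 10)

# Each PIN maps to exactly one key, so only 10,000 of the 2**256
# possible outputs are ever reachable from a 4-digit PIN --
# stretching slows an attacker down but adds zero entropy.
keys = {kdf(f"{i:04d}") for i in range(10_000)}
print(len(keys))  # 10000
```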

5. im3w1l+JB[view] [source] 2021-04-07 22:13:42
>>codeth+(OP)
SGX is just the processor pinky swearing (signed with Intel keys) that everything is totally legit. Nation State Adversaries can and will take Intel's keys and lie.
replies(1): >>codeth+8D
6. codeth+8D[view] [source] [discussion] 2021-04-07 22:20:00
>>im3w1l+JB
SGX is also supposed to protect against Signal as a potential adversary, though, as well as against hackers. Or at least that's how I understood the blog article.
7. corbiq+iq3[view] [source] [discussion] 2021-04-08 19:49:33
>>codeth+r3
The hash of the code that is running in the enclave is called "MRENCLAVE" in SGX.

During remote attestation, the prover (here, Signal's server) creates a "quote" that proves it is running a genuine enclave. The quote also includes the MRENCLAVE value.

It sends the quote to the verifier (here, Signal-Android), which in turn sends it to the Intel Attestation Service (IAS). IAS verifies the quote, then signs its contents, thereby signing the MRENCLAVE value. The digital signature is sent back to the verifier.

Assuming that the verifier trusts IAS's public key (e.g., through a certificate), it can verify the digital signature, thus trust the MRENCLAVE value is valid.

The code where the verifier is verifying the IAS signature is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

The code where the MRENCLAVE value is checked is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

Hope this helps!
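
Schematically, the verifier's side boils down to two checks, something like this (a sketch only: an HMAC stands in for the RSA signature IAS really uses, the quote layout is made up, and the keys/values are hypothetical):

```python
import hashlib, hmac

# Hypothetical trusted values baked into the client at build time.
IAS_KEY = b"stand-in for Intel's attestation signing key"
EXPECTED_MRENCLAVE = hashlib.sha256(b"audited enclave build").digest()

def verify_attestation(quote: bytes, ias_signature: bytes) -> bytes:
    # Check 1: IAS's signature over the quote. Really an RSA signature
    # verified against a chain rooted at Intel's CA; an HMAC stands in
    # here to keep the sketch stdlib-only.
    expected = hmac.new(IAS_KEY, quote, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, ias_signature):
        raise ValueError("IAS signature invalid")
    # Check 2: extract MRENCLAVE (made-up layout: first 32 bytes of
    # the quote) and compare it against the pinned hash of the
    # audited server build.
    mrenclave = quote[:32]
    if not hmac.compare_digest(mrenclave, EXPECTED_MRENCLAVE):
        raise ValueError("server is not running the audited enclave")
    return mrenclave

# A quote from a server running the audited build passes; a
# backdoored build has a different MRENCLAVE and is rejected even
# with a valid IAS signature over its quote.
good_quote = EXPECTED_MRENCLAVE + b"...report body..."
good_sig = hmac.new(IAS_KEY, good_quote, hashlib.sha256).digest()
verify_attestation(good_quote, good_sig)
```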

replies(1): >>codeth+hK3
8. codeth+Uw3[view] [source] [discussion] 2021-04-08 20:28:45
>>codeth+r3
Addendum to the addendum: Whether there's a fixed hash inside the Signal app or not, here's one thing that crossed my mind last night that I have yet to understand:

Let's say we have a Signal-Android client C, and the Signal developers are running two Signal servers, A and B.

Suppose server A is running a publicly verified version of Signal-Server inside an SGX enclave, i.e. the source code is available on GitHub and has been audited, and server B is a rogue server, running a version of Signal-Server that comes with a backdoor. Server B is not running inside an SGX enclave but since it was set up by the Signal developers (or they were forced to do so) it does have the Signal TLS certificates needed to impersonate a legitimate Signal server (leaving aside SGX for a second). To simplify things, let's assume both servers' IPs are hard-coded in the Signal app and the client simply picks one at random.

Now suppose C connects to B to store its c2 value[0] and expects the server to return a remote attestation signature along with the response. What is stopping server B then from forwarding the client's request to A (in its original, encrypted and signed form), taking A's response (including the remote attestation signature) and sending it back to C? That way, server B could get its hands on the crucial secret value c2 and, as a consequence, later on brute-force the client's Signal PIN, without C ever noticing that B is not running the verified version of Signal-Server.

What am I missing here?

Obviously, Signal's cloud infrastructure is much more complicated than that, see [0], so the above example has to be adapted accordingly. In particular, according to the blog post, clients do remote attestation with certain "frontend servers" and behind the frontend servers there are a number of Raft nodes and they all do remote attestation with one another. So the real-life scenario would be a bit more complicated but I wanted to keep it simple. The point, in any case, is this: Since the Signal developers are in possession of all relevant TLS certificates and are also in control of the infrastructure, they can always MITM any of their legitimate endpoints (where the incoming TLS requests from clients get decrypted) and put a rogue server in between.

One possible way out might be to generate the TLS keys inside the SGX enclave, extract the public key through some public interface while keeping the private key in the encrypted RAM. This way, the public key can still be baked into the client apps but the private key cannot be used for attacks like the one above. However, for this the clients would once again need to know the code running on the servers and do remote attestation, which brings us back to my previous question – where in Signal-Android is that hash of the server code[1]?

[0]: https://signal.org/blog/secure-value-recovery/

[1]: More precisely, the code of the frontend enclave, since the blog post[0] states that it's the frontend servers that clients do the TLS handshake with:

> We also wanted to offload the client handshake and request validation process to stateless frontend enclaves that are designed to be disposable.

replies(1): >>kijiki+ME3
9. kijiki+ME3[view] [source] [discussion] 2021-04-08 21:15:34
>>codeth+Uw3
TLS is used, but there is another layer of encryption e2e from the client to inside the enclave. Your MITM server B can decrypt the TLS layer, but still can't see the actual traffic.
replies(1): >>codeth+HF3
10. codeth+HF3[view] [source] [discussion] 2021-04-08 21:22:02
>>kijiki+ME3
Just came back to post this but you beat me to it haha. Thank you! :) I just looked at the SGX 101 book and found the relevant piece: Client and enclave are basically doing a DH key exchange. https://sgx101.gitbook.io/sgx101/sgx-bootstrap/attestation#s...
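
For anyone else following along, a toy sketch of why the key exchange defeats the relay scenario from above: the enclave's DH public key is covered by the attested quote, so a relaying server only ever sees public values. (Finite-field DH with a tiny toy prime purely for illustration; real deployments use large standardized groups or elliptic curves.)

```python
import hashlib
import secrets

# Toy DH group (largest 32-bit prime as modulus; not secure).
P, G = 4294967291, 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def session_key(priv: int, peer_pub: int) -> bytes:
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(8, "big")).digest()

# The enclave generates its key pair *inside* SGX, and its public
# half is bound into the attested quote, so the client knows it is
# keying with the real enclave and not a relay.
enclave_priv, enclave_pub = keypair()
client_priv, client_pub = keypair()

# Client and enclave derive the same session key. A relaying server B
# sees only the two public values and cannot compute it, so it can
# strip the TLS layer yet still learns nothing about c2.
k_client = session_key(client_priv, enclave_pub)
k_enclave = session_key(enclave_priv, client_pub)
assert k_client == k_enclave
```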
11. codeth+hK3[view] [source] [discussion] 2021-04-08 21:45:39
>>corbiq+iq3
Thank you, this indeed helps a lot and I've now found the hashes![1] Moreover, thank you for providing me with some additional keywords that I could google for – this made it much easier to follow the SGX 101 "book"[0] and I think I've got a much better grasp now on how SGX works!

[0]: https://sgx101.gitbook.io/sgx101/sgx-bootstrap/attestation#r...

[1]: For other people interested in this matter: I've followed the path of the MRENCLAVE variable to the very end,

https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

where it gets injected by the build config. The build config, in turn, is available here:

https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

(The MRENCLAVE values can be found around line 120.)

replies(2): >>codeth+TY3 >>codeth+Ta4
12. codeth+TY3[view] [source] [discussion] 2021-04-08 23:36:27
>>codeth+hK3
Matthew Green's tweet[0] has now sent me down the rabbit hole again as I was trying to figure out where the IAS's certificate gets verified. (Without verification, the IAS's signed attestation over the enclave's quote would obviously be worthless.) I wanted to find out whether it gets pinned or whether Signal trusts a CA. It seems to be the latter:

https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

For completeness, let's also have a look at where the CA certificate(s) come(s) from. The PKIX parameters[1] are retrieved from the trustStore aka iasKeyStore which, as we follow the rabbit hole back up, gets instantiated here:

https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

As we can see, the input data comes from

https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...

`R.raw.ias` in line 20 refers to the file `app/src/main/res/raw/ias.store` in the repository, and as we can see from line 25 it's encrypted with the password "whisper" (which seems weird but it looks like this is a requirement[2] of the API). I don't have time to look at the file right now but it will probably (hopefully) contain only[3] Intel's CA certificate and not an even broader one. At least this is somewhat suggested by the link I posted earlier:

https://github.com/signalapp/Signal-Android/blob/master/libs...

In any case, it seems clear that the IAS's certificate itself doesn't get pinned. Not that it really matters at this point: whether the certificate gets pinned or not, an attacker only needs access to the IAS server, anyway, to steal its private key. Then again, trusting a CA (and thus any certificate derived from it) obviously widens the attack surface. OTOH it might be that Intel is running a large array of IAS servers that come and go and there is no guarantee on Intel's part that a pinned certificate will still be valid tomorrow. In this case, the Signal developers obviously can't do anything about that.

[0]: https://twitter.com/matthew_d_green/status/13802817973139742...

[1]: https://docs.oracle.com/javase/8/docs/api/index.html?java/se...

[2]: https://stackoverflow.com/questions/4065379/how-to-create-a-...

[3]: If it contains multiple CA certificates, each one of them will be trusted, compare [1].

13. codeth+Ta4[view] [source] [discussion] 2021-04-09 01:31:37
>>codeth+hK3
Correction: The MRENCLAVE value is the one on line 123, compare the signature of the `KbsEnclave` constructor -> https://github.com/signalapp/Signal-Android/blob/7394b4ac277...