- Whether or not Signal's server is open source has nothing to do with security. Signal's security rests on the user's knowledge that the open source client is encrypting messages end to end. With that knowledge, the server code could be anything, and Signal Inc. would still not be able to read your messages. In fact, having the server code open source adds absolutely nothing to this security model, because no matter how open source and secure the server code might be, Signal Inc. could still be logging messages upstream of it. The security rests only upon the open source client code. The server is completely orthogonal to security.
- Signal's decision to keep early development of the MobileCoin feature set private was valid. Signal is not your weekend node.js module with two stars on Github. When changes get made to the repo, they will be noticed. This might mess up their marketing plan, especially if they weren't even sure whether they were going to end up going live with the feature. Signal is playing in the big leagues, competing with messengers which have billions of dollars in marketing budget, will never ever be even the smallest amount open source, and are selling all your messages to the highest bidder. They can't afford to handicap themselves just to keep some guys on Hacker News happy.
- Signal's decision to keep development to the (private) master branch, instead of splitting the MobileCoin integration into a long-running feature branch is a valid choice. It's a lot of work to keep a feature branch up to date over years, and to split every feature up into the public and non-public components which then get committed to separate branches. This would greatly affect their architecture and slow down shipping for no benefit, given that the open sourceness of the server is orthogonal to security.
This is true only if you are exclusively concerned about your messages' content and not about the metadata. As we all know, though, the metadata is the valuable stuff.
There is a second reason it is wrong, though: These days, lots of actual user data (i.e., not metadata) gets uploaded to the Signal servers[0], encrypted with a key derived from the user's Signal PIN. Unfortunately, many users choose an insecure PIN rather than a high-entropy passphrase, so the derived encryption key isn't particularly strong. (IMO it doesn't help that it's called a PIN. They should rather call it an "ultra-secure master passphrase".) This is where a technology called Intel SGX comes into play: it provides remote attestation that the code running on the servers is the real deal, i.e., the trusted and verified code, and not code with an added backdoor. So yes, the server code does need to be published and verified.
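To illustrate why a short numeric PIN stays weak even behind a key derivation function, here is a toy sketch (standard-library PBKDF2 standing in for whatever KDF Signal actually uses; the salt, PIN, and iteration count are made up). No matter how expensive each derivation is, a 4-digit space can be enumerated offline in at most 10,000 attempts:

```python
import hashlib

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stand-in KDF; Signal's actual scheme and parameters differ.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 1_000)

salt = b"per-user-salt"
target = derive_key("4821", salt)  # key derived from a 4-digit PIN

# An offline attacker simply walks the entire PIN space:
recovered = next(
    f"{i:04d}" for i in range(10_000)
    if derive_key(f"{i:04d}", salt) == target
)
print(recovered)  # the PIN falls after at most 10,000 KDF calls
```

Raising the KDF cost only multiplies the attacker's constant factor; it cannot compensate for an entropy pool of 10,000 possibilities, which is why the ceremony around SGX-enforced guess limits exists at all.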
Finally, let's not forget that SGX doesn't seem particularly secure either[1], which makes it all the more important that the Signal developers be open about the server code.
[0]: https://signal.org/blog/secure-value-recovery/
[1]: https://blog.cryptographyengineering.com/2020/07/10/a-few-th...
Unfortunately, `rg -i SGX` only yielded the following two pieces of code:
https://github.com/signalapp/Signal-Android/blob/master/libs...
https://github.com/signalapp/Signal-Android/blob/master/libs...
No immediate sign of a fixed hash. Instead, it looks like the code only verifies the certificate chain of some signature? How does this help if we want to verify that the server is running a specific version of the code and we cannot trust the certificate issuer (whether it's Intel or Signal)?
I'm probably (hopefully) wrong here, so maybe someone else who's more familiar with the code could chime in here and explain this to me? :)
During remote attestation, the prover (here, Signal's server) creates a "quote" that proves it is running a genuine enclave. The quote also includes the MRENCLAVE value.
It sends the quote to the verifier (here, Signal-Android), which in turn sends it to the Intel Attestation Service (IAS). IAS verifies the quote and then signs its contents, thereby also signing the MRENCLAVE value. The digital signature is sent back to the verifier.
Assuming the verifier trusts IAS's public key (e.g., through a certificate), it can verify the digital signature and thus trust that the MRENCLAVE value is valid.
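The verifier's side of the flow above can be modeled with a toy sketch (Python; an HMAC key stands in for IAS's real RSA signing key, and all names and values are invented). The important ordering is the same as in the real client: first check that IAS vouched for the quote, then check that the MRENCLAVE inside the quote matches the build you audited:

```python
import hashlib
import hmac
import json

# Toy model: the real IAS uses an RSA key pair; a shared HMAC key
# stands in for "a signature the verifier can check".
IAS_SIGNING_KEY = b"model-of-ias-signing-key"
EXPECTED_MRENCLAVE = hashlib.sha256(b"the-enclave-build-we-audited").hexdigest()

def ias_sign(quote: dict) -> bytes:
    """What IAS does after verifying the quote: sign its contents."""
    body = json.dumps(quote, sort_keys=True).encode()
    return hmac.new(IAS_SIGNING_KEY, body, hashlib.sha256).digest()

def verify_attestation(quote: dict, signature: bytes) -> bool:
    body = json.dumps(quote, sort_keys=True).encode()
    expected = hmac.new(IAS_SIGNING_KEY, body, hashlib.sha256).digest()
    # 1. The signature proves IAS vouched for exactly this quote.
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. The MRENCLAVE in the quote must match the build we audited.
    return quote["mrenclave"] == EXPECTED_MRENCLAVE

quote = {"mrenclave": EXPECTED_MRENCLAVE, "nonce": "abc123"}
sig = ias_sign(quote)
print(verify_attestation(quote, sig))                           # True
print(verify_attestation({**quote, "mrenclave": "evil"}, sig))  # False
```

Note that tampering with the MRENCLAVE already fails at step 1, because the signature covers the whole quote; the explicit MRENCLAVE comparison in step 2 is what ties the attestation to a specific, audited enclave build rather than to any genuine enclave.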
The code where the verifier is verifying the IAS signature is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
The code where the MRENCLAVE value is checked is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
Hope this helps!
[0]: https://sgx101.gitbook.io/sgx101/sgx-bootstrap/attestation#r...
[1]: For other people interested in this matter: I've followed the path of the MRENCLAVE variable to the very end,
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
where it gets injected by the build config. The build config, in turn, is available here:
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
(The MRENCLAVE values can be found around line 120.)
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
For completeness, let's also look at where the CA certificate(s) come from. The PKIX parameters[1] are retrieved from the trustStore, aka iasKeyStore, which, as we follow the rabbit hole back up, gets instantiated here:
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
As we can see, the input data comes from
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
`R.raw.ias` in line 20 refers to the file `app/src/main/res/raw/ias.store` in the repository, and as we can see from line 25, it's encrypted with the password "whisper" (which seems weird, but it looks like this is a requirement[2] of the API). I don't have time to look at the file right now, but it will probably (hopefully) contain only[3] Intel's CA certificate and not an even broader one. At least this is somewhat suggested by the link I posted earlier:
https://github.com/signalapp/Signal-Android/blob/master/libs...
In any case, it seems clear that the IAS certificate itself doesn't get pinned. Not that it matters much at this point: whether or not the certificate is pinned, an attacker with access to the IAS server could steal its private key anyway. Then again, trusting a CA (and thus any certificate derived from it) obviously widens the attack surface. OTOH, it might be that Intel runs a large array of IAS servers that come and go, with no guarantee on Intel's part that a pinned certificate will still be valid tomorrow. In that case, the Signal developers obviously can't do anything about it.
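The pinning-versus-CA-trust distinction can be sketched like this (toy certificates and fingerprints, no real PKIX validation): a pinned check accepts exactly one known certificate, while CA trust accepts anything the CA ever issues, including a certificate an attacker obtains from that CA later:

```python
import hashlib

def fingerprint(cert_bytes: bytes) -> str:
    return hashlib.sha256(cert_bytes).hexdigest()

# Toy "certificates"; a real check would verify issuer signatures.
intel_ca = b"Intel IAS Root CA"
ias_cert = b"ias.intel.example, issued by Intel IAS Root CA"
rogue_cert = b"attacker.example, issued by Intel IAS Root CA"

PINNED = {fingerprint(ias_cert)}       # pinning: exactly this certificate
TRUSTED_CAS = {fingerprint(intel_ca)}  # CA trust: anything this CA issues

def accepted_by_pin(cert: bytes) -> bool:
    return fingerprint(cert) in PINNED

def accepted_by_ca(cert: bytes, issuer: bytes) -> bool:
    # Stand-in for full PKIX chain validation.
    return fingerprint(issuer) in TRUSTED_CAS

print(accepted_by_pin(ias_cert), accepted_by_pin(rogue_cert))                   # True False
print(accepted_by_ca(ias_cert, intel_ca), accepted_by_ca(rogue_cert, intel_ca)) # True True
```

The trade-off is exactly the one described above: pinning narrows the attack surface to a single key but breaks whenever the operator rotates certificates, while CA trust survives rotation at the cost of trusting every certificate the CA ever signs.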
[0]: https://twitter.com/matthew_d_green/status/13802817973139742...
[1]: https://docs.oracle.com/javase/8/docs/api/index.html?java/se...
[2]: https://stackoverflow.com/questions/4065379/how-to-create-a-...
[3]: If it contains multiple CA certificates, each one of them will be trusted, compare [1].