https://github.com/signalapp/Signal-Server/commit/95f0ce1816...
For normal development, I am advocating an always-auditable runtime that, by design, runs only public source code: https://observablehq.com/@endpointservices/serverless-cells
Before sending data to a URL, you can look up the source code first, as the URL encodes the source location.
There is always the risk I decided to embed a trojan in the runtime (despite it being open source). However, if I am a service provider for 100k customers built upon the idea of a transparent cloud, then compromising the trust of one customer would cause loss of business across all customers. Thus, from a game-theoretic perspective, our incentives should align.
I think running public source code, which does not preclude injecting secrets and keeping data private, is something that normal development teams can do. No PhDs necessary, just normal development.
Follow me on https://twitter.com/tomlarkworthy if you want to see this different way of approaching privacy: always-auditable, source-available server-side implementations. You can trust that services implemented this way are safe, because you can always see how they process data. Even if you cannot be bothered to audit their source, the sheer fact that someone can inoculates you against bad-faith implementations.
I am building a transparent cloud. Everything is encoded in public notebooks and runs open-source https://observablehq.com/collection/@endpointservices/servic... There are other benefits, like being able to fork my implementations and customize, but primarily I am doing this for trust through transparency reasons.
People in the user forum (https://community.signalusers.org/t/where-is-new-signal-serv...) and in other places on the internet were upset for months because the server's source code wasn't being updated anymore. At the same time, Signal regularly tweeted that "all they do is 100% open source", even at a point when no server source code had been released for almost a year.
Just 2 days ago this was getting picked up by some larger tech news platforms:
https://www.golem.de/news/crypto-messenger-signal-server-nic...
https://www.androidpolice.com/2021/04/06/it-looks-like-signa...
It's normal that Signal ignores its users, but apparently they didn't even reply to press inquiries about the source code. All it would have taken is a clear statement like "we're working on a cool new feature and will release the sources once that's ready, please bear with us". Instead, they left people speculating for months.
This communication strategy, combined with the cryptocurrency announcement, may cause serious harm to Signal's reputation.
Note that the endpoint does a DYNAMIC lookup of source code, so you can reassure yourself that it really is executing dynamically looked-up code just by providing your own source code and watching it run.
It might be more obvious that the runtime itself does very little if you look at its source: https://github.com/endpointservices/serverlesscells
The clever bits that actually implement services are all in the notebooks.
Not officially, but see https://news.ycombinator.com/item?id=26725117. They stopped publishing code when they started on the cryptocurrency integration.
> Signal had to verify that MobileCoin worked before exposing their users to the technology. That process took a long time because MobileCoin has lots of complicated moving parts.
> With respect to price, no one truly understands the market. It’s impossible to predict future price.
- https://twitter.com/mobilecoin/status/1379830618876338179
Reeks of utter BS. As the reply to that tweet says, features can be developed while being kept switched off behind a flag.
cc: @dang
[0] https://news.ycombinator.com/item?id=26345937
[1] The title is the only thing worth reading in this pile of speculation and hand waving.
> The first payments protocol we’ve added support for is a privacy focused payments network called MobileCoin, which has *its own currency, MOB*.
(Emphasis mine.)
[0] https://signal.org/blog/help-us-test-payments-in-signal/
In before "but it's not free as in no cost". That's why big corps will always fuck over the normies. As it stands, one cannot use the internet without either giving away their privacy or learning a lot about computers and how they work and how to use them.
The majority chose the "I don't care, give me the shiny app" route, and they fucked us all over by doing so. There's no right to easy, privacy-friendly computing. There's only the harsh reality that behind friendly blue and rainbow-colored companies sit people who will sell a digital recreation of yourself to anyone who cares to pay, and give you a few gigs of free e-mail space and a shiny app for it.
This is true only when you are exclusively concerned about your messages' content but not about the metadata. As we all know, though, the metadata is the valuable stuff.
There is a second reason it is wrong, though: These days, lots of actual user data (i.e. != metadata) gets uploaded to the Signal servers[0] and encrypted with the user's Signal PIN (modulo some key derivation function). Unfortunately, many users choose an insecure PIN, not a passphrase with lots of entropy, so the derived encryption key isn't particularly strong. (IMO it doesn't help that it's called a PIN. They should rather call it "ultra-secure master passphrase".) This is where a technology called Intel SGX comes into play: It provides remote attestation that the code running on the servers is the real deal, i.e. the trusted and verified code, and not the code with the added backdoor. So yes, the server code does need to be published and verified.
Finally, let's not forget the fact that SGX doesn't seem particularly secure, either[1], so it's even more important that the Signal developers be open about the server code.
[0]: https://signal.org/blog/secure-value-recovery/
[1]: https://blog.cryptographyengineering.com/2020/07/10/a-few-th...
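To make the weak-PIN point above concrete, here is a rough sketch of an offline brute force of a 4-digit PIN, assuming the attacker has the salt and a way to test candidate keys (the KDF and its parameters here are placeholders, not Signal's actual construction):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Hypothetical illustration: offline brute force of a 4-digit PIN.
// PBKDF2 with 10k iterations is a placeholder, NOT Signal's real KDF settings.
public class PinBruteForce {
    static byte[] derive(String pin, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(pin.toCharArray(), salt, 10_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        byte[] target = derive("4821", salt); // stands in for the victim's unknown PIN

        // Only 10,000 candidates: even a deliberately slow KDF can't rescue this.
        for (int i = 0; i < 10_000; i++) {
            String guess = String.format("%04d", i);
            if (Arrays.equals(derive(guess, salt), target)) {
                System.out.println("PIN recovered: " + guess);
                break;
            }
        }
    }
}
```

Ten thousand candidates is nothing, however slow the KDF, which is why the design has to lean on SGX to guard additional secret material and to limit guesses.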
For Android at least, builds are reproducible: https://signal.org/blog/reproducible-android/ (it would be neat if there were one or more third-party CIs that also checked that the CI-built app reproduces the one on the Google Play Store – or maybe there already are?)
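A third-party check could be fairly simple: build the APK from source, download the Play Store APK, and compare the two entry by entry while skipping the signing metadata (a rough sketch of the idea, not any official tooling):

```java
import java.security.MessageDigest;
import java.util.Enumeration;
import java.util.HexFormat;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Sketch of a reproducibility check: compare a locally built APK against the
// Play Store APK entry by entry, skipping META-INF/ (only the store copy is signed).
public class ApkCompare {
    static String sha256(ZipFile zip, ZipEntry entry) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] buf = new byte[8192];
        try (var in = zip.getInputStream(entry)) {
            int n;
            while ((n = in.read(buf)) > 0) md.update(buf, 0, n);
        }
        return HexFormat.of().formatHex(md.digest());
    }

    public static void main(String[] args) throws Exception {
        try (ZipFile mine = new ZipFile(args[0]); ZipFile store = new ZipFile(args[1])) {
            boolean ok = true;
            for (Enumeration<? extends ZipEntry> e = mine.entries(); e.hasMoreElements(); ) {
                ZipEntry entry = e.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) continue;
                ZipEntry other = store.getEntry(entry.getName());
                if (other == null || !sha256(mine, entry).equals(sha256(store, other))) {
                    System.out.println("MISMATCH: " + entry.getName());
                    ok = false;
                }
            }
            System.out.println(ok ? "APK contents match" : "APKs differ");
        }
    }
}
```

(A full check would also flag entries that exist only in the store copy.)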
Unfortunately, `rg -i SGX` only yielded the following two pieces of code:
https://github.com/signalapp/Signal-Android/blob/master/libs...
https://github.com/signalapp/Signal-Android/blob/master/libs...
No immediate sign of a fixed hash. Instead, it looks like the code only verifies the certificate chain of some signature? How does this help if we want to verify the server is running a specific version of the code and we cannot trust the certificate issuer (whether it's Intel or Signal)?
I'm probably (hopefully) wrong here, so maybe someone else who's more familiar with the code could chime in here and explain this to me? :)
These are "valid" reasons for keeping the source code private for a year? By whose book? Yours? Certainly not by mine. I wouldn't let any other business abscond from its promise to keep open source open source in spirit and practice, why would I let Signal?
This is some underhanded, sneaky maneuvering I'm more used to seeing from the Amazons and the Facebooks of the world. These are not the actions of an ethically Good organization. And as has already been demonstrated by Moxie in his lust to power, he's more than capable of deviance. On Wire vs Signal: "He claimed that we had copied his work and demanded that we either recreate it without looking at his code, or take a license from him and add his copyright header to our code. We explained that we have not copied his work. His behavior was concerning and went beyond a reasonable business exchange — he claimed to have recorded a phone call with me without my knowledge or consent, and he threatened to go public with information about alleged vulnerabilities in Wire’s implementation that he refused to identify." [1]
These are not the machinations of the crypto-idealist, scrappy underdog for justice we are painted by such publications as the New Yorker. This is some straight up cartoon villain twirling their moustache plotting.
So now I'm being sold on a business vision that was just so hot the public's eyes couldn't bear it? We're talking about a pre-mined cryptocurrency that its inventors are laughing themselves to the bank with.
At least Pavel Durov of Telegram is honest with his users. At least we have Element doing their work in the open for all to see with the Matrix protocol. There are better, more ethical, less shady organizations out there who we can and ought to be putting our trust in, not this freakshow of a morally-compromised shamble.
[1] https://medium.com/@wireapp/axolotl-and-proteus-788519b186a7
If I understand what you are saying and what Signal says, Signal anticipates this problem and provides a solution that is arguably optimal:
https://signal.org/blog/secure-value-recovery/
My (limited) understanding is that the master key is derived from the user's PIN plus c2, a 256-bit value generated by a secure RNG, via a key derivation function in the Signal client. c2 is stored in SGX on Signal's servers. If the user's PIN is sufficiently strong, c2's security won't matter - an attacker with c2 still can't bypass the PIN. If the PIN is not sufficiently strong, as often happens, c2 stored in SGX might be the most secure way to augment it while still keeping the data recoverable.
I'd love to hear from a security specialist regarding this scheme. I'm not one and I had only limited time to study the link above.
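For illustration, here is a minimal sketch of what such a two-part derivation could look like (purely hypothetical construction and parameters, PBKDF2 plus an HMAC step, not Signal's actual algorithm):

```java
import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical sketch of "master key = KDF(PIN) combined with c2".
// PBKDF2 and HMAC are stand-ins; Signal's real scheme and parameters may differ.
public class MasterKeySketch {
    public static void main(String[] args) throws Exception {
        // c2: 256-bit secret from a CSPRNG, stored server-side inside SGX.
        byte[] c2 = new byte[32];
        new SecureRandom().nextBytes(c2);

        // Stretch the low-entropy PIN client-side with a slow KDF.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec("1234".toCharArray(), salt, 100_000, 256);
        byte[] stretchedPin = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                              .generateSecret(spec).getEncoded();

        // Combine: an attacker needs BOTH c2 (guarded by the enclave) and the PIN.
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(c2, "HmacSHA256"));
        byte[] masterKey = hmac.doFinal(stretchedPin);
        System.out.println("Derived master key: " + masterKey.length + " bytes");
    }
}
```

The point of the split: if c2 leaks, a strong PIN still protects you; if the PIN is weak, everything hinges on the enclave guarding c2 and limiting retrieval attempts.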
The CEO of Signal Messenger LLC was/is the CTO of MobileCoin.
See https://www.reddit.com/r/signal/comments/mm6nad/bought_mobil... and https://www.wired.com/story/signal-mobilecoin-payments-messa...
The contrarian dynamic strikes again: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
Btw, the Signal Foundation is a non-profit organization that benefits from community goodwill based on an open-source ethos. So people are critical when its software is closed source.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
We detached this subthread from https://news.ycombinator.com/item?id=26727160.
SGX running on centralized servers turns that calculus on its head by concentrating the benefits of the hack all in one place.
During remote attestation, the prover (here, Signal's server) creates a "quote" that proves it is running a genuine enclave. The quote also includes the MRENCLAVE value.
It sends the quote to the verifier (here, Signal-Android), which in turn sends it to the Intel Attestation Service (IAS). IAS verifies the quote, then signs the content of the quote, thus signing the MRENCLAVE value. The digital signature is sent back to the verifier.
Assuming that the verifier trusts IAS's public key (e.g., through a certificate), it can verify the digital signature, thus trust the MRENCLAVE value is valid.
The code where the verifier is verifying the IAS signature is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
The code where the MRENCLAVE value is checked is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
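For intuition, here is a schematic of those two verifier-side checks (class and method names are made up, report parsing is omitted, and real code additionally validates the IAS certificate chain, timestamps, nonces and quote status):

```java
import java.nio.charset.StandardCharsets;
import java.security.Signature;
import java.security.cert.X509Certificate;

// Schematic of the verifier side of IAS-based remote attestation, NOT the actual
// Signal-Android code: (1) check Intel's signature over the attestation report,
// (2) check that the attested MRENCLAVE matches the audited build's measurement.
public class AttestationCheckSketch {
    static final String EXPECTED_MRENCLAVE = "<mrenclave-of-the-audited-enclave-build>";

    static boolean verify(String iasReportJson, byte[] iasSignature,
                          X509Certificate iasSigningCert, String mrenclaveFromReport)
            throws Exception {
        // 1) Did Intel's attestation service really sign this report?
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(iasSigningCert.getPublicKey());
        sig.update(iasReportJson.getBytes(StandardCharsets.UTF_8));
        if (!sig.verify(iasSignature)) return false;

        // 2) Is the attested enclave the one we expect, i.e. does the measurement
        //    match the hash of the publicly audited enclave build?
        return EXPECTED_MRENCLAVE.equalsIgnoreCase(mrenclaveFromReport);
    }
}
```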
Hope this helps!
Let's say we have a Signal-Android client C, and the Signal developers are running two Signal servers, A and B.
Suppose server A is running a publicly verified version of Signal-Server inside an SGX enclave, i.e. the source code is available on GitHub and has been audited, and server B is a rogue server, running a version of Signal-Server that comes with a backdoor. Server B is not running inside an SGX enclave but since it was set up by the Signal developers (or they were forced to do so) it does have the Signal TLS certificates needed to impersonate a legitimate Signal server (leaving aside SGX for a second). To simplify things, let's assume both servers' IPs are hard-coded in the Signal app and the client simply picks one at random.
Now suppose C connects to B to store its c2 value[0] and expects the server to return a remote attestation signature along with the response. What is stopping server B then from forwarding the client's request to A (in its original, encrypted and signed form), taking A's response (including the remote attestation signature) and sending it back to C? That way, server B could get its hands on the crucial secret value c2 and, as a consequence, later on brute-force the client's Signal PIN, without C ever noticing that B is not running the verified version of Signal-Server.
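Sketched out (hypothetical names, transport details omitted), the relay I have in mind is just:

```java
// Hypothetical sketch of the relay described above. B never breaks any crypto:
// it simply pipes C's request to A and returns A's attested reply unchanged.
class LegitimateServerA {
    byte[] handle(byte[] request) {
        // Real enclave: produces a response plus a valid quote / IAS signature.
        return request; // stand-in
    }
}

public class RogueServerB {
    private final LegitimateServerA a = new LegitimateServerA();

    byte[] handle(byte[] requestFromC) {
        // B terminates TLS with Signal's certificates, so it sees whatever
        // plaintext the client sends at this layer and can keep a copy...
        byte[] attestedResponse = a.handle(requestFromC);
        // ...while C's attestation check still passes, because the quote is A's.
        return attestedResponse;
    }
}
```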
What am I missing here?
Obviously, Signal's cloud infrastructure is much more complicated than that, see [0], so the above example has to be adapted accordingly. In particular, according to the blog post, clients do remote attestation with certain "frontend servers" and behind the frontend servers there are a number of Raft nodes and they all do remote attestation with one another. So the real-life scenario would be a bit more complicated but I wanted to keep it simple. The point, in any case, is this: Since the Signal developers are in possession of all relevant TLS certificates and are also in control of the infrastructure, they can always MITM any of their legitimate endpoints (where the incoming TLS requests from clients get decrypted) and put a rogue server in between.
One possible way out might be to generate the TLS keys inside the SGX enclave, extract the public key through some public interface while keeping the private key in the encrypted RAM. This way, the public key can still be baked into the client apps but the private key cannot be used for attacks like the one above. However, for this the clients would once again need to know the code running on the servers and do remote attestation, which brings us back to my previous question – where in Signal-Android is that hash of the server code[1]?
[0]: https://signal.org/blog/secure-value-recovery/
[1]: More precisely, the code of the frontend enclave, since the blog post[0] states that it's the frontend servers that clients do the TLS handshake with:
> We also wanted to offload the client handshake and request validation process to stateless frontend enclaves that are designed to be disposable.
[0]: https://sgx101.gitbook.io/sgx101/sgx-bootstrap/attestation#r...
[1]: For other people interested in this matter: I've followed the path of the MRENCLAVE variable to the very end,
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
where it gets injected by the build config. The build config, in turn, is available here:
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
(The MRENCLAVE values can be found around line 120.)
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
For completeness, let's also have a look at where the CA certificate(s) come(s) from. The PKIX parameters[1] are retrieved from the trustStore aka iasKeyStore which, as we follow the rabbit hole back up, gets instantiated here:
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
As we can see, the input data comes from
https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
`R.raw.ias` in line 20 refers to the file `app/src/main/res/raw/ias.store` in the repository, and as we can see from line 25, it's encrypted with the password "whisper" (which seems weird, but it looks like this is a requirement[2] of the API). I don't have time to look at the file right now, but it will probably (hopefully) contain only[3] Intel's CA certificate and not an even broader one. At least this is somewhat suggested by the link I posted earlier:
https://github.com/signalapp/Signal-Android/blob/master/libs...
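For anyone who wants to check, dumping the store's trusted certificates is quick. A sketch, assuming the file is a BKS keystore (typical on Android; it needs the BouncyCastle provider on a desktop JVM, otherwise try KeyStore.getDefaultType()):

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.Security;
import java.security.cert.X509Certificate;
import java.util.Enumeration;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

// Dump the certificates trusted by ias.store (store password "whisper", per the code above).
// Assumption: BKS format, hence the BouncyCastle provider; adjust if it's a plain JKS.
public class DumpIasStore {
    public static void main(String[] args) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        KeyStore ks = KeyStore.getInstance("BKS");
        try (FileInputStream in = new FileInputStream("app/src/main/res/raw/ias.store")) {
            ks.load(in, "whisper".toCharArray());
        }
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements(); ) {
            String alias = aliases.nextElement();
            X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
            System.out.println(alias + " -> " + cert.getSubjectX500Principal());
        }
    }
}
```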
In any case, it seems clear that the IAS's certificate itself doesn't get pinned. Not that it really matters at this point: whether the certificate gets pinned or not, an attacker only needs access to the IAS server, anyway, to steal its private key. Then again, trusting a CA (and thus any certificate derived from it) obviously widens the attack surface. OTOH it might be that Intel is running a large array of IAS servers that come and go, and there is no guarantee on Intel's part that a pinned certificate will still be valid tomorrow. In this case, the Signal developers obviously can't do anything about that.
[0]: https://twitter.com/matthew_d_green/status/13802817973139742...
[1]: https://docs.oracle.com/javase/8/docs/api/index.html?java/se...
[2]: https://stackoverflow.com/questions/4065379/how-to-create-a-...
[3]: If it contains multiple CA certificates, each one of them will be trusted, compare [1].