zlacker

[parent] [thread] 25 comments
1. red_tr+(OP)[view] [source] 2021-04-07 15:08:20
Is there any mechanism to validate that the code running on Signal's servers is the same as on Github?
replies(11): >>knocte+w >>Someon+z >>Foxbor+S >>mikece+Y >>monoca+31 >>danpal+71 >>edejon+p1 >>hoopho+Q1 >>gorkis+i2 >>jivetu+v2 >>codeth+L41
2. knocte+w[view] [source] 2021-04-07 15:10:40
>>red_tr+(OP)
No.
3. Someon+z[view] [source] 2021-04-07 15:10:49
>>red_tr+(OP)
How would that work? You'd be layering trust on trust: if they're willing to lie about one thing, they're willing to lie about the confirmation of that same thing.

Unless you're going to hire some independent auditor (whom you still have to trust), it seems logically problematic.

replies(1): >>madars+41
4. Foxbor+S[view] [source] 2021-04-07 15:12:51
>>red_tr+(OP)
There isn't, but people are working on getting us there. The first project that comes to mind is "System Transparency".

https://system-transparency.org/

5. mikece+Y[view] [source] 2021-04-07 15:13:22
>>red_tr+(OP)
Seems there should be an API endpoint, similar to a health-check endpoint, that allows one to validate that the code on the server matches what's in GitHub. How exactly that would work is beyond me since I'm not a cryptographer, but it seems like an easy way to let developers, auditors, and the curious check that the code on the server and on GitHub match.
replies(3): >>monoca+q1 >>beacon+K1 >>jhugo+U1
6. monoca+31[view] [source] 2021-04-07 15:13:45
>>red_tr+(OP)
That's basically the same problem as DRM, so no, you can't verify that someone is running only code you want them to run against data you gave them, on hardware they own.
replies(1): >>lxgr+Sv
7. madars+41[view] [source] [discussion] 2021-04-07 15:13:49
>>Someon+z
SGX enclaves can attest to the code they are running, so you don't exactly need to take Signal's word on faith.
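For illustration, a rough sketch of the client-side check this enables; parse_quote() and verify_quote_signature() are hypothetical stand-ins for Intel's real attestation flow (IAS/DCAP), and none of this is Signal's actual code:

    # Rough sketch only. parse_quote() and verify_quote_signature() are
    # hypothetical stand-ins for Intel's attestation flow; EXPECTED_MRENCLAVE
    # would come from a reproducible build of the published enclave source.
    import hashlib

    EXPECTED_MRENCLAVE = bytes.fromhex("00" * 32)  # placeholder measurement

    def attest_server(quote: bytes, session_binding: bytes) -> bool:
        if not verify_quote_signature(quote):       # hypothetical: checks Intel's cert chain
            return False
        report = parse_quote(quote)                 # hypothetical: extracts the report body
        if report.mrenclave != EXPECTED_MRENCLAVE:  # code-identity check
            return False
        # Bind the attestation to this session so it can't be replayed.
        return report.report_data[:32] == hashlib.sha256(session_binding).digest()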
replies(2): >>eptcyk+b2 >>Someon+Xo
8. danpal+71[view] [source] 2021-04-07 15:14:01
>>red_tr+(OP)
I think the argument would be that with end-to-end encryption this is unnecessary, which is good because it's impossible.

There's a counter-argument that there is still useful metadata a server can glean from its users, but it's certainly minimised with a good protocol... like the Signal protocol.

replies(1): >>cortes+mu
9. edejon+p1[view] [source] 2021-04-07 15:15:09
>>red_tr+(OP)
Yes.

Auditors

Trusted Enclaves (but then you trust Intel)

Signed chain of I/O with full semantics specified (blockchain style).
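
A minimal sketch of that last option (my own illustration, not anything from Signal's repos): a hash-chained log in which each entry commits to the previous one and is signed, so an auditor can replay and verify the whole history. An HMAC stands in for a real signature here, and the log format is invented.

    # Minimal sketch of a signed, hash-chained I/O log ("blockchain style").
    # Illustration only; the log format and key handling are invented.
    import hashlib, hmac, json

    SIGNING_KEY = b"server-signing-key"  # stand-in for a real private key

    def append_entry(log, request, response):
        prev = log[-1]["hash"] if log else "0" * 64
        body = json.dumps({"prev": prev, "req": request, "resp": response}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        log.append({
            "prev": prev, "req": request, "resp": response, "hash": digest,
            # HMAC stands in for a real signature (e.g. Ed25519) over the entry hash.
            "sig": hmac.new(SIGNING_KEY, digest.encode(), "sha256").hexdigest(),
        })

    def verify_chain(log):
        # An auditor recomputes every hash and checks each entry links to the last.
        prev = "0" * 64
        for e in log:
            body = json.dumps({"prev": prev, "req": e["req"], "resp": e["resp"]}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True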

10. monoca+q1[view] [source] [discussion] 2021-04-07 15:15:11
>>mikece+Y

   validate_endpoint() {
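     // nothing stops a dishonest server from hashing a pristine copy of
     // the source rather than the code it is actually executing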
     return hash_against_other_file_not_exe();
   }
11. beacon+K1[view] [source] [discussion] 2021-04-07 15:16:46
>>mikece+Y
If you assume that the server can lie to you, then it's physically impossible: any query could be answered by interrogating a copy of the GitHub version of the server and returning its answer.
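A toy illustration of that point, with a made-up /verify route (nothing here corresponds to Signal's real API): the modified server simply relays verification queries to an unmodified copy built from the published source.

    # Toy illustration only; the /verify route and ports are invented.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    REFERENCE = "http://localhost:9000"  # unmodified server built from the GitHub source

    class MaliciousHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/verify"):
                # Answer any verification challenge by asking the honest copy
                # and echoing its reply back to the client.
                answer = urlopen(REFERENCE + self.path).read()
            else:
                # ...serve real traffic with the modified, untrusted code...
                answer = b"modified behaviour here"
            self.send_response(200)
            self.end_headers()
            self.wfile.write(answer)

    if __name__ == "__main__":
        HTTPServer(("", 8080), MaliciousHandler).serve_forever()
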
12. hoopho+Q1[view] [source] 2021-04-07 15:17:04
>>red_tr+(OP)
No.

If Signal /were/ federated, it would be a strong hint that the server code stays the same.

And even if it's not the same, people would be able to run their own trusted servers.

replies(1): >>ViViDb+SD
13. jhugo+U1[view] [source] [discussion] 2021-04-07 15:17:09
>>mikece+Y
How could that possibly work? The API endpoint of a malicious modified server could just return whatever the API endpoint of the non-malicious non-modified server returns.
14. eptcyk+b2[view] [source] [discussion] 2021-04-07 15:18:31
>>madars+41
Except SGX enclaves are horribly broken.
replies(1): >>monoca+z3
15. gorkis+i2[view] [source] 2021-04-07 15:19:00
>>red_tr+(OP)
I am curious how this could even possibly be done.

As far as my understanding goes, it's hardly possible to even verify that a compiled binary is a faithful representation of the source instructions, let alone that it will execute that way when run through a modern OS and CPU pipeline.

I would think the objective here is more about releasing server code that can be run independently, in a way that 1) doesn't involve Signal's infrastructure and 2) allows the client/server interactions to be audited such that trust in the server side is unnecessary, regardless of what code it may or may not be running.

16. jivetu+v2[view] [source] 2021-04-07 15:19:29
>>red_tr+(OP)
Yes, via SGX remote attestation.

https://sgx101.gitbook.io/sgx101/sgx-bootstrap/attestation

17. monoca+z3[view] [source] [discussion] 2021-04-07 15:23:41
>>eptcyk+b2
Like, does an SGX enclave attest that Meltdown is patched in microcode? That's one way to pull the keys out.

The recent-ish work to get read/write access to some Intel CPUs' microcode can probably break SGX too. I wouldn't be surprised if the ME code-execution flaws could be used that way as well.

18. Someon+Xo[view] [source] [discussion] 2021-04-07 16:55:55
>>madars+41
That isn't a solution to the problem being discussed (a provider's server code being verifiable by end users). I'm quite confused by the suggestion that it could be/is.
19. cortes+mu[view] [source] [discussion] 2021-04-07 17:22:27
>>danpal+71
Wait, how would end-to-end encryption help with this problem at all? I agree that it is impossible (currently), but I'm not sure how E2E helps anything.

E2E encryption only helps you verify WHO you are connecting to, not what they are doing with your connection once it is established.

replies(2): >>gsich+gy >>iudqno+O61
20. lxgr+Sv[view] [source] [discussion] 2021-04-07 17:28:50
>>monoca+31
Yet DRM does exist. (Yes, these schemes usually end up getting broken at some point, but so does other software.)

The problem is more generally called trusted computing, with Intel SGX being an implementation (albeit one with a pretty bad track record).

replies(1): >>monoca+qo4
21. gsich+gy[view] [source] [discussion] 2021-04-07 17:39:23
>>cortes+mu
Because it doesn't matter what the server does in terms of message content.
22. ViViDb+SD[view] [source] [discussion] 2021-04-07 18:00:53
>>hoopho+Q1
Federation pretty much guarantees the opposite: there would likely be many servers running many different versions, and you'd have no way of knowing which can be trusted. By design, it distributes trust, which means there are more parties to trust.

Anyway, Signal is designed to handle all the private bits on the client side with e2ee, so that you have to put as little trust in the server as possible.

23. codeth+L41[view] [source] 2021-04-07 19:49:46
>>red_tr+(OP)
As others have already mentioned, there is Intel SGX, and the Signal developers indeed say they use it; see

https://news.ycombinator.com/item?id=26729786

24. iudqno+O61[view] [source] [discussion] 2021-04-07 19:55:50
>>cortes+mu
Because the other end in E2E is your friend's phone, not a server. We call end-to-server encryption "in-flight" encryption.
replies(1): >>cortes+Rx1
25. cortes+Rx1[view] [source] [discussion] 2021-04-07 22:00:29
>>iudqno+O61
Ah, ok, I misunderstood.
26. monoca+qo4[view] [source] [discussion] 2021-04-08 19:45:46
>>lxgr+Sv
DRM has only been successful at making easily replicable attacks more expensive than what the DRM is protecting. Microsoft has talked about this publicly in a great talk on the Xbox One's physical device security: 'We can't stop people hacking, but we can make each hack more expensive than what someone would spend on games on average.' https://www.youtube.com/watch?v=U7VwtOrwceo

SGX running on centralized servers turns that calculus on its head by concentrating the benefits of the hack all in one place.
