Unless you're going to hire some independent auditor (whom you still have to trust), it seems logically problematic.
There's a counter-argument that there is still useful metadata a server can glean from its users, but it's certainly minimised with a good protocol... like the Signal protocol.
Auditors
Trusted Enclaves (but then you trust Intel)
Signed chain of I/O with full semantics specified (blockchain style).
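To make that last option concrete, here's a minimal sketch in Python of a hash-chained, signed I/O log, using PyNaCl for signatures. The field names and endpoints are hypothetical; the point is just that each entry commits to the previous entry's hash, so the server can't silently rewrite history without signature verification failing:

    import hashlib, json
    from nacl.signing import SigningKey

    server_key = SigningKey.generate()  # stand-in for the server's published signing key

    def append_entry(chain, io_record):
        # each entry commits to the hash of the previous one (blockchain style)
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = json.dumps({"prev": prev, "io": io_record}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        sig = server_key.sign(digest.encode()).signature.hex()
        chain.append({"prev": prev, "io": io_record, "hash": digest, "sig": sig})

    chain = []
    append_entry(chain, {"req": "GET /v2/keys", "resp": 200})      # hypothetical endpoints
    append_entry(chain, {"req": "PUT /v1/messages", "resp": 200})

    # any client can replay the chain: recompute hashes, check links, verify signatures
    vk = server_key.verify_key
    for i, e in enumerate(chain):
        assert e["prev"] == (chain[i - 1]["hash"] if i else "0" * 64)
        body = json.dumps({"prev": e["prev"], "io": e["io"]}, sort_keys=True)
        assert e["hash"] == hashlib.sha256(body.encode()).hexdigest()
        vk.verify(e["hash"].encode(), bytes.fromhex(e["sig"]))

The hard part this sketch skips is equivocation: a server can still show different chains to different clients, so clients would need to gossip chain heads among themselves to catch it.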
If Signal /were/ federated, it would be a strong hint that the server code stays the same.
And even if it's not the same, people would be able to run their own trusted servers.
As far as my understanding goes, it's hardly possible to verify even that a compiled binary is a faithful representation of the source code, let alone that it will execute faithfully when run through a modern OS and CPU pipeline.
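Reproducible builds get you part of the way on the first problem: anyone can rebuild from source and compare digests. A minimal sketch of just the comparison step, with a placeholder digest and a hypothetical artifact name:

    import hashlib

    def sha256_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 16), b""):
                h.update(block)
        return h.hexdigest()

    # placeholder: a digest published alongside a reproducible build
    published_digest = "<digest from the build announcement>"
    if sha256_file("server.jar") == published_digest:  # hypothetical artifact name
        print("binary matches the published build")

But even a matching hash only ties the artifact to the source; it says nothing about what a remote machine is actually executing, which is the point above.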
I would think the objective here is more about releasing server code that can be run independently, in a way that 1) doesn't involve Signal's infrastructure and 2) allows the client/server interactions to be audited so that trusting the server side is unnecessary, regardless of what code it may or may not be running.
The recent-ish work to get read/write access to some Intel CPUs' microcode can probably break SGX too. I wouldn't be surprised if the ME code-execution flaws could be used that way as well.
E2E encryption only helps you verify WHO you are connecting to, not what they are doing with your connection once it is established.
The problem is more generally called trusted computing, with Intel SGX being an implementation (albeit one with a pretty bad track record).
Anyway, Signal is designed to handle all the private bits on the client side with e2ee, so you have to put as little trust in the server as possible.
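A minimal sketch of that design in Python, using PyNaCl's box construction as a stand-in for the actual Signal protocol (which adds ratcheting, deniability, etc.): the server only ever relays ciphertext.

    from nacl.public import PrivateKey, Box

    # each client generates its own keypair; private keys never leave the device
    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts directly to Bob's public key
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # the server stores and forwards ciphertext; compromising it yields no plaintext
    plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"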
SGX running on centralized servers turns that calculus on its head by concentrating the benefits of the hack all in one place.