zlacker

[parent] [thread] 8 comments
1. alex_d+(OP)[view] [source] 2019-11-28 13:14:19
I know very little about the topic, so bear that in mind:

We're already in a world where we can't quite trust our CPUs, so why trust baseband chips?

Even if it does make the design more complicated, it may also reduce the potential attack surface.

replies(2): >>Dyslex+o >>DCKing+V1
2. Dyslex+o[view] [source] 2019-11-28 13:17:42
>>alex_d+(OP)
> Even if it does make the design more complicated, it may also reduce the potential attack surface.

An increase in complexity would rule out a reduction of the attack surface. In fact, the attack surface would be guaranteed to increase.

replies(1): >>cyphar+w2
3. DCKing+V1[view] [source] 2019-11-28 13:31:56
>>alex_d+(OP)
We can't fully trust the correctness of modern complicated CPU designs, leading to problems like <insert all the speculative execution bypasses that have affected Intel CPUs over the past two years>. But despite their complexity, CPUs and the CPU part of a smartphone SoC are usually extremely well understood (relatively speaking). The reason is that you actually need to run your software on these CPUs, so they need to be understood rather well. With better understanding comes better trust.

On the other hand, the baseband processor is mostly unknown black-box hardware, running unknown black-box software, that completely controls the transmission of cellular data. Of course it would be horrible if there were no separation between the CPU and baseband. You shouldn't trust that setup. But as it turns out, separation does exist!

replies(1): >>ptx+SE
4. cyphar+w2[view] [source] [discussion] 2019-11-28 13:38:02
>>Dyslex+o
Well, that isn't generally true if the complexity is actually a security boundary. After all, all security designs are based on layers -- it's hard to add a layer of security without adding complexity.

As a counter-example -- removing all of Linux's privilege checking would make the code a lot less complicated, but the attack surface would increase a million-fold. In this case, the Librem 5's separation of the baseband such that communication is done over USB (a protocol which doesn't have DMA) is a security improvement over giving the baseband DMA access.

replies(2): >>Dyslex+88 >>megous+ue
5. Dyslex+88[view] [source] [discussion] 2019-11-28 14:24:57
>>cyphar+w2
> Well, that isn't generally true if the complexity is actually a security boundary.

If the security boundary is baked into the code or the design of the system, and assuming it doesn't introduce more bugs, then I agree[1]. Security controls that get bolted on top do risk an increase in attack surface. An additional interface is by definition an additional "surface"; the question is whether it can be attacked.

[1] you could still argue that more lines of code always mean more bugs (but let's assume it's very close to bullet-proof)

replies(1): >>cyphar+Yr
6. megous+ue[view] [source] [discussion] 2019-11-28 15:28:50
>>cyphar+w2
USB protocols are often handled in software, some in the Linux kernel, some in userspace. So if someone discovers an RCE over USB in the Linux USB stack, the modem will have direct memory access, or even RCE on the main CPU with kernel privileges.

I have no experience with PCIe, so maybe it's harder to abuse the host system over USB than over PCIe these days.

You can think of USB as being similar to using a TCP/IP protocol between multiple machines capable of executing code, and having to execute code to handle higher-level protocols, like HTTP or whatnot. If there's a code execution bug anywhere, the USB-capable device will be able to exploit it.

And by default, there's a code-execution bug on all normally configured Linux machines. If you don't create a USB "firewall", the modem can just present a virtual keyboard and the kernel will happily accept all input from it, for example. So the modem can type whatever it wants into your shell. It will be obvious, but it's still device->host RCE.
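
A rough sketch of such a "firewall", using the kernel's USB authorization interface (assuming a Linux host with sysfs mounted; the "1-2" device name below is just a placeholder, not the modem's real address):

  # Default-deny any newly plugged-in USB device (per host controller),
  # then hand-authorize only the devices you trust. Run as root.
  import glob

  for ctrl in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
      with open(ctrl, "w") as f:
          f.write("0")  # new devices stay unconfigured until authorized

  # Example: re-authorize one known device ("1-2" is a placeholder name).
  with open("/sys/bus/usb/devices/1-2/authorized", "w") as f:
      f.write("1")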

replies(1): >>cyphar+Nr
7. cyphar+Nr[view] [source] [discussion] 2019-11-28 17:05:00
>>megous+ue
We're not in disagreement (I never claimed or even implied that USB is bug-free) -- but in order to get RCE or DMA-like access, you first need to exploit the USB stack. PCIe gives you that kind of access for free by design (almost -- there are IOMMUs these days, but there is little evidence that they are nearly as secure as hardware vendors claim, and you'd need to have phone hardware which supports them).
8. cyphar+Yr[view] [source] [discussion] 2019-11-28 17:06:29
>>Dyslex+88
If the alternative to adding an additional interface is to just give DMA access to the device, I'm not sure I see the downside to using the additional interface. Even if the interface ends up being completely broken, at the very least there was something to break before you get DMA / RCE access. What possible interface breakage could trump free and unrestricted DMA access?
9. ptx+SE[view] [source] [discussion] 2019-11-28 19:19:59
>>DCKing+V1
> But as it turns out, separation does exist!

The article you linked to says: "There can be an IOMMU with very tight restrictions providing proper isolation or a setup where the IOMMU is effectively not doing anything and permits access to all of the memory. Determining that requires real research."

So it sounds more like separation might or might not exist, and you're not likely to find out whether it does on your particular device.
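
One rough way to at least check on a Linux-based device is to look at the IOMMU groups the kernel exposes -- a sketch only, since an empty listing means no isolation is being enforced, while a populated one still doesn't tell you how tight the mappings actually are:

  # List IOMMU groups via sysfs; devices that share a group are not
  # isolated from each other even when the IOMMU is enabled.
  import os

  base = "/sys/kernel/iommu_groups"
  groups = sorted(os.listdir(base), key=int) if os.path.isdir(base) else []
  if not groups:
      print("no IOMMU groups -- IOMMU absent or not enabled")
  for g in groups:
      devs = os.listdir(os.path.join(base, g, "devices"))
      print(f"group {g}: {', '.join(devs)}")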
