zlacker

[return to "Intel x86 considered harmful – survey of attacks against x86 over last 10 years"]
1. Animat+Xu[view] [source] 2015-10-27 18:52:03
>>chei0a+(OP)
There just have to be backdoors built into the Intel Management Engine. Intel won't disclose what code it executes, so we have to assume there's a backdoor. The question is, whose backdoor.

It would be useful to install some honeypot machines which would appear to be interesting to governments (an ISIS bulletin board, for example) and record every packet going in and out.
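
A minimal sketch of the record-every-packet idea: write captured frames into the standard pcap file format so they can be replayed later in Wireshark or tcpdump. The filename and the placeholder frames are illustrative; on a real honeypot you'd feed this from a raw socket or a mirror port.

```python
import struct
import time

PCAP_MAGIC = 0xA1B2C3D4     # microsecond-resolution pcap
LINKTYPE_ETHERNET = 1

def pcap_global_header(snaplen=65535):
    # magic, version 2.4, thiszone, sigfigs, snaplen, linktype
    return struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0,
                       snaplen, LINKTYPE_ETHERNET)

def pcap_record(frame, ts=None):
    # per-packet header: ts_sec, ts_usec, captured len, original len
    ts = time.time() if ts is None else ts
    sec, usec = int(ts), int((ts % 1) * 1_000_000)
    return struct.pack("<IIII", sec, usec, len(frame), len(frame)) + frame

# On a real honeypot these bytes would come off the wire; two
# placeholder frames stand in for captured traffic here.
with open("honeypot.pcap", "wb") as f:
    f.write(pcap_global_header())
    for frame in (b"\x00" * 60, b"\xff" * 64):
        f.write(pcap_record(frame, ts=0.0))
```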

◧◩
2. nickps+wJ[view] [source] 2015-10-27 21:08:06
>>Animat+Xu
This is why I laugh at people here who laugh off backdoors in their TRNG, etc. Intel's been backdoored for AMT, etc., for a while. Those circuits, due to NRE costs, have to be in most of their chips whether they advertise them or not. They have deep read access into everything in the system, with who knows what write access. We also know some of their chipsets have radios in them, which might be in the others as well, permanently or temporarily disabled.

Just a huge black box of interconnected black boxes, at least one set of which is definitely a backdoor. And the worst thing is I heard it can work even when the machine is partly or entirely powered down. (!) I don't know for sure because I won't buy one, lol. The old stuff, less likely to have those features, works fine for my builds.

Gaisler's stuff and RISC-V are the best hope, as they're both open hardware and getting fast. Gaisler's are already quad-core with as much I.P. as people could ever use. Anyone wanting trustworthy hardware knows where to start on building it. CheriBSD on the CHERI capability processor is also open-source and can run on a high-end FPGA. So, there's that for use or copying in a Gaisler modification.

◧◩◪
3. throwa+Wl1[view] [source] 2015-10-28 09:56:27
>>nickps+wJ
> Gaisler's stuff and RISC-V are the best hope, as they're both open hardware and getting fast. Gaisler's are already quad-core with as much I.P. as people could ever use. Anyone wanting trustworthy hardware knows where to start on building it. CheriBSD on the CHERI capability processor is also open-source and can run on a high-end FPGA. So, there's that for use or copying in a Gaisler modification.

How can you trust the FPGA? Or the very closed-source bitstream generator necessary to compile the VHDL/Verilog code?

Assuming you want to manufacture secure processors from these designs, how can you trust the chip fab?

I'm genuinely interested, as I'm not aware of any research into protection from these issues.

◧◩◪◨
4. nickps+eG2[view] [source] 2015-10-29 01:22:08
>>throwa+Wl1
You have several ways to deal with trust issues in hardware:

1. Monitor the hardware itself for bad behavior.

2. Monitor and restrict I/O to catch any leaks or evidence of attacks.

3. Use triple, diverse redundancy with voter algorithms for a given HW chip and function.

4. Use a bunch of different ones while obfuscating what you're using.

5. Use a trusted process to make the FPGA, ASIC, or both.
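
Item 3 can be sketched as a simple majority voter: run the same operation on three diverse implementations and accept an answer only when at least two agree, treating total disagreement as a detected fault or subversion. The three implementations below are hypothetical stand-ins for diverse hardware (e.g., chips from different vendors).

```python
from collections import Counter

def vote(impls, *args):
    """Run each diverse implementation and majority-vote the results.

    Returns the value at least two implementations agree on; raises if
    all disagree (treat that as a detected fault or subversion).
    """
    results = [impl(*args) for impl in impls]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError(f"no majority among results: {results}")
    return value

# Hypothetical stand-ins for three diverse implementations of one function.
impl_a = lambda x, y: x + y
impl_b = lambda x, y: y + x
impl_c = lambda x, y: sum((x, y))

print(vote([impl_a, impl_b, impl_c], 2, 3))  # → 5
```

A single subverted or faulty unit gets outvoted; it only wins if it can corrupt two of the three diverse units the same way, which is the point of sourcing them differently.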

I've mainly used Nos. 2-4, with No. 5 being the endgame. I have a method for No. 5 but can't publish it. Suffice it to say that almost all strategies involve obfuscation and shell games, where publishing them gives enemies an edge. Kerckhoffs's principle is wrong against nation-states: an obfuscated and diversified combination of proven methods is the best security strategy. Now, ASIC development is so difficult and cutting-edge that knowing the processes themselves aren't being subverted is likely impossible.

So, my [unimplemented] strategy focuses on the process, people, and key steps. I can at least give an outline as the core requirements are worth peer review and others' own innovations. We'd all benefit.

1. You must protect your end of the ASIC development.

1-1. Trusted people who won't screw you, with auditing that lets each of them catch the others' schemes.

1-2. Trusted computers that haven't been compromised in software or physically.

1-3. Endpoint protection and energy gapping of those systems to protect I.P. inside with something like data diodes used to release files for fabs.

1-4. Way to ensure EDA tools haven't been subverted in general or at least for you specifically.

2. CRITICAL and feasible. Protect the hand-off of your design details to the mask-making company.

3. Protect the process for making the masks.

3-1. Ensure, as in (1), security of their computers, tools, and processes.

3-2. Their interfaces should be designed so that they always do similar things for similar types of chips with the same interfaces. Anything done differently should raise caution or an alarm.

3-3. The physical handling of the mask should be how they always do it and/or automated where possible. Same principle as 3-2.

3-4. The mask-production company's ownership and location should be in a low-corruption country whose government can't compel secret backdoors.

4. Protect the transfer of the mask to the fab.

5. Protect the fab process, or at least one set of production units, the same way as (3). Same security principles.

6. Protect the hand-off to the packaging companies.

7. Protect the packaging process. Same security principles as (3).

8. Protect the shipment to your customers.

9. Some of the above apply to PCB design, integration, testing, and shipment.
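
The hand-off steps (2, 4, 6, 8) all reduce to the same integrity check: the two parties share a key out-of-band, the sender tags the design artifact, and the receiver refuses anything whose tag doesn't verify. A minimal sketch with HMAC-SHA256; a real flow would likely use asymmetric signatures and multiple independent channels, and the key and GDSII bytes here are placeholders.

```python
import hashlib
import hmac

def tag_artifact(key: bytes, artifact: bytes) -> str:
    """Sender side: compute an HMAC-SHA256 tag over the design files."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify_artifact(key: bytes, artifact: bytes, tag: str) -> bool:
    """Receiver side: constant-time comparison against the claimed tag."""
    return hmac.compare_digest(tag_artifact(key, artifact), tag)

# Key exchanged out-of-band (e.g., in person); layout bytes are a stand-in.
key = b"shared-out-of-band"
gdsii = b"...mask layout data..."
tag = tag_artifact(key, gdsii)

assert verify_artifact(key, gdsii, tag)                  # intact hand-off
assert not verify_artifact(key, gdsii + b"trojan", tag)  # tampered in transit
```

This only proves the bytes weren't altered between trusted endpoints; it does nothing about subversion inside the mask shop or fab, which is what steps 3, 5, and 7 are for.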

So, there you have it. In some ways, it's a bit easier than some people think. You don't really need to own a fab. However, you do have to understand how mask making and fabbing are done, be able to observe that, have some control over how tooling/software are done, and so on. There are plenty of parties and money involved in this. It will add cost to any project doing it, which means few will (competitiveness).

I mainly see it as something subsidized by governments or private parties for increased assurance of sales to government and security-critical sectors. My hardware guru cleverly suggested that a bunch of smaller governments (e.g., G-88) might do it as a differentiator and for their own use, pooling their resources.

It's a large undertaking regardless. As far as specifics go, I have a model for that, and I know one other high-assurance engineer with one. Most people just do clever obfuscation tricks in their designs to detect modifications or brick the system upon their use, with optional R.E. of samples. I don't know those tricks, and it's too cat-and-mouse for me. I'm focused on fixing it at the source.

EDIT: I also did another essay tonight on cost of hardware engineering and ways to get it down for OSS hardware. In case you're interested:

https://news.ycombinator.com/item?id=10468534
