Joanna Rutkowska, the Qubes OS founder, is the person who brought up the Intel ME as a problem in her paper "Intel x86 considered harmful" (https://blog.invisiblethings.org/papers/2015/x86_harmful.pdf).
* Let AMD know that open-sourcing/disabling PSP is important to you [1].
* Contribute to RISC-V. You can buy a RISC-V SoC today [2]. Does your favorite compiler have a RISC-V backend?
[1] https://www.reddit.com/r/linux/comments/5xvn4i/update_corebo... [2] https://www.sifive.com/products/hifive1/
https://www.reddit.com/r/Amd/comments/6krg13/has_there_been_...
If AMD really wanted to, they would have announced something by now. I can commend the AMD rep who continues to push for it, but he can't dictate company policy.
Also, according to the Libreboot FAQ, even Google was unable to get the source code for Intel's firmware blobs.
The Purism laptops do, but AFAIK they are the only ones.
https://medium.com/@securitystreak/living-with-qubes-os-r3-2...
One could lock down all the devices that can store data: https://blog.invisiblethings.org/papers/2015/state_harmful.p...
"The general idea is to remove the SPI flash chip from the motherboard, and route the wiring to one of the external ports, such as either a standard SD or a USB port, or perhaps even to a custom connector. A Trusted Stick (discussed in the next chapter) would be then plugged into this port before the platform boots, and would be delivering all the required firmware requested by the processor, as well as other firmware and, optionally, all the software for the platform."
Keylogging isn't good either, but if you're using a password manager and/or 2FA then it's not really as big an issue. It is an issue for your disk encryption passphrase, but I'm hoping that in the future we might be able to remedy that through some 2FA-like system [1]. If we seal disk encryption keys inside TPMs, then we only have to come up with a sane security policy (which is obviously the hard part).
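To make the sealing idea concrete, here's a minimal Python sketch of the concept only, not a real TPM API: boot components get measured into a PCR-style hash chain, and the disk key is wrapped under a value that can only be re-derived when the measurements match the sealing-time state. The XOR-based wrap and all the names are stand-ins for illustration.

    # Conceptual sketch only -- not a real TPM API.
    import hashlib, hmac, os

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM-style PCR extend: new = H(old || H(measurement))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    def measure_boot(components):
        pcr = bytes(32)  # PCRs start zeroed at reset
        for c in components:
            pcr = extend(pcr, c)
        return pcr

    def seal(key: bytes, pcr: bytes, srk: bytes) -> bytes:
        # Wrap the key under a value derivable only when PCRs match.
        # Deciding *which* measurements to bind is the policy problem above.
        wrap = hmac.new(srk, pcr, hashlib.sha256).digest()
        return bytes(a ^ b for a, b in zip(key, wrap))

    unseal = seal  # the XOR wrap is its own inverse

    srk = os.urandom(32)   # stands in for the TPM's storage root key
    disk_key = os.urandom(32)
    good = measure_boot([b"firmware-v1", b"bootloader-v1", b"kernel-v1"])
    blob = seal(disk_key, good, srk)

    # Same boot chain: the key unseals. Tampered bootloader: garbage.
    assert unseal(blob, good, srk) == disk_key
    evil = measure_boot([b"firmware-v1", b"evil-bootloader", b"kernel-v1"])
    assert unseal(blob, evil, srk) != disk_key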
Disk controllers are similarly not an issue if you have full-disk encryption (though then your RAM is the weak point, because it contains the keys). There was some work in the past on encrypted RAM, but I doubt that is going to be a reality soon. The real concern is that a worrying array of devices plugged into your laptop can DMA your memory (USB 3.1, PCI, etc.). The IOMMU improves this slightly, but from memory there is still some kernel work necessary to make the order in which devices load secure (if a DMA-capable device loads before the IOMMU is set up, you don't get the IOMMU's defences).
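As a quick illustration of why the disk controller only ever sees ciphertext, while the key sitting in RAM remains the weak point, here's a sector-level AES-XTS sketch (the mode dm-crypt/LUKS uses by default). It assumes the Python `cryptography` package; the sector-to-tweak mapping is illustrative.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)  # AES-256-XTS takes a 512-bit key; it lives in RAM

    def encrypt_sector(sector_no: int, plaintext: bytes) -> bytes:
        tweak = sector_no.to_bytes(16, "little")  # per-sector tweak
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    def decrypt_sector(sector_no: int, ciphertext: bytes) -> bytes:
        tweak = sector_no.to_bytes(16, "little")
        dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
        return dec.update(ciphertext) + dec.finalize()

    sector = b"secret data".ljust(512, b"\0")
    ct = encrypt_sector(7, sector)
    assert ct != sector and decrypt_sector(7, ct) == sector
    # `key` above sits in RAM -- exactly what a DMA-capable device can read.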
[1]: https://www.youtube.com/watch?v=ykG8TGZcfT8 "Beyond Anti-Evil Maid"
https://www.dwheeler.com/essays/scm-security.html
Now, let's say you want to know the compiler isn't a threat. That requires you to know that (a) it does its job correctly, (b) optimizations don't screw up programs, especially by removing safety checks, and (c) it doesn't add any backdoors. You essentially need a compiler whose implementation can be reviewed against stated requirements to ensure it does what it says, nothing more, nothing less. That's called a verified compiler. Here's what it takes, assuming multiple small passes for easier verification (a minimal code sketch follows the list):
1. A precise specification of what each pass does. This might involve its inputs, intermediate states, and its outputs. This needs to be good enough to both spot errors in the code and drive testing.
2. An implementation of each pass, done in as readable a way as possible in the safest, tooling-assisted language one can find.
3. Optionally, an intermediate representation of each pass side by side with the high-level one, talking in terms of expressions, basic control flow (e.g., while constructs), stacks, heaps, and so on. The high level decomposed into low-level operations that still aren't quite assembly.
4. The high-level or intermediate forms side by side with assembly language for them. This will be simplified, well-structured assembly designed for readability instead of performance.
5. An assembler, linker, loader, and/or anything else I'm forgetting that the compiler depends on to produce the final executable. Each of these is done as above, with a focus on simplicity. They may not be feature-complete so much as have just enough features to build the compiler. Initial ones are done by hand, optionally with helper programs that are easy to check by hand.
6. Combine the ASM of the compiler manually or with any trusted applications you have so far. The output must run through the assembler, linker, etc. to get the initial executable. Test that and use it to compile the high-level compiler. Now you're set: the rest of development can be done in the high-level language with compiler extensions or more optimizations.
7. Formal specification and verification of the above for best results. This has already been done with CompCert for C and CakeML for Standard ML. As far as trust goes, CakeML is verified in HOL4, whose proof checker is smaller than most programs; HOL Light would make it smaller still. This route puts trust mostly in the formal specs plus one small, trusted executable instead of a pile of specs and code. Vast increase in trustworthiness.
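To make points 1 and 2 concrete, here's that minimal sketch in Python: a single pass whose precise spec is an interpreter, plus random testing of the property run(compile(e)) == interpret(e). The toy language and stack machine are invented for illustration; a verified compiler proves this property for all programs instead of sampling it.

    import random

    # Source language: ("lit", n) | ("add", e1, e2)
    # Spec for the pass: an interpreter defining what expressions *mean*.
    def interpret(e):
        if e[0] == "lit":
            return e[1]
        if e[0] == "add":
            return interpret(e[1]) + interpret(e[2])
        raise ValueError(e)

    # The pass itself: compile to a two-instruction stack machine.
    def compile_expr(e):
        if e[0] == "lit":
            return [("PUSH", e[1])]
        if e[0] == "add":
            return compile_expr(e[1]) + compile_expr(e[2]) + [("ADD",)]
        raise ValueError(e)

    def run(code):
        stack = []
        for ins in code:
            if ins[0] == "PUSH":
                stack.append(ins[1])
            else:  # ADD
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack[-1]

    def random_expr(depth=4):
        if depth == 0 or random.random() < 0.3:
            return ("lit", random.randint(-100, 100))
        return ("add", random_expr(depth - 1), random_expr(depth - 1))

    # Check the pass against its spec on random programs.
    for _ in range(1000):
        e = random_expr()
        assert run(compile_expr(e)) == interpret(e)
    print("pass agrees with its spec on 1000 random programs")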
@rain1 has a site collecting as many worked examples as possible of small, verified, or otherwise bootstrapping-related work on compilers or interpreters. I contributed a bunch on there, too. I warn it looks rough since it's a work in progress that's focused more on content than presentation. Already has many, many weekends worth of reading for people interested in Trusting Trust solutions. Here it is for your enjoyment or any contributions you might have:
http://opencircuitdesign.com/qflow/
Start with a simple CPU and memories you can hand-check, sent to a 0.35-0.5 micron fab whose output is visually inspectable. Then, after verifying a random sample of those, you use the others in boards that make up the rest of your hardware and software. You can even try to use them in peripherals like your keyboard or networking. Make a whole cluster of crappy CPU boards running verified hardware, each handling part of the job, since it will take a while. You can use untrusted storage if the source and transport CPUs are trusted, since you can just use crypto approaches to ensure data wasn't tampered with in untrusted RAM or storage. Designs exist in CompSci for both.
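A minimal Python sketch of that untrusted-storage idea, with invented names: the trusted side keeps only a key and per-block MACs (in a real design, a Merkle root rather than the whole list), so any tampering in untrusted RAM or storage is detected on read.

    import hashlib, hmac, os

    class UntrustedStore:
        # Stands in for RAM/disk an attacker may modify at will.
        def __init__(self):
            self.blocks = {}
        def write(self, i, data):
            self.blocks[i] = data
        def read(self, i):
            return self.blocks[i]

    KEY = os.urandom(32)  # lives only on the trusted CPU board

    def mac(i, data):
        # Bind each block's contents to its index to stop swapping attacks.
        return hmac.new(KEY, i.to_bytes(8, "big") + data, hashlib.sha256).digest()

    store = UntrustedStore()
    macs = []  # trusted side keeps these (or just a Merkle root over them)
    for i, block in enumerate([b"code page", b"data page", b"results"]):
        store.write(i, block)
        macs.append(mac(i, block))

    def verified_read(i):
        data = store.read(i)
        if not hmac.compare_digest(mac(i, data), macs[i]):
            raise RuntimeError(f"block {i} was tampered with in untrusted storage")
        return data

    assert verified_read(1) == b"data page"
    store.write(1, b"evil page")  # attacker flips a block
    try:
        verified_read(1)
    except RuntimeError as err:
        print(err)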
So, you'll eventually be running synthesis and testing with open-source software, verification with ACL2 a la Jared Davis's work (maybe a modified Milawa), visual inspection of the final chips, and Beowulf-style clusters to deal with how slow they are. And then use that for each iteration of better tooling. I also considered using image recognition on the chip photographs, trained by all the people reviewing them across the world; more as an aid than a replacement for people. It would be helpful as transistor counts went up, though.
Other links:
https://www.cs.utexas.edu/users/moore/publications/acl2-pape...
https://news.ycombinator.com/item?id=14669377
> Is that running on the chipset or the CPU?
USB 3.x controllers are more complex than their predecessors and typically run some firmware on the controller chip to implement functionality that used to be implemented in the OS drivers.