I recently flashed coreboot on my X220 (and it worked, surprisingly enough). However, I couldn't find any solid guides on how to set up TianoCore (UEFI) as a payload. Does Qubes require Trusted Boot to be supported on its platforms (I would hope so)? And if so, is there any documentation on how to set up TianoCore as a payload? The existing documentation is _sparse_ at best, with odd references to VBOOT2 and U-Boot.
Otherwise I'm not sure how a vendor could fulfill both sets of requirements.
What could one do to make it possible to have ME-less x86 in the future?
Another architecture altogether, designed from the ground up to support a free and secure system, seems a better bet.
More generally, if the processor is going to have any dynamic internal logic then that has to run somewhere. Frequency scaling, wake-on-LAN, microcode updates... you probably do want an ME-style embedded management processor that runs the processor's firmware, just as you would for any other peripheral (hard drives, Wi-Fi controllers and so on all contain their own embedded ARM cores these days). The ME itself isn't the issue; having what runs there be open and inspectable is.
Qualcomm will first minimize its risk going into the PC market by making its mobile chips a bit more optimized for the PC market. But if that goes well and all, it may eventually go "upmarket" with higher-end chips, which will require their own R&D and so on.
However, I don't know if that necessarily means we'll have a more open alternative to Intel. Evidence so far suggests that Qualcomm may be an even bigger bully than Intel was or is, so I would not look to Qualcomm as a savior for the market, but more as the tyrant that replaces the previous tyrant.
Joanna Rutkowska, the Qubes founder, is the person who raised Intel ME as a problem in her paper "Intel x86 Considered Harmful" (https://blog.invisiblethings.org/papers/2015/x86_harmful.pdf).
Also, the ME appears to be a nice one-stop-shop for compromise. It is the janitor's entrance; it is right there in the name.
A dozen companies with 1,000 employees each and a budget of $2,500 per employee gets you $30 million, which is surely enough to get a decent, Qubes-secure laptop with no ME. You aren't going to be designing your own chips at that point, but you could grab POWER8, SPARC, or ARM.
Are there companies that would reasonably be willing to throw in a few million to fund a secure laptop? I imagine at least a few. And maybe we could get a Google or someone to put in $10m plus.
Quality you can only achieve by possessing the right skills and making the right long term investments.
[1]: https://openpowerfoundation.org/ [2]: https://www.crowdsupply.com/raptor-computing-systems/talos-s... [3]: https://www.raptorengineering.com/TALOS/prerelease.php
* Let AMD know that open-sourcing/disabling PSP is important to you [1].
* Contribute to RISC-V. You can buy a RISC-V SoC today [2]. Does your favorite compiler have a RISC-V backend?
[1] https://www.reddit.com/r/linux/comments/5xvn4i/update_corebo... [2] https://www.sifive.com/products/hifive1/
$100 Million investment isn't a stretch for something from a large company.
This made me download Qubes. Amazing project that seems to care.
https://www.reddit.com/r/Amd/comments/6krg13/has_there_been_...
If AMD really wanted to, they would have announced something by now. I can commend the AMD rep who continues to push for it, but he can't dictate company policy.
This is one of the most important points. The speed at which laptop vendors are releasing new SKUs is staggering. I know the whole supply chain is to blame, but apart from a few models, the number of different SKUs is way too high.
Is only the architecture open, while the silicon isn't? If so, what's the point of that? Is the only thing differentiating it from ARM that there's no licensing fee?
If so, are there any ARM SoC vendors making them in a way that they're relatively free from stuff like Intel ME?
If I can peek and poke around in your RAM as I please, no amount of cleverness is going to save you if my intentions are malicious.
(Don't worry, though, I have no such intentions, and I don't fiddle with other people's RAM as a matter of principle, unless they ask me to. ;-))
Though it's possible they could just offer it as an option on some chips to capture both markets, which would do the job; but given the sheer diversity of ARM and possibly RISC-V SoC vendors, those might be a better starting place than x86.
If such a machine with reasonable specs (I do not expect a 64-core 256-GB-RAM-monster) could be brought down to the 1000 $/€ price range, I would seriously look into it.
(I am not sure how realistic that price range is, though.)
What about Loongson? IIRC, Richard Stallman uses a notebook based on it, because it has free firmware. Performance is probably not breathtaking, but it exists. Does somebody know if there are desktop machines built around that chip?
Also, according to libreboot FAQ, even Google was unable to get source for Intel firmware blobs.
If a laptop does have an internal microphone, I just assume it is on and recording.
They should have offered a bare version with just the board, RAM and maybe a video card (to ensure compatibility). They needed the ~$1k USD hobbyist market.
It seems like the goals were way too high, like the Ubuntu Edge.
For example: compiler is to software as X is to hardware. What is X? And how does one go about creating their own X?
Other than that, I don't assume any other part of the laptop is compromised, but maybe I should. Thanks for asking this thought-provoking question.
Anyway, I'm not going to take the laptop apart and analyze the internal microphone hardware to make sure that the switch actually disables the mic. So even in that case, I'd assume the mic was still on even if the switch was in the off position.
On the other hand, I'd prefer to buy a laptop with a hardware switch for the internal microphone, if one existed, as it's better to have such a switch in case it actually does work as advertised.
The machine does nothing with them unless you give them permission to do something.
My thinking on the subject was roughly that for an attacker to have the ability to spy on me via that mechanism would strongly imply that they already have privileged access to my computer (to be able to activate the device and exfiltrate the data).
At that point, personally, I'm far more worried about the data they'd get from my keyboard (specifically credentials for various systems) than I am about them being able to see me sit at a desk.
If my threat model includes backdoored hardware, I'm in a bad place as (I'd expect) would most people be.
[0] https://security.stackexchange.com/questions/118854/attacks-...
The Purism laptops do, but AFAIK they are the only ones.
Another type of user keeps confidential stuff out of networked computers and the cloud entirely.
Both are worthy defense strategies.
But at any rate it's not unfeasible or unknown right now to deal with 100-200W worth of TDP in big 17" (or even 21"(!!!)) notebooks, and there does seem to be a functional (albeit niche) market for it. So at that range it'd be feasible in principle to stick in a low end POWER8 and smallish but functional GPU and have a "notebook POWER8 system", but it'd be a compromised machine in terms of what we'd normally find desirable in a mobile system.
POWER9 (which I think is still slated to go online in the Summit & Sierra supercomputers this year?) is supposed to have improved energy efficiency and management features, which though aimed at scaleup/scaleout of course might help out a bit in other settings in theory. But even so it'd be a tougher chip to build in an SFF system around let alone a notebook. Any potential buyers would have to care a very great deal about what it brought to the table.
https://medium.com/@securitystreak/living-with-qubes-os-r3-2...
This is why Apple can sit with just a few models year after year: because they are the sole vendor of OS X/macOS.
One could lock all the devices that can store data: https://blog.invisiblethings.org/papers/2015/state_harmful.p...
"The general idea is to remove the SPI flash chip from the motherboard, and route the wiring to one of the external ports, such as either a standard SD or a USB port, or perhaps even to a custom connector. A Trusted Stick (discussed in the next chapter) would be then plugged into this port before the platform boots, and would be delivering all the required firmware requested by the processor, as well as other firmware and, optionally, all the software for the platform."
Older ThinkPad laptops had a physical switch for the microphone.
Holy shit where did that come from o_0
> USB 3.0 runs as a binary blob in the BIOS
Is that running on the chipset or the CPU?
So, given we can control most inputs to hardware, and most outputs, it seems possible to objectively identify when the HW is misbehaving (such as "A" produces network output that "B" does not). It wouldn't nail down which piece of hardware was compromised, but it would help identify that hardware is compromised.
It will never be _that_ easy, of course... but it seems possible.
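One cheap way to approximate this idea (a toy sketch; the capture lists and the "beacon" payload here are entirely hypothetical) is to replay an identical input workload on two machines and compare a digest of everything observable they emit:

```python
import hashlib

def digest_outputs(packets):
    # Hash the entire observable output stream of a machine that was
    # fed a fixed, scripted input workload.
    h = hashlib.sha256()
    for p in packets:
        h.update(p)
    return h.hexdigest()

# Hypothetical captures from machines "A" and "B" given identical inputs:
capture_a = [b"GET /index.html", b"ACK"]
capture_b = [b"GET /index.html", b"ACK", b"\x00beacon-to-unknown-host"]

if digest_outputs(capture_a) != digest_outputs(capture_b):
    # As noted above: this flags *that* something is misbehaving,
    # not *which* piece of hardware is responsible.
    print("outputs diverge")
```

In practice, timing jitter, retransmissions and randomized protocol fields mean the raw streams would need normalizing before comparison, which is where the real difficulty lies.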
This is about as anaemic as I am. When I can get RISC-V silicon with a DDR4 and PCIe interface, we'll be talking, but this is pretty weak stuff.
They did offer that, but the board alone was $3,700.
Keylogging isn't good either, but if you're using a password manager and/or 2FA then it's not really as big of an issue. It is an issue for your disk encryption passphrase, but I'm hoping that in the future we might be able to remedy that through some 2FA-like system[1]. If we seal disk encryption keys inside TPMs then we only have to come up with a sane security policy (which is obviously the hard part).
Disk controllers are similarly not an issue if you have full-disk encryption (though then your RAM is the weak point, because it contains the keys). There was some work in the past on encrypted RAM, but I doubt that is going to be a reality soon. The real concern is that a worrying array of devices plugged into your laptop can DMA your memory (USB 3.1, PCI, etc.). The IOMMU improves this slightly, but from memory there is still some kernel work necessary to make the order in which devices load secure (if you load a device that supports DMA before the IOMMU is set up, then you don't have IOMMU defences).
[1]: https://www.youtube.com/watch?v=ykG8TGZcfT8 "Beyond Anti-Evil Maid"
They may have my data if I'm compromised, that doesn't mean I want them to have embarrassing video or audio of me as well.
https://www.dwheeler.com/essays/scm-security.html
Now, let's say you want to know the compiler isn't a threat. That requires you to know that (a) it does its job correctly, (b) optimizations don't screw up programs, especially by removing safety checks, and (c) it doesn't add any backdoors. You essentially need a compiler whose implementation can be reviewed against stated requirements to ensure it does what it says, nothing more, nothing less. That's called a verified compiler. Here's what it takes, assuming multiple, small passes for easier verification:
1. A precise specification of what each pass does. This might involve its inputs, intermediate states, and its outputs. This needs to be good enough to both spot errors in the code and drive testing.
2. An implementation of each pass done in as readable a way possible in the safest, tooling-assisted language one can find.
3. Optionally, an intermediate representation of each pass side-by-side with the high-level one that talks in terms of expressions, basic control flow (i.e. while construct), stacks, heaps, and so on. The high-level decomposed into low-level operations that still aren't quite assembly.
4. The high-level or intermediate forms side by side with assembly language for them. This will be simplified, well-structured assembly designed for readability instead of performance.
5. An assembler, linker, loader, and/or anything else I'm forgetting that the compiler depends on to produce the final executable. Each of these will be done as above with focus on simplicity. May not be feature complete so much as just enough features to build the compiler. Initial ones are done by hand optionally with helper programs that are easy to do by hand.
6. Combine the ASM of the compiler manually or with any trusted applications you have so far. The output must run through the assembler, linker, etc. to get the initial executable. Test that and use it to compile the high-level compiler. Now you're set. The rest of development can be done in the high-level language with compiler extensions or more optimizations.
7. Formal specification and verification of the above for best results. Already been done with CompCert for C and CakeML for SML. Far as trust, CakeML runs on Isabelle/HOL whose proof checker is smaller than most programs. HOL/Light will make it smaller. This route puts trust mostly in the formal specs with one, small, trusted executable instead of a pile of specs and code. Vast increase in trustworthiness.
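As a toy illustration of steps 1-2 (all names here are invented for illustration), a single pass can be given a one-line specification and then tested against it; that is what makes small, many-pass designs reviewable:

```python
def fold(e):
    # Spec for this pass: eval_expr(fold(e)) == eval_expr(e), and the
    # result contains no ('add'/'mul', int, int) node - all constant
    # subexpressions are folded into literals.
    if isinstance(e, int):
        return e
    op, l, r = e
    l, r = fold(l), fold(r)
    if isinstance(l, int) and isinstance(r, int):
        return l + r if op == 'add' else l * r
    return (op, l, r)

def eval_expr(e):
    # Reference semantics the spec is stated against.
    if isinstance(e, int):
        return e
    op, l, r = e
    a, b = eval_expr(l), eval_expr(r)
    return a + b if op == 'add' else a * b

# Checking the pass against its specification on a sample expression:
e = ('add', ('mul', 2, 3), 4)
assert eval_expr(fold(e)) == eval_expr(e) == 10
```

A verified compiler like CompCert does exactly this, except the testing is replaced by a machine-checked proof that the spec holds for every input.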
@rain1 has a site collecting as many worked examples as possible of small, verified, or otherwise bootstrapping-related work on compilers or interpreters. I contributed a bunch on there, too. I warn it looks rough since it's a work in progress that's focused more on content than presentation. Already has many, many weekends worth of reading for people interested in Trusting Trust solutions. Here it is for your enjoyment or any contributions you might have:
I see it, and I see the AMD and ARM equivalents, and I'm sitting here wondering how the hell do I buy a decent laptop without that crippling trust hole. AFAICT, one cannot.
I'm willing to pay more for processors that aren't thus afflicted. Is anyone at AMD, Intel et al listening?
If performance is not a huge concern, one could (in theory, of course) design software so CPU- and memory-hard that the ME is simply unable to perform meaningful key-material recovery for FVEY.
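A concrete (if modest) example of the memory-hard approach is deriving key material with scrypt, whose parameters directly set the RAM cost per guess. This is a minimal sketch using Python's standard library; the parameters are illustrative, not a recommendation:

```python
import hashlib, os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # scrypt is deliberately memory-hard: n=2**16, r=8 forces roughly
    # 128 * r * n = 64 MiB of RAM per derivation attempt, which is what
    # makes large-scale, parallel key recovery expensive.
    return hashlib.scrypt(passphrase, salt=salt, n=2**16, r=8, p=1,
                          maxmem=128 * 1024 * 1024, dklen=32)

salt = os.urandom(16)          # store alongside the ciphertext
key = derive_key(b"correct horse battery staple", salt)
print(len(key))  # → 32
```

Raising `n` scales the memory cost linearly, so the defender can tune how expensive each guess is for any attacker, embedded management engine or otherwise.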
http://opencircuitdesign.com/qflow/
Start with a simple CPU and memories you can hand-check, sent to a 0.35-0.5 micron fab that's visually inspectable. Then, after verifying a random sample of those, you use the others in boards that make the rest of your hardware and software. You can even try to use them in peripherals like your keyboard or networking. Make a whole cluster of crappy CPU boards running verified hardware, each handling part of the job, since it will take a while. You can use untrusted storage if the source and transport CPUs are trusted, since you can just use crypto approaches to ensure data wasn't tampered with in untrusted RAM or storage. Designs exist in CompSci for both.
So, you'll eventually be running synthesis and testing with open-source software, verification with ACL2 a la Jared Davis's work (maybe Milawa modified), visual inspection of final chips, and Beowulf-style clusters to deal with how slow they are. And then use that for each iteration of better tooling. I also considered using image recognition on the pics of the visual trained by all the people reviewing them across the world. More as an aid than replacing people. Would be helpful when transistor count went up, though.
Other links:
https://www.cs.utexas.edu/users/moore/publications/acl2-pape...
Right now, I want a secure laptop. I can't buy one (nothing exists that doesn't require binary blobs), so I decide to make my own.
What do I need?
1. An instruction set.
2. A factory.
3. Customers (a lot of them, so economies of scale can make it somewhat reasonably priced).
All RISC-V could help with is #1. I won't have to license from ARM and will save some cash there.
But I'll still need to build a factory and deal with economy of scale.
Moreover, what will prevent companies from leeching off RISC-V and patenting improvements? As I understand it, there are so few foundries right now that they can easily cross-license patents from each other and prevent upstarts from breaking in (so you'll have a situation where the industry leaders end up organizing themselves into something which looks like ARM or Intel/AMD).
https://news.ycombinator.com/item?id=14669377
> Is that running on the chipset or the CPU?
USB 3.x controllers are more complex than predecessors and typically run some firmware on the controller chip to implement functionality which used to be implemented in the OS drivers.
I believe so too. OpenPOWER and RISC-V show great promise, but I am not aware of any significant tape-outs for either (not to mention that you need consumer motherboards etc. that are compatible with the chipset).
The nice thing about OpenPOWER is that there are many distributions (openSUSE is one that I know for sure) that provide some support for ppc64le and thus the transition shouldn't be too painful from a port-the-distro perspective. RISC-V also will have similar support once it's merged into the mainline kernel and also once distributions have significant confidence to spin up some QEMU build images for RISC-V.
> I'm willing to pay more for processors that aren't thus afflicted. Is anyone at AMD, Intel et al listening?
I am inclined to believe that the reason is economic rather than them just being evil (that doesn't mean it's not a horrible misfeature that mistreats users; I just don't think the inclusion of ME on consumer hardware was an intentional decision). Intel ME is "required" for enterprises because sysadmins want to be able to control all of the machines they provide to their employees (you can have varied opinions on whether that's ethically acceptable, but that's the reason).
Given that consumer hardware generally comes from the enterprise world after it has dropped in value, I would not be surprised if Intel ME was left in consumer CPUs simply because it was cheaper than removing it. There's also the (weaker) argument that an enterprise should be able to use Intel ME on a BYO-device system, but that strikes me as unethical.
You might be willing to pay extra for Intel ME-less CPUs, but have you seen what the bill is for a full tape-out? There needs to be significant market demand for something like that.
Of course, if you could do that you could probably compromise browser cookies anyway.
I'm not sure we're discussing the same threat model here. If you're worried about long-term compromise then that race window is a much smaller concern than the fact that having a TOTP code makes it so that an attacker can't just keylog you and get the password at a later time.
Agreeing on threat models is the first step in any discussion about security. Does your threat model include being so badly owned that a keylogger on your machine can exfiltrate data so quickly that someone can replay your login session? Is that a reasonable threat model? Is it helpful to require that to be solved or otherwise not be considered good enough?
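For reference, the TOTP codes under discussion are a thin wrapper over HMAC (RFC 6238); this from-scratch sketch shows where the ~30-second replay window comes from:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, step=30, digits=6):
    # The code only changes when the integer counter t // step changes,
    # so a captured code is replayable within its ~30-second window -
    # exactly the race discussed above.
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC 6238 test vector: SHA-1, secret "12345678901234567890", T=59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # → 94287082
```

The window is deliberately short: a keylogged static password is good forever, while a stolen TOTP code dies within a step, which is the asymmetry the parent comment is pointing at.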
In your personal life just leave your microphoned laptops/phones in a box in the room next door. Two birds, one stone: less time spent behind a screen unless you need it, and your tinfoil-hat friends feel safer!
Do TCP timings and retransmissions count as difference in outputs?
But even a sysadmin at a Fortune 500 company is in the dark about all that this second CPU can and can't do.
The sysadmin might not know how it works, but they do know they can control machines remotely using their Intel branded management system (or other rebranded variety). Just because they don't know how bad it is doesn't mean that's not the motivation for it.
IPMI is a similar deal. Modern servers have a secondary computer embedded in the motherboard (which have been historically _very_ insecure) because it's useful for managing servers. Intel AMT is the work-laptop version of that technology, and you can bet that most enterprises use it.
> if it was economical they would offer you to pay more for full control for it.
But they do. The entire reason enterprise deployments of large numbers of work laptops/desktops are so expensive is that you have to pay extra for the management system that comes with them. Just because they don't remove the "backdoor" in their consumer lines doesn't mean they won't charge you through the nose to be able to administer the damn thing.
I am very anti-ME and wish that all firmware were free software, but arguing that ME is present in consumer CPUs for anything other than economic reasons doesn't sound right to me. The technology was developed because the developers were not aware of how unethical their actions were, and that's where the core of this problem lies.
Intel just decided to clump it all together. And it doesn't even fully address the two main corporate requests.
from their site:
"For years, coreboot has been struggling against Intel. Intel has been shown to be extremely uncooperative in general. Many coreboot developers, and companies, have tried to get Intel to cooperate; namely, releasing source code for the firmware components. Even Google, which sells millions of chromebooks (coreboot preinstalled) have been unable to persuade them.
...
Basically, all Intel hardware from year 2010 and beyond will never be supported by libreboot...."
RISC-V is a free and open instruction set architecture (ISA). People can go ahead and build open-source implementations, closed-source implementations, licensed implementations. This is very different than ARM, where you can only buy implementations from ARM, or if you happen to be one of a handful of selected companies with an ARM architectural license (which costs $$$$$), you can build your own implementation, but they still have to meet certain specifications as dictated by ARM. People can freely implement RISC-V processors, extend them, and play around with it. We think RISC-V has a big potential to unleash innovation. As a matter of fact, we believe this is the prerequisite.
SiFive has open-sourced the RTL that went into the FE310. We think this is a big deal, because other SoCs don't open-source their RTL.