This is the sort of absolutism that is so pointless.
At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.
The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.
[0] https://arstechnica.com/information-technology/2021/01/hacke...
-cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
"You" probably can. I can too. That's not the point.
What happens when the code has been worked on by other people? What happens after a few dozen pull requests are merged? What happens when it's ported to other platforms with different endian-ness or pointer sizes or hacked in a late night death march session to fix some bug or add some feature that has to ship tomorrow? What happens when someone accidentally deletes some braces with an editor's refactor feature, turning a "for { foo(); bar(); baz(); }" into a "for foo(); bar(); baz();"?
That's how bugs creep in, and the nice thing about safe languages is that the bugs that creep in are either caught by the compiler or result in a clean failure at runtime instead of exploitable undefined behavior.
Speed is no longer a good argument. Rust is within a few percent of C's performance if you code with an eye to efficiency, and if you really need something to be as high-performance as possible, code just that one thing in C (or ASM) and the rest in Rust. You can also use unsafe to squeeze out performance if you must, sparingly.
Oh and "but it has unsafe!" is also a non-argument. The point of unsafe is that you can trivially search a code base and audit every use of it. Of course it's easy to search for unsafe code in C and C++ too... because all of it is!
If we wrote most things and especially things like parsers and network protocols in Rust, Go, Swift, or some other safe language we'd get rid of a ton of low-hanging fruit in the form of memory and logic error attack vectors.
I'm beginning to worry that every time Rust is mentioned as a solution for every memory-unsafe operation we're moving towards an irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.
Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.
"only by a nation-state"
This ignores the possibility that the company selling the solution could itself easily defeat the solution.
Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".
Further, anyone could potentially hire them to assist. What is to stop this if secrecy is preserved?
We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.
Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.
The comment on that Ars page is more realistic than the article.
Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.
What's with the hyping of Rust as the Holy Grail, the solution to everything short of P=NP and the Halting Problem?
Most security bugs/holes have been related to buffer [over|under]flows. Statistically speaking, it makes sense to use a language that eliminates those bugs by the mere virtue of the program compiling. Do you disagree with that?
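For what it's worth, here is a minimal sketch of that in safe Rust (the function name and buffer size are invented for illustration): copying attacker-controlled bytes into a fixed-size buffer. The oversized case has to be handled explicitly, or the program panics; there is no code path that silently writes past the buffer.

    // Classic overflow scenario, written in safe Rust.
    fn store_name(input: &[u8]) -> Option<[u8; 16]> {
        let mut buf = [0u8; 16];
        if input.len() > buf.len() {
            // Reject the oversized input; there is no strcpy-style path
            // that quietly clobbers whatever sits after `buf`.
            return None;
        }
        buf[..input.len()].copy_from_slice(input);
        Some(buf)
    }

    fn main() {
        assert!(store_name(b"alice").is_some());
        assert!(store_name(&[0u8; 64]).is_none()); // rejected, not overflowed
    }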
It can however make it extremely difficult to exploit and it can make such use cases very esoteric (and easier to implement correctly).
What I can say is that parsing untrusted data in C is very risky. I can't say it is more risky than phishing for you, or more risky than anything else. I lack the context to do so.
That said, a really easy solution might be to just not do that. Just like... don't parse untrusted input in C. If that's hard for you, so be it, again I lack context. But that's my general advice - don't do it.
I also said "way, way less" not "not at all". I still think about memory safety in our Rust programs, I just don't allocate time to address it (today) specifically.
Suppose you have a secret that is RSA-encrypted. Cracking it might take three hundred trillion years, according to Wikipedia, with the kind of computers we have now. Obviously the secret would have lost its value by then, and the resources required to crack it would be worth more than the secret itself. Even with quantum computing, we are still looking at 20+ years, which is still enough for most secrets: you have plenty of time to change it, or it will have lost its value by then. So we say that's secure enough.
For example, in garbage collected languages the programmer does not need to think about memory management all the time, and therefore they can think more about security issues. Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand. And this can be problematic even if Rust solves every security bug in the class of (for instance) buffer overflows.
If you want secure, better use a suitable GC'ed language. If you want fast and reasonably secure, then you could use Rust.
"Don't use Rust because it is GC'd" is a take that I think basically nobody working on memory safety (either as a platform concern or as a general software engineering concern) would agree with.
I don't disagree with the premise of your post, which is that time spent on X takes away from time spent on security. I'll just say that I have not had the experience, as a professional rust engineer for a few years now, that Rust slows me down at all compared to GC'd languages. Not even a little.
In fact, I regret not choosing Rust for more of our product, because the productivity benefits are massive. Our rust code is radically more stable, better instrumented, better tested, easier to work with, etc.
https://dropbox.tech/infrastructure/extending-magic-pocket-i...
There are likely many other examples of, say, Java not having memory safety issues. Java makes very similar guarantees to Rust, so we can extrapolate, using common sense, that the findings roughly translate.
Common sense is a really powerful tool for these sorts of conversations. "Proof" and "evidence" are complex things, and yet the world goes on with assumptions that turn out to hold quite well.
But these tricks have the same root: What if we used all this research academics have been writing about for decades, improvements to the State of the Art, ideas which exist in toy languages nobody uses -- but we actually industrialise them so we can use the resulting language for Firefox and Linux not just get a paper into a prestigious journal or conference?
If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.
If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.
Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.
Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But, is this all just stupid hype for nothing? Er no. In 25 years everybody will understand what an "Internet streaming radio station" would be, they just aren't using RealAudio, the actual technology they use might be MPEG audio layer III aka MP3 (which exists in 1995 but is little known) or it might be something else, they do not care.
1. Rust also has other safety features that may be relevant to your interests. It is Data Race Free. If your existing safe-but-slow language offers concurrency (and it might not) it almost certainly just tells you that all bets are off if you have a Data Race, which means complicated concurrent programs exhibit mysterious hard-to-debug issues -- and that puts you off choosing concurrency unless it's a need-to-have for a project. But with Data Race Freedom this doesn't happen. Your concurrent Rust programs just have normal bugs that don't hurt your brain when you think about them, so you feel free to pick "concurrency" as a feature any time it helps.
2. The big surface area of iMessage is partly driven by Parsing Untrusted File Formats. You could decide to rewrite everything in Rust, or, more plausibly, Swift. But this is the exact problem WUFFS is intended to solve.
WUFFS is narrowly targeted at explaining safely how to parse Untrusted File Formats. It makes Rust look positively care free. You say this byte from the format is an 8-bit unsigned integer? OK. And you want to add it to this other byte that's an 8-bit unsigned integer? You need to sit down and patiently explain to WUFFS whether you understand the result should be a 16-bit unsigned integer, or whether you mean for this to wrap around modulo 256, or if you actually are promising that the sum is never greater than 255.
WUFFS isn't in the same "market" as Rust, its "Hello, world." program doesn't even print Hello, World. Because it can't. Why would parsing an Untrusted File Format ever do that? It shouldn't, so WUFFS can't. That's the philosophy iMessage or similar apps need for this problem. NSO up against WUFFS instead of whatever an intern cooked up in C last week to parse the latest "must have" format would be a very different story.
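To make that arithmetic point concrete without WUFFS syntax, here is roughly the same set of forced choices expressed in Rust (an analogy only, with made-up values):

    fn main() {
        let a: u8 = 200;
        let b: u8 = 100;

        // Choice 1: widen first, so the sum is a 16-bit value (300).
        let widened: u16 = u16::from(a) + u16::from(b);

        // Choice 2: wrap modulo 256 on purpose (300 mod 256 = 44).
        let wrapped: u8 = a.wrapping_add(b);

        // Choice 3: admit the sum might not fit; overflow comes back as None.
        let checked: Option<u8> = a.checked_add(b);

        println!("{} {} {:?}", widened, wrapped, checked);
    }

A plain `a + b` on two u8s would panic in a debug build and wrap in release, which is exactly the kind of ambiguity WUFFS refuses to let you leave unstated.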
It's 26 years after Java was released. Java has largely been the main competitor to C++. I don't see C++ going away nor do I see C going away. And it's almost always a mistake to lump C and C++ developers together. There is rarely an intersection between the two.
I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.
I don’t mean this in a very critical spirit, though.
Communication is really hard - especially in a large setting where not everyone reads you in the same context, and not everyone means well.
On balance, your post was valuable to me!
Edit: apparently elaboration is in order. In mechanical engineering one deals with smooth functions. A small error results in a small propensity for failure. Software meanwhile is discrete, so a small error can result in a disproportionately large failure. Indeed getting a thousandth of a percent of a program wrong could cause total failure. No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong. In software the margin of error is literally undefined behavior.
I'm glad the post was of value to you. The talk is really good and I think more people should read it.
Here's the first Microsoft one: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...
And Chrome: https://www.zdnet.com/article/chrome-70-of-all-security-bugs...
A Java program can't write over the return address on the stack.
On the other hand, you could choose to think about communications in an analogous way to your code, both being subject to attack by bad actors trying to subvert your good intentions.
So the argument could be made that removing attack surface from communication is analogous to hardening your code.
I also come from a coding background (albeit a long time ago) and, with the help of some well-meaning bosses, over time I eventually came to realize that my messages could gain more influence by reducing unnecessary attack surface. - Doesn't mean I always get it right, even now - but I am aware and generally try hard to do just that.
I'm not even sure that's true. I do agree with you that the argument that you need to hire other people is more convincing, but I'd wager that no single human on the planet can actually write a vuln-free parser of any complexity in C on their first attempt - even if handed the best tools that the model checking community has to offer.
Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.
Not mine. I have no plans to purchase a security key from Google. I have no threat model.
Nothing in the comment you replied to mentioned "trust" but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it. Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.
1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...
2.
https://support.google.com/googlenest/thread/14123369/what-i...
https://www.inc.com/jason-aten/google-is-absolutely-listenin...
https://www.consumerwatchdog.org/blog/people-dont-trust-goog...
https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...
https://www.theguardian.com/technology/2020/jan/03/google-ex...
https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...
But not too long ago, before SaaS, social media, etc, displaced phpBB, WordPress, and other open source platforms, things like SQL injection reigned supreme even in the reported data. Back then CVEs more closely represented the state of deployed, forward-facing software. But now the bulk of this software is proprietary, bespoke, and opaque--literally and to vulnerability data collection and analysis.
How many of the large state-sponsored penetrations (i.e. the ones we're most likely to hear about) used buffer overflows? Some, like Stuxnet, but they're considered exceptionally complex; and even in Stuxnet buffer overflows were just one of several different classes of exploits chained together.
Bad attackers are usually pursuing sensitive, confidential data. Access to most data is protected by often poorly written logic in otherwise memory-safe languages.
(I realize that racing threads can cause logic-based security issues. I've never seen a traditional memory exploit from racing goroutines though.)
Perhaps not with building properties, but very small errors can cause catastrophic failure.
One of the most famous ones would be the Hyatt Regency collapse, where a contractor accidentally doubled the load on a walkway because he used two shorter beams attached to the top and bottom of a slab, rather than a longer beam that passed through it.
https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...
In electrical engineering, it's very common to have ICs that function as a microcontroller at 5.5V, and an egg cooker at 5.6V.
Microsoft lost hundreds of millions of dollars repairing the original Xbox 360 because the solder on the GPU cracked under thermal stress.
It's definitely not to the same extreme as software, but tiny errors do have catastrophic consequences in physical systems too.
I have another neat trick to avoid races. Just write single threaded programs. Whenever you think you need another thread, you either don't need it, or you need another program.
Can you though? Where/how are you deploying your Rust executables that isn't relying deeply on OS code written in "wild" "memory unsafe languages"?
I mean, I _guess_ it'd be possible to write everything from the NIC firmware all the way through your network drivers and OS to ensure no untrusted input gets parsed before it hits your Rust code, but I doubt anyone except possibly niche academic projects or NSA/MOSSAD devs have ever done that...
Don't users / social engineering make up the actual majority of real-world vulnerabilities, and pose the most prevalent extant threat in the entire software ecosystem?
Apparently in Golang, you can achieve memory unsafety through data races: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m... (though I'm not sure if a workaround has been added to prevent memory unsafety).
Ten years is about the time since C++ 11. I may be wrong, but I do not regret my estimate.
There are major differences in designing bridges and in crafting code. So many, in fact it is difficult to even know where to start. But with that proviso, I think the concept of safety versus the concept of security is one that so many people conflate. We design bridges to be safe against the elements. Sure, there are 1000 year storms but we know what we're designing for and it is fundamentally an economic activity. We design these things to fail at some regularity because to do otherwise would require an over-investment of resources.
Security isn't like safety because the attack scales up with the value of compromising the target. For example, when someone starts a new social network and hashes passwords the strength of their algorithm may be just fine, but once they have millions of users it may become worthwhile for attackers to invest in rainbow tables or other means to thwart their salted hash.
Security is an arms race. That's why we're having so much trouble securing these systems. A flood doesn't care how strong your bridge is, or where it is most vulnerable.
If you have a race which definitely only touches some simple value like an int and nothing more complicated then Go may be able to promise your problem isn't more widespread - that value is ruined, you can't trust that it makes any sense (now, in the future, or previously), but everything else remains on the up-and-up. However, the moment something complicated is touched by a race, you lose, your program has no defined meaning whatsoever.
It really depends on the target. If you’re attacking a website, then sure, you’re more likely to find vulnerability classes like XSS that can exist in memory-safe code. When you’re talking about client-side exploits like the ones used by NSO Group, though, almost all of them use memory corruption vulnerabilities of some sort. (That doesn’t only include buffer overflows; use-after-free vulnerabilities seem to be the most common ones these days.)
The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.
Everybody defaults to a small number of security/identity providers because running the system is so stupidly painful. Hand a YubiKey to your CEO and their secretary. Make all access to corporate information require a YubiKey. They won't last a week.
We don't need better crypto. Crypto is good enough. What we need is better integration of crypto.
I've had an argument here about SMS for 2FA... Someone said that SMS for 2FA is broken because some companies misuse it for 1FA (e.g. for password reset)... but in essence, a simple SMS verification solves 99.9% of issues with e.g. password leaks and password reuse.
No security solution is perfect, but using a solution that works 99% of the time is still better than no security at all (or just one factor).
IT safety = construction safety. What kind of cracks/bumps does your bridge/building have, can it handle an increase in car volume over time, do lots of new appliances put extra load on the foundation, etc. IT safety is very similar in that way.
IT security = physical infrastructure security. Is your construction safe from active malicious attacks/vandalism? Generally we give up on vandalism from a physical security perspective in cities - spray paint tagging is pretty much everywhere. Similarly, crime is generally a problem that's not solvable & we try to manage. There's also large scale terrorist attacks that can & do happen from time to time.
There are of course many nuanced differences because no analogy is perfect, but I think the main tangible difference is that one is in the physical space while the other is in the virtual space. Virtual space doesn't operate the same way because the limits are different. Attackers can easily maintain anonymity, attackers can replicate an attack easily without additional effort/cost on their part, attackers can purchase "blueprints" for an attack that are basically the same thing as the attack itself, attacks can be carried out at a distance, & there are many strong financial motives for carrying out the attack. The financial motive is particularly important because it funds the ever-growing arms race between offence & defense. In the physical space this kind of race is only visible in nation states whereas in the virtual space both nation states & private actors participate in this race.
Similarly, that's why IT development is a bit different from construction. Changing a blueprint in virtual space is nearly identical to changing the actual "building" itself & the cost is several orders of magnitude lower than it would be in physical space. Larger software projects are cheaper because we can build reusable components that have tests that ensure certain behaviors of the code & then we rerun them in various environments to make sure our assumptions still hold. We can also more easily simulate behavior in the real world before we actually ship to production. In the physical space you have to do that testing upfront to qualify a part. Then if you need a new part, you're sharing less of the design whereas in virtual space you can share largely the same design (or even the exact same design) across very different environments. & there's no simulation - you build & patch, but you generally don't change your foundation once you've built half the building.
But attackers have significantly less control over that layer. This is quite on topic with regards to security nihilism - my parser code being memory safe means that the code that's directly interfacing with attacker input is memory safe. Is the allocator under the hood memory safe? Nope, same with various other components - like my TCP stack. But again, attackers have a lot less control over that part of the stack, so while unfortunate, it's not my main concern.
I do hope to, in the future, leverage a much much more security optimized stack. I'd dive into details on how I intend to do that, but I think it's out of scope for this conversation.
Beyond that, I've already addressed phishing at our company, it just didn't seem worth pointing out.
Well, I'm the CEO lol so we have an advantage there.
> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security.
Totally, this is a huge issue to me. I strongly believe that we need to start getting TPMs and hardware tokens into everyone's hands, for free - public schools should be required to give it to students when they tell them to turn in homework via some website, government organizations/ anyone who's FEDRAMP should have it mandated, etc. It's far too expensive today, totally agreed.
edit: Wait, per month? No no.
> We don't need better crypto.
FWIW the kicker with yubikeys isn't really anything with regards to cryptography, it's the fact that you can't extract the seed and that the FIDO2 protocols are highly resistant to phishing.
But what does this have to do with the FIDO authenticator?
At first I thought you said $100 per user, and I figured, wow, you are buying them all two Yubikeys, that's very generous. And then I realised you wrote "per month".
None of this costs anything "per month per user". You're buying some third party service, they charge whatever they like, this is the same as the argument when people said we can't have HTTPS Everywhere because my SSL certificate cost $100. No, you paid $100 for it, but it costs almost nothing.
I built WebAuthn enrollment and authentication for a vanity site to learn how it works. No problem, no $100 per month per user fees, just phishing proof authentication in one step, nice.
The integration doesn't get any better than this. I guess, having watched a video today of people literally wrapping up stacks of cash to FedEx their money to scammers, I shouldn't underestimate how dumb people can be, but really, even if you struggle with TOTP, do not worry: WebAuthn is easier than that as a user.
I have to admire the practicality of the approach they've been taking.
Basically they have a lot of runtime checks enabled in debug mode, where you do the majority of your testing, that are then disabled in the release binary.
Additionally the approach they've taken to allocators means that you can use special allocators for testing that can perform even more checks, including leak detection.
I think it's a great idea and a really interesting approach but it's definitely not as rigorous as what Rust provides.
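Rust itself has a small version of that debug/release split, which is a handy way to see the difference between checks that exist only where you do your testing and checks that ship. A toy sketch (the function and the length limit are invented):

    fn read_len(header: &[u8]) -> usize {
        // Always on, in every build: indexing is bounds-checked, so an
        // empty header panics instead of reading out of bounds.
        let len = header[0] as usize;

        // Debug/test builds only: compiled out entirely with --release,
        // i.e. the kind of check that exists where you test but not in
        // the shipped binary.
        debug_assert!(len <= 64, "suspicious length field");

        len
    }

    fn main() {
        println!("{}", read_len(&[12u8, 0, 0]));
    }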
If your program loses track of which file handles are open, which database transactions are committed, or which network sockets are connected, GC does not help you at all with those resources. When you are low on heap, the system automatically looks for some garbage to get rid of; but when you are low on network sockets, the best it can do is hope that cleaning up garbage happens to disconnect some of them for you.
Rust's lifetime tracking doesn't care why we are tracking the lifetime of each object. Maybe it just uses heap memory, but maybe it's a database transaction or a network socket. Either way though, at lifetime expiry it gets dropped, and that's where the resource gets cleaned up.
There are objects where that isn't good enough, but the vast majority of cases, and far more than under a GC, are solved by Rust's Drop trait.
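A tiny sketch of that, with an invented lock-file type standing in for "a resource that isn't heap memory":

    use std::fs::File;
    use std::io::Write;
    use std::path::PathBuf;

    // A stand-in for a non-memory resource: a lock file on disk.
    struct LockFile {
        file: File,
        path: PathBuf,
    }

    impl Drop for LockFile {
        fn drop(&mut self) {
            // Cleanup the library author chose: remove the lock file when
            // the value's lifetime ends (scope exit, early return, unwind).
            let _ = std::fs::remove_file(&self.path);
        }
    }

    fn main() -> std::io::Result<()> {
        {
            let path = PathBuf::from("service.lock");
            let mut lock = LockFile { file: File::create(&path)?, path };
            writeln!(lock.file, "pid={}", std::process::id())?;
            // ... hold the lock while working ...
        } // dropped here: the OS handle is closed and the file is removed
        Ok(())
    }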
It reminds me a little of some of the free-wheeling nuclear physicists in the Manhattan Project - probably some of the smartest people on the planet - being hubristically lax with safety: https://en.wikipedia.org/wiki/Demon_core#Second_incident
>[...] The experimenter needed to maintain a slight separation between the reflector halves in order to stay below criticality. The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion.
>Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".
>On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.
The user doesn't reason correctly that the bank would send them this legitimate SMS 2FA message because a scammer is now logging into their account, they assume it's because this is the real bank site they've reached via the phishing email, and therefore their concern that it seemed maybe fake was unfounded.
Could you say why Java is not susceptible to ROP?
The safety vs security distinction made above is fundamental. Developers are faced with solving an entire class of problems that is barely addressed by the rest of the engineering disciplines.
How do you imagine this would work?
The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.
I think you're imagining a lot of moving parts to the "solution" that don't exist.
I think some of the vulnerabilities have been found in image file format or PDF parsing libraries. These are huge codebases that you can't just rewrite in another language.
At the same time, Apple is investing huge amounts of resources into making their (and everyone elses) code more secure. Xcode/clang includes a static analyzer that catches a lot of errors in unsafe languages, and they include a lot of "sanitizers" that try to catch problems like data races etc.
And finally, they introduced a new, much safer programming language that prevents a lot of common errors, and as far as I can tell they are taking a lot of inspiration from Rust.
So it's not like Apple isn't trying to improve things.
Put another way: Anything you could do in the malloc/free model that Zig uses right now is something you could do in C++, or C for that matter. Maybe there's some super-hardened malloc design yet to be found that achieves memory safety in practice for C++. But we've been looking for decades and haven't found such a thing--except for one family of techniques broadly known as garbage collection (which, IMO, should be on the table for systems programming; Chromium did it as part of the Oilpan project and it works well there).
There is always a temptation to think "mitigations will eliminate bugs this time around"! But, frankly, at this point I feel that pushing mitigations as a viable alternative to memory safety for new code is dangerous (as opposed to pushing mitigations for existing code, which is very valuable work). We've been developing mitigations for 40 years and they have not eliminated the vulnerabilities. There's little reason to think that if we just try harder we will succeed.
[1]: https://chromium.googlesource.com/chromium/src/+/HEAD/base/a...
And how do I enroll all my employees into GitHub/GitLab?
And how do I recover when a YubiKey gets lost?
And how do I ...
Sure, I can do YubiKeys for myself with some amount of pain and a reasonable amount of money.
Once I start rolling secure access out to everybody in the company, suddenly it sucks. And someone spends all their time doing internal customer support for all the edge cases that nobody ever thinks about. This is fine if I have 10,000 employees and a huge IT staff--this is not so fine if I've got a couple dozen employees and no real IT staff.
That's what people like okta and auth0 (now bought by okta) charge so bloody much for. And why everybody basically defaults to Microsoft as an Identity Provider. etc.
Side note: Yes, I do hand YubiKeys out as trios--main use, backup use (you lost or destroyed your main one), and emergency use (oops--something is really wrong and the other two aren't working). And a non-trivial amount of services won't allow you to enroll multiple Yubikeys on the same account.
Remotely, anonymously, at virtually no risk to themselves.
It appears that if one uses it, one becomes evangelized by it, spreads the word "Praise Rust!", and so forth.
Anything so evangelized is met with strong skepticism here.
- safety is "the system cannot harm the environment"
- security is the inverse: "the environment cannot harm the system"
To me, your distinction has to do with the particular attacker model - both sides are security (under these definitions).
But there's the problem: Testing can't and won't cover all inputs that a malicious attacker will try [1]. Now you've tested all inputs you can think of with runtime checks enabled, you release your software without runtime checks, and you can be sure that some hacker will find a way to exploit a memory bug in your code.
[1] Except for very thorough fuzzing. Maybe. If you're lucky. But probably not.
> Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera)
I am not a mechanical engineer, but none of these examples look like smooth functions to me. I would expect that an unexpectedly high wind can cause your structure to move in a way that is not covered by your model at all, at which point it could just show a sudden non-linear response to the event.
This isn't unique to SMS, obviously, since the same attack scenario works against e.g. a TOTP from a phone app.
I would have liked to see a secure QNX as a mainstream OS. The microkernel is about 60Kb, and it offers a POSIX API. All drivers, file systems, networking, etc. are in user space. You pay about 10%-20% overhead for message passing. You get some of that back because you have good message passing available, instead of using HTTP for interprocess communication.
https://www.cvedetails.com/vulnerability-list/vendor_id-1902...
What if the bug is in std?
What if I use a bugged Vec::from_iter?
What if I use the bugged zip implementation from std?
You'll probably blame unsafe functions, but those unsafe functions were in std, written by the people who know Rust better than anyone.
Imagine what you and I could do writing unsafe.
Imagine trusting a 3rd party library...
I don't follow this carefully but even I have heard of at least one Rust project that when audited failed miserably. Not because of memory safety but because the programmer had made a bunch of rookie mistakes that senior programmers might be better at.
So in other words, Rust's hype is going to lead to a lot of rewrites and a lot of new software being written in Rust. And much of that software will have simple programming errors that you can do in any language. So we're going to need a whole new wave of audits.
- Safety is a PvE game[0] - your system gets "attacked" by non-sentient factors, like weather, animals, or people having an accident. The strength of an attack can be estimated as a distribution, and that estimate remains fixed (or at least changes predictably) over time. Floods don't get monotonically stronger over the years[1], animals don't grow razor-sharp titanium teeth, accidents don't become more peculiar over time.
- Security is a PvP game - your system is being attacked by other sentient beings, capable of both carefully planning and making decisions on the fly. The strength of the attack is unbounded, and roughly proportional to how much the attacker could gain from breaching your system. The set of attackers, the revenue[2] from an attack, the cost of performing it - all change over time, and you don't control it.
These two types of threats call for a completely different approach.
Most physical engineering systems are predominantly concerned with safety - with PvE scenarios. Most software systems connected to the Internet are primarily concerned with security - PvP. A PvE scenario in software engineering is ensuring your intern can't accidentally delete the production database, or that you don't get state-changing API requests indexed by web crawlers, or that an operator clicking the mouse wrong won't irradiate their patient.
--
[0] - PvE = "Player vs Environment"; PvP = "Player vs Player".
[1] - Climate change notwithstanding; see: estimate changing predictably.
[2] - Broadly understood. It may not be about the money, but it can be still easily approximated in dollars.
Not disagreeing, just mentioning.
Naturally, she was wearing gloves.
Seeing me, she grabbed the dustpan, threw away her sweepings, put the broom away, and was prepared to now serve me...
Still wearing the same gloves. Apparently magic gloves, for she was confused when I asked her to change them. She'd touched the broom, the dustpan, the floor, stuff in the dustpan, and the garbage. All within 20 seconds of me seeing her.
Proper procedure, understanding processes, are far more effective than a misused tool.
Is rust mildly better than some languages? Maybe.
But it is not a balm for all issues, and as you say, replacing very well maintained codebases might result in suboptimal outcomes.
That's true, but this is one of the cases where obtaining the last 5-10% of clarity might require 90% of the total effort.
Now whether one actually already has plucked all the low-hanging fruit in their own communication and if it's already good -- that's a separate discussion.
In software, you can spec the behavior of your program. And then it is possible to code to that exact spec. It is also possible, with encryption and stuff, to write specs that are safe even when malicious parties have control over certain parts.
This is not to say that writing such specs is easy, nor that coding to an exact spec is easy. Heck, I would even doubt that it is possible to do either thing consistently. My point is, the challenge is a lot harder. But the tools available are a lot stronger.
It's not a lost cause just because the challenge is so much harder.
Rust strives for safety; safety is its number 1 priority. Regarding the unsafe in std, please read the source code just to see how careful they are with the implementation. They only use unsafe for performance, and even unsafe Rust doesn't give you too much freedom, tbh.
The 3rd-party thing you are referring to sounds childish. Those libraries are not Rust's fault, tbh. If you don't trust them, don't use them. It is as simple as that.
So I think it's fine to tell people that a Rust program without unsafe will not have memory safety bugs. Exceptions to this statement do occur, but they are rare.
Edit: thinking about it, without a man in the middle the phisher can log in, but cannot make transfers (assuming the SMS shows what transfer is being authorized). Still bad enough.
Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.
What about a company whose businesss is personal data mining, who goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), collect data from an "activity tracker" (FitBit), a "smartphone OS", a search engine, e-mail service, web analytics, etc., etc. Need I go on. I could fill up an entire page with all the different Google acquisitions and ways they are mining people's data.
Why are security keys any different. 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising, but I suppose Google is different.
These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.
"That's all I know."
It is a different story in languages meant to run untrusted code of course.
For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...
The problem with your neat little model of the world is that it doesn't provide you with actionable predictions. Everything is a massive global conspiracy against you, nothing can be trusted, everybody is in on it, and so you can dismiss everything as just part of the charade, which feels good for a few moments, but still doesn't actually help you make any decisions at all.
> "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising
Right, I mean, if somebody really wanted to help provide working two factor authentication, they'd have to invent a device that offered phishing-proof authentication, didn't rely on sharing "secrets" that might be stolen by hackers, and all while not giving up any personal information and ensuring the user's identity can't be linked from one site to another. That device would look exactly like the FIDO Security Keys we're talking about... huh.
Actually no, if they weren't really part of a massive conspiracy against o8r3oFTZPE there would be one further thing, instead of only being from Google you could just buy these Security Keys from anybody and they'd work. Oh right.
So alas, even if on every previous transaction, Grannie was told, "Please read the SMS carefully and only fill out the code if the transfer is correctly described", she may not be suspicious when this time the bank (actually a phishing site) explains, "Due to a technical fault, the SMS may indicate that you are authorising a transfer. Please disregard that". Oops.
† e.g. some modern "refund" scams involve a step where the poor user believes they "slipped" and entered a larger number than they meant to, but actually the bad guys made the number bigger, the user is less suspicious of the rest of the transaction because they believe their agency set the wheels in motion.
Preventing buffer overruns requires language-level support.
If you worry about this attack you definitely should perform a reset after purchasing the device. This is labelled "reset" because it invalidates all your credentials: the credentials you enrolled depend on that secret, and so if you pick a random new secret those credentials obviously stop working. So it won't make sense to do this randomly while owning it, but doing it once when you buy the device can't hurt anything.
However, although I agree it would be possible for an adversary who makes keys to just remember all the factory set secrets inside them, I will note an important practical difference from RSA SecurID:
For SecurID those are actually shared secrets. It's morally equivalent to TOTP. To authenticate you, the other party needs to know the secret which is baked inside your SecurID. So RSA's rationale was that if they remember the secret they can help their customers (the corporation that ordered 5000 SecurID dongles, I still have some laying around) when they invariably manage to lose their copy of that secret.
Whereas for a FIDO token, that secret is not shared. Each key needs a secret, but nobody else has a legitimate purpose for knowing it. So whereas RSA were arguably just foolish for keeping these keys, they at least had a reason. If you found out that, say, Yubico kept the secrets, that's a red flag; they have no reason to do that except malevolence.
But personal attacks are not cool. Keep it civil, please.
While the first part of the sentence is mostly true (although the intention is to make safe Zig memory safe, and unsafe Rust isn't safe either), the second isn't. The goal isn't to use a safe language, but to use a language that best reduces certain problems. The claim that the best way to reduce memory safety problems is to completely eliminate all of them, regardless of type and regardless of cost, is neither established nor sensical. Zig completely eliminates overflows, and, in exchange for not paying the cost of completely eliminating use-after-free, makes detecting and correcting it, and other problems, easier.
For WebAuthn (and its predecessor U2F) that "non-trivial" amount seems to be precisely AWS. The specification tells them to allow multiple devices to be enrolled but they don't do it.
But this shouldn't be called “memory safety”.
Earlier you gave the example of Facebook harvesting people's phone numbers. That's not just data, that's information. But a Yubikey doesn't know your phone number, how much you weigh, where you live, what type of beer you drink... no information at all.
The genius thing about the FIDO Security Key design is figuring out how to make "Are you still you?" a question we can answer. Notice that it can't answer a question like "Who is this?". Your Yubikey has no idea that you're o8r3oFTZPE. But it does know it is still itself and it can prove that when prompted to do so.
And you might think, "Aha, but it can track me". Nope. It's a passive object unless activated, and it also doesn't have any coherent identity of its own, so sites can't even compare notes on who enrolled to discover that the same Yubikey was used. Your Yubikey can tell when it's being asked if it is still itself, but it needs a secret to do that and nobody else has the secret. All they can do is ask that narrow question, "Are you still you?".
Which of course is very narrowly the exact authentication problem we wanted to solve.
A Java program, by construction, cannot write to memory regions not allocated on the stack or pointed to by a field of an object constructed with "new". Runtime checks prevent ordinary sorts of problems and a careful memory model prevents fun with concurrency errors. There are interesting attacks against the Java Security Manager - but this is independent of memory safety.
We are on a thread about "a case against security nihilism".
1. Not all vulnerabilities are memory safety vulnerabilities. The idea that adopting memory safe languages will prevent all vulns is not only a strawman, but empirically incorrect since we've had memory safe languages for many decades.
2. It is the case that a tremendously large number of vulns are caused by memory safety errors and that transitioning away from memory-unsafe languages will be a large win for industry safety. 'unsafe' is a limitation of Rust, but compared to the monstrous gaping maw of eldritch horror that is C and C++, it is small potatoes.
3. You are going to struggle to write real programs without ever using third party code.
If the solution to the "problem" is giving increasingly more personal information to a tech company, that's not a great solution, IMO. Arguably, from the user's perspective, it's creating a new problem.
Most users are not going to purchase YubiKeys. It's not a matter of whether I use one, what I am concerned about is what other users are being coaxed into doing.
There are many problems with "authentication methods" but the one I'm referring to is giving escalating amounts of personal information to tech companies, even if it's under the guise "for the purpose of authentication" or argued to be a fair exchange for "free services". Obviously tech companies love "authenticating" users as it signals "real" ad targets.
The "tech" industry is riddled with conflicts of interest. That is a problem they are not even attempting to solve. Perhaps regulation is going to solve it for them.
Billions and billions of dollars. Large organizations like Microsoft and Google have published numbers on the proportion of vulns in their software that are caused by memory errors. As you can imagine, a lot of effort is spent within these institutions to try to mitigate this risk (world class fuzzing, static analysis, and pentesting) yet vulns continue to persist.
Rust is not the solution. Memory-safe languages are. It is just that there aren't many such languages that can compete with C++ when it comes to speed (Rust and Swift are the big ones) so Rust gets mentioned a lot to preempt the "but I gotta go fast" concerns.
The Mozilla case study is not a real world study. It simply looks at the types of bugs that existed and says "I promise these wouldn't have existed if we had used Rust". Would Rust have introduced new bugs? Would there be an additional cost to using Rust? We don't know and probably never will. What we care about is preventing real world damage. Does Rust prevent real world damage? We have no idea.
MSan has a nontrivial performance hit and is a problem to deploy on all code running a performance-critical service. Static analysis can find some issues, but any sound static analysis of a C++ program will rapidly report false positives out the wazoo. Whole-program static analysis (which you need to prevent false positives) is also a nightmare for C++ due to the single-translation-unit compilation model.
All of the big companies are spending a lot of time and money trying to make systems better with the existing legacy languages and this is necessary today because they have so much code and you can't just YOLO and run a converter tool to convert millions and millions of lines of code to Rust. But it is very clear that this does not just straight up prevent the issue completely like using a safe language.
What I'm saying is that truth is a matter of debate. We believe lots of things based on evidence much less rigorous than a formal proof in many cases - like most modern legal systems, which rely on various types of evidence, and then a jury that must form a consensus.
So saying "there is no evidence" is sort of missing the point. Safe Rust does not have memory safety issues, barring compiler bugs, therefor common sense as well as experience with other languages (Java, C#, etc), would show that that memory safety issues are likely to be far less common. Maybe that isn't the evidence that you're after, but I find that compelling.
To me, the question of "does rust improve upon memory safety relative to C/C++" is obvious to the point that it really doesn't require justification, but that's just me.
I could try to find more evidence, but I'm not sure what would convince you. There's people fuzzing rust code and finding far fewer relevant vulns - but you could find that that's not compelling, or whatever.
On any system, if you give an attacker physical access to the device, you are done. Just assume that. If your Yubikey lives in your wallet, or on your key chain, and you only activate it when you need it, it is highly unlikely that anyone is going to crack it.
As far as physical device access, my last employer maintained a 'garage' of laptops and phones for employees traveling to about a half dozen countries. If you were going there, you left your corporate laptop and phone in the US and took one of these 'travel' devices with you for your trip. Back home, those devices were never allowed to connect to the corporate network. When you handed them in, they were wiped and inspected, but IT assumed that they were still compromised.
Lastly, Yubikey, as a second factor, is supposed to be part of a layered defense. Basically forcing the attacker to hack both your password and your Yubikey.
It bugs me that people don't understand how important two factor auth is, and also how crazy weak SMS access codes are.
A language and a memory access model are no panacea. 10 years is like the day after tomorrow in many industries.
Sure it was, if you didn't want this problem you'd be fine with remaining anonymous and receiving only services that can be granted anonymously. I understand reading Hacker News doesn't require an account, and yet you've got one and are writing replies. So yes, you created the problem.
Now, Hacker News went with 1970s "password" authentication. Maybe you're good at memorising a separate long random password for each site, and so this doesn't really leak any information, it's just data. Lots of users seem to provide the names of pets, favourite sports teams, cultural icons; it's a bit of a mish-mash, but certainly information of a sort.
In contrast, even though you keep insisting otherwise, Security Keys don't give "escalating amounts of personal information to tech companies" but instead no information at all, just that useful answer to the question, "Are you still you?".
There are runtime checks around class structure that ensure that a field load cannot actually read some unexpected portion of memory.
There are runtime checks that ensure that you cannot read through a field on a deallocated object, even when using weakreference and therefore triggering a GC even while the program has access to that field.
There are runtime checks around array reads that ensure that you cannot access memory outside of the allocated bounds of the array.
I have no idea why "susceptible to something like ROP" is especially relevant here. ROP is not the same as "writing over the return address"; ROP is a technique you use to get around non-executable data sections, and it happens after you abuse some memory safety error to write over the return address (or otherwise control a jump). It means "constructing an exploit via repeated jumps to already existing code rather than jumping into code written by the attacker".
But just for the record, Java does have security monitoring of the call stack that can ensure that you cannot return to a function that isn't on the call stack so even if you could change the return target the runtime can still detect this.
The brilliant thing about RAII-style resource management is that library authors can define what happens at the end of an object's lifetime and the Rust compiler enforces the use of lifetimes.
I mean to be clear, modern C++ can be effectively as safe as rust is. It requires some discipline and code review, but I can construct a tool-chain and libraries that will tell me about memory violations just as well as rust will. Better even in some ways.
I think people don't realize just how much modern C++ has changed.
Regardless of intent, it seems very much in the spirit of trying to solve a complex problem by adding more complexity, a common theme I see in "tech".
There is nothing inherently wrong with the idea of "multi-factor authentication" (as I recall some customer-facing organisations were using physical tokens long before "Web 2.0") however in practice this concept is being (ab)used by web-based "tech" companies whose businesses rely on mining personal data. The fortuitous result for them being intake of more data/information relating to the lives of users, the obvious examples being email addresses and mobile phone numbers.
1. This is not an issue I came up with in a vacuum. It is shared by others. I once heard an "expert" interviewed on the subject of privacy describe exactly this issue.
And yet here's a thread in which you did exactly that.
No, I am responding to the above assertion that I have insisted security keys give escalating amounts of personal information to "tech" companies.
This is incorrect. Most users do not have physical security tokens. But "tech" companies promote authentication without using physical tokens: 2FA using a mobile number.
What I am "insisting" is that "two-factor authentication" as promoted by tech campanies ("give us your mobile number because ...") has resulted in giving increasing amounts of personal information to tech companies. It has been misused; Facebook and Twitter were both caught using phone numbers for advertising purposes. There was recently a massive leak of something like 550 million Facebook accounts, many including telephone numbers. How many of those numbers were submitted to Facebook under the belief they were needed for "authentication" and "security". I am also suggesting that this "multi-factor authentication" could potentially increase to more than two factors. Thus, users would be giving increasing amounts of personal information to "tech" companies "for the purposes of authentication". That creates additional risk and, as we have seen, the information has in fact been misused. This is not an idea I came up with; others have stated it publicly.
Concurrent Pascal or Singularity also fit the bill, with actual operating systems being written in it.
As for tooling, things like valgrind provide an excellent mechanism for ensuring that the program was memory safe, even in its "unsafe" areas or when calling into external libraries (something that Rust can't provide without similar tools anyway).
My broader point is that safety is more than just a compiler saying "ok you did it", though that certainly helps. I would trust well written safety focused C++ over Rust. On the other hand, I would trust randomly written Rust over C++. Rust is good for raising the lower end of the bar, but not really the top of it unless paired with a culture and ecosystem of safety focus around the language.
> This ignores the possibility that the company selling the solution could itself easily defeat the solution.
I'm sure you really are worried about how "Facebook are bad", and you feel like you need to insert that into many conversations about other things, but "Facebook are bad" is irrelevant here.
You made a bogus claim about Security Keys. These bogus claims help to validate people's feeling that they're helpless and, eh, they might as well put up with "Facebook are bad" because evidently there isn't anything they can really do about it.
So your problem is, which is more important, to take every opportunity to surface the message you care about "Facebook are bad" in contexts where it wasn't actually relevant, or to accept that hey, actually you're wrong about a lot of things, and some of those things actually reduce the threat from Facebook ? I can't help you make that choice.
He goes on to discuss the expansion of "trust boundaries".
Big Tech: Use our computers, please!
There isn't even specific language support necessary, it's on the library level.
Since we're in a thread about security this is a crucial difference. I'm sure Amy, Bob, Charlie and Deb were able to use the new version of Puppy Simulator successfully for hours without any sign of unsafety in the sanitizer. Good to go. Unfortunately, the program was unsafe and Evil Emma had no problem finding a way to attack it. Amy, Bob, Charlie and Deb had no reason to try naming a Puppy in Puppy Simulator with 256 NUL characters, so, they didn't, but Emma did and now she's got Administrator rights on your server. Oops.
In contrast safe Rust is actually safe. Not just "It was safe in my tests" but it's just safe.
Even though it might seem like this doesn't buy you anything when of course fundamental stuff must use unsafe somewhere, the safe/unsafe boundary does end up buying you something by clearly delineating responsibility.
For example, sometimes in the Rust community you will see developers saying they had to use unsafe because, alas, the stupid compiler won't optimise the safe version of their code properly. For example it has a stupid bounds check they don't need so they used "unsafe" to avoid that. But surprisingly often, another programmer looks at their "clever" use of unsafe, and actually they did need that bounds check they got rid of, their code is unsafe for some parameters.
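Concretely, the pattern looks something like this (functions invented for illustration; the point is what the `unsafe` block signals to a reviewer):

    // Safe version: the compiler keeps the bounds checks; a too-short
    // slice panics cleanly instead of reading out of bounds.
    fn sum_first_three(xs: &[u32]) -> u32 {
        xs[0] + xs[1] + xs[2]
    }

    // The "clever" version someone writes to skip the checks. The SAFETY
    // comment is a promise every caller must uphold; if any caller ever
    // passes fewer than three elements, this is undefined behavior, not a
    // panic. The `unsafe` block is exactly the line an auditor greps for.
    fn sum_first_three_unchecked(xs: &[u32]) -> u32 {
        // SAFETY: caller guarantees xs.len() >= 3.
        unsafe { *xs.get_unchecked(0) + *xs.get_unchecked(1) + *xs.get_unchecked(2) }
    }

    fn main() {
        let v = [1u32, 2, 3, 4];
        assert_eq!(sum_first_three(&v), 6);
        assert_eq!(sum_first_three_unchecked(&v), 6);
    }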
For example just like the C++ standard vector, Rust's Vec is a wasteful solution for a dozen integers or whatever, it does a heap allocation, it has all this logic for growing and shrinking - I don't need that for a dozen integers. There are at least two Rust "small vector" replacements. One of them makes liberal use of "unsafe" arguing that it is needed to go a little faster. The other is entirely safe. Guess which one has had numerous safety bugs.... right.
Over in the C++ world, if you do this sort of thing, the developer comes back saying duh, of course my function will cause mayhem if you give it unreasonable parameters, that was your fault - and maybe they update the documentation or maybe they don't bother. But in Rust we've got this nice clean line in the sand, that function is unsafe, if you can't do better label it "unsafe" so that it can't be called from safe code.
This discipline doesn't exist in C++. The spirit is willing but the flesh (well, the language syntax in this case) is too weak. Everything is always potentially unsafe and you are never more than one mistake from disaster.
And this is where the argument breaks down for me. The C++ vector class can be just as safe if people are disciplined. And as you even described, people in rust can write "unsafe" and do whatever they want anyway to introduce bugs.
The language doesn't really seem to matter at the end of the day from what you are telling me (and that's my main argument).
With the right template libraries (including many parts of the modern C++ STL) you can get the same warnings you can from Rust. One just makes you chant "unsafe" to get around it. But a code review should tell off any developer doing something unsafe in either language. C++ with only "safe" templates is just as "actually safe" as Rust is (except with a better recovery solution than panics!).