C. 30 years of buffer overflows.
Though one (perhaps nit-picky) point I'd like to make is that these dictators are not dumb. They are incredibly intelligent. They themselves are probably not hackers, but they understand people and power. They are going to do what they can to get what they want. We can't ignore the role they play in creating these problems, and we need to take it just as seriously as we would a technical security exploit.
> It’s the scale, stupid
This should 100% be the focus, not how truly admirable Apple's efforts to improve security are. Security nihilism is entirely about scale, and understanding your place in the digital pecking order. The only way to be 'secure' in that sense is to directly limit the amount of personal information that the surrounding world has on you: in most first-world countries, it's impossible to escape this. Insurance companies know your medical history before you even apply for their plan, your employer will eventually learn about 80% of your lifestyle, and the internet will slowly sap the rest of the details. In a world where copying is free, it's undeniable that digital security is a losing game.
Here's a thought: instead of addressing security nihilism in the consumer, why don't you highlight this issue in companies? There's currently no incentive to hack your phone unless it has valuable information that can't be found anywhere else: in which case, you have more of a logistics issue than a security one. Meanwhile, ransomware and social-engineering attacks are at an all-time high, yet our security researchers are taking their time to hash out exactly how mad we deserve to be at Apple for their exploit-of-the-week. If this is the kind of attitude the best-of-the-best have, it's no wonder we're the largest target for cyberattacks in the world.
I may be misunderstanding you, but this is privacy, not security. The two are not completely separate, but that's another issue.
Most dictators are not very intelligent. Just like Donald Trump is not very intelligent.
Cunning, with real social smarts, would be more apt. These guys really know how to play people off each other and manipulate, like, really well.
This is the purpose of governments; it is why we keep them around. There is no really defensible reason why the chemical, biological, radiological and nuclear industries are heavily regulated, but "cyber" isn't.
It's fair to criticize Apple. But you can't reasonably argue that they DGAF.
I think we all understand that the medium-term answer to this is replacing C with memory-safe languages; it turns out, this was the real Y2K problem. But there's no clear way for regulations to address that effectively; rest assured, the major vendors are all pushing forward with memory-safe software.
The only tractable way to deal with cyber security is to implement systems that are secure by default. That means working on hard problems in cryptography, hardware, and operating systems.
The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.
I would never think to spend a million dollars on securing my home network (including other non-dollar costs like inconveniencing myself). Let's suppose that spending $1M would force the US NSA to spend $10M to hack into my home network. The people making that decision aren't spending $10M of their own money; they're spending $10M of the government's money. The NSA doesn't care about $10M in the same way that I care about $1M.
As a result, securing yourself even against a dedicated attacker like Israel's NSO Group could cost way, way more than a simple budget analysis would imply. I'd have to make the costs of hacking me so high that someone at NSO would say "wait a minute, even we can't afford that!"
So, sure, "good enough" security is possible in principle, but I think it's fair to say "You probably can't afford good-enough security against state-level actors."
The core of the problem is complexity. Our modern computing stack can be broadly described as:
- Complexity to add features.
- Complexity to add performance.
- Complexity to solve problems with the features.
- Complexity to solve problems created from the performance complexity.
- Complexity added to solve the issues the previous complexity created.
And this has been iterating over, and over, and over... and over. The code gets more complex, so the processors have to be faster, which adds side channel issues, so the processors get more complex to solve that, as does the software, hurting performance, and around you go again.
At no point does anyone in the tech industry seem to step back and say, "Wait. What if we simplify instead?" Delete code. Delete features. I would rather have an iPhone without iMessage zero click remote exploits than one with animated cartoons based on me sticking my tongue out and waggling my eyebrows, to pick on a particularly complex feature.
I've made a habit of trying to run as much as I can on low power computers, simply to see how it works, and ideally help figure out the choke points. Chat has gotten comically absurd over the years, so I'll pick on it as an example of what seems, to me, to be needless complexity.
Decades ago, I could chat with other people via AIM, Yahoo, MSN, IRC, etc. Those clients were thin, light, and ran on a single core 486 without anything that I recall as being performance issues.
Today, Google Chat (having replaced Hangouts, which was its own bloated pig in some ways) struggles to keep up with typing on a quad core, 1.5GHz ARM system (Pi 4). It pulls down nearly 15MB of resources - or roughly 30% of a Windows 95 install - to chat with someone person to person, the same way AIM did decades ago. I'm more used to lagged typing in 2021 than I was in 1998.
Yes, it's got some new features, and... I'm sure someone could tell me what they are, but in terms of sending text back and forth to people across the internet, along with images, it's fundamentally doing the exact same thing that I did 20 years ago, just using massively more resources, which means there are massively more places for vulnerabilities, exploits, bugs, etc, to hide. Does it have to be that huge? No idea, I didn't write it. But it's larger and slower than Hangouts, to accomplish, as far as I'm concerned, the same things.
We can't just keep piling complexity on top of complexity forever and expect things to work out.
Now, if I wanted to do something like IRC, which is substantially unchanged from the 90s, I can use a lightweight native client that uses basically no CPU and almost no memory to accomplish this, on an old Pi3 that has an in-order CPU with no speculation, and can run a rather stripped down kernel, no browser, etc. That's going to be a lot harder to find bugs in than the modern bloated code that is most of modern computing.
But nobody gets promoted for stripping out code and making things smaller these days, it seems.
As long as the focus is on adding features, that require more performance, we're simply not going to get ahead of the security bugs. And, if everyone writing the code has decided that memojis are more important than securing iMessage against remote zero click exploits, well... OK. But the lives of journalists are the collateral damage of those decisions.
These days, I regularly find myself wondering why I bother with computers at all outside work. I'd free up a ton of "overhead maintenance time" I spend maintaining computers, and that's before I get into the fact that even with aggressive attempts to tamp down privacy invasions, I'm sure lots of my data is happily being aggregated for... whatever it is people do with that, send ads I block, I suppose.
Governments want access to spy on people. Apple wants to market and sell a “secure” mobile device.
In a way, NSO provides Apple with a perfect out. They can legally claim they are a secure platform and do not work with bad actors or foreign governments to “spy”.
Hear no evil, see no evil. NSO's ability to penetrate iOS gives powerful governments what they want, and in a way may keep “pressure” off Apple to provide official back door access.
I mean sure technical solutions are available and do help, but to only look at the technical side and ignore the original issue seems like a mistake.
(I’m reminded of Google’s responses to the Snowden leak.)
This is the sort of absolutism that is so pointless.
At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.
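To make that concrete, here's a minimal sketch (in Rust, assuming the `rand` crate's 0.8-style API) of why those questions matter: the same "we'll randomize it" decision is either fine or useless depending on the guess space and how many guesses the attacker actually gets.

    // Minimal sketch, assuming the `rand` crate. "Randomize this value" means
    // nothing until you state the guess space and the attacker's guess budget.
    use rand::RngCore;

    fn main() {
        let mut rng = rand::rngs::OsRng;

        // 32 bits of randomness: ~4.3 billion possibilities. Fine against an
        // attacker limited to a handful of rate-limited online guesses;
        // trivially found by one who can guess offline or without limits.
        let weak: u32 = rng.next_u32();

        // 128 bits from the OS CSPRNG: infeasible to guess even offline,
        // provided it's actually generated, stored, and sent as a secret.
        let mut strong = [0u8; 16];
        rng.fill_bytes(&mut strong);

        println!("weak token:   {:08x}", weak);
        println!("strong token: {:02x?}", strong);
    }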
The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
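For illustration, here's a toy, hypothetical length-prefixed message parser in Rust (the format and field names are made up): a hostile length field produces an Err, not an out-of-bounds read, because slice access is bounds-checked.

    // Toy parser for a hypothetical length-prefixed format: 1 byte kind,
    // 2-byte big-endian length, then the body. Malformed or hostile input
    // yields Err; the language won't let it read past the buffer.

    #[derive(Debug)]
    struct Message<'a> {
        kind: u8,
        body: &'a [u8],
    }

    fn parse(input: &[u8]) -> Result<Message<'_>, &'static str> {
        let (&kind, rest) = input.split_first().ok_or("empty input")?;
        let hi = *rest.first().ok_or("truncated length")?;
        let lo = *rest.get(1).ok_or("truncated length")?;
        let len = u16::from_be_bytes([hi, lo]) as usize;
        let body = rest.get(2..2 + len).ok_or("length exceeds input")?;
        Ok(Message { kind, body })
    }

    fn main() {
        // A lying length field (0xFFFF) is rejected instead of over-read.
        assert!(parse(&[0x01, 0xFF, 0xFF, 0x00]).is_err());
        println!("{:?}", parse(&[0x01, 0x00, 0x02, 0xAA, 0xBB]).unwrap());
    }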
I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.
[0] https://arstechnica.com/information-technology/2021/01/hacke...
Second of all, if you can't push the costs high enough, then it becomes time to limit the cash budget of state-level actors. Which is hardly without precedent.
For some reason you seem to only be looking at this as a technology problem, while at the core it is far more political. Sure, technology might help, but that's the raison d'être of technology.
If all iMessage allowed were ASCII text strings, do you think it would have nearly the same attack surface as it does now, allowing all the various things it supports (including, if I recall properly, some tap based patterns that end up on the watch)?
In a very real sense, complexity (which is what features are) is at odds with security. You increase the attack surface, and you increase the number of pieces you can put together into weird ways that were never intended, but still work and get the attacker something they want.
If there were some toggle to disable parsing everything but ASCII text and images in iMessage, I'd turn it on in a heartbeat.
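As a rough sketch of what that toggle could look like in the receive path (the setting name here is invented, not anything Apple ships): anything that isn't plain ASCII text never reaches the richer parsers at all.

    // Hypothetical "plain text only" toggle for an inbound message pipeline.
    // With it on, only printable ASCII (plus tab/newline) is accepted; anything
    // else is dropped before attachment/link/emoji parsers ever see it.

    struct Settings {
        ascii_only: bool, // invented name for the hypothetical toggle
    }

    fn accept_inbound(settings: &Settings, raw: &[u8]) -> Option<String> {
        if settings.ascii_only {
            let plain = raw
                .iter()
                .all(|&b| b == b'\n' || b == b'\t' || (0x20..=0x7E).contains(&b));
            if !plain {
                return None; // never handed to the complex parsers at all
            }
        }
        String::from_utf8(raw.to_vec()).ok()
    }

    fn main() {
        let s = Settings { ascii_only: true };
        assert_eq!(accept_inbound(&s, b"plain text only").as_deref(), Some("plain text only"));
        assert!(accept_inbound(&s, "🙂".as_bytes()).is_none()); // emoji rejected
    }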
No. We don't operate that way, and we don't want to.
But for us to not operate that way in cyberspace, we need crackers (to use the officially approved term) to be at least as likely to be caught (and prosecuted) as murderers are. That's a hard problem that we should be working on.
(And, yes, we need to work on the other problems as well.)
It's true that if you constrain the problems enough, ratcheting them down to approximately what we were doing with the Internet in 1994 when we were getting access to it from X.25 gateways, you can plausibly ship secure software --- with the engineering budgets of 2021 (we sure as shit couldn't do it in 1994). The problem is that there is no market to support those engineering budgets for the feature set we had in 1994.
It's fun to make fun of old people in ties asking (to us) stupid questions about technology in front of cameras, but at the end of the day, it's a crucial step in actually getting something done about all this.
For all we know it also included cryptographers and security researchers. Unfortunately, the list hasn't been published-- so we only know what the journalists who had access to it cared to look up.
Regulated Cybersecurity: Must include all mandatory government backdoors.
-cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
Apple could do more spying (excuse me, "telemetry") "as much as possible" in addition to NSO... because it would make the competitor's spying more expensive.
This could be a unilateral decision to be made by Apple without input from users, as usual.
Any commercial benefits to Apple due to the increased data collection would be purely incidental, of course.
Apple and NSO may have different ways of making money, but they both use (silent) data collection from computer users to help them.
Sure, in that kind of event, an org might be more concerned with flat out survival. But you never know if you'll be roadkill. And once that capability is developed, there is no telling how some state-level actors are connected to black markets and hackers who are happy to have more ransomware targets. Some states are hurting for cash.
I think it is wholly reasonable to work on both preventive and punitive approaches. For online crimes, jurisdictional issues are major hurdles for the punitive approach.
On the one hand, sure, make it too expensive to do this. On the other hand, how much more expensive is too expensive? When the first SHA1 collision attack was found, it was considered a problem, and SHA1 was declared unsuitable for security purposes, but now it's cheap.
This wouldn't do anything to stop companies who base themselves in places like Russia. It wouldn't even really do anything to stop those who base themselves in the Seychelles. But, you want to base yourself in a real bona-fide country, like the USA or France or Israel or Singapore? Then you should have to play by some rules.
That's just about all I use for messages. Some images, but it's not critical. And if I had the option to turn off "all advanced gizamawhatchit parsing" in iMessage to reduce the attack surface, I absolutely would - and you can bet any journalist in a hostile country would like the option as well.
The whole "zero click" thing is the concerning bit - if I can remotely compromise someone's phone with just their phone # or email address, well... that's kind of a big deal, and this is hardly the first time it's been the case for iMessage.
If software complexity is at a point that it's considered unreasonable to have a secure device, then it's long past time to put an icepick through the phones and simply stop using them. Though, as I noted above, I feel this way about most of modern computing these days.
The issue here is that we aren't saying anything about the real problem. You can radically scope software down. That will indeed make it more secure. But you will stop making money. When you stop making money, you will stop being able to afford the developers who can write secure software (the track record on messaging software written by amateurs for love is not great). Now we're back where we started, just with shittier software.
It's a hard problem. You aren't wrong to observe it; it's just that you haven't gotten us an inch closer to a solution.
Microsoft figured it out. Apple can do it, too.
"You" probably can. I can too. That's not the point.
What happens when the code has been worked on by other people? What happens after a few dozen pull requests are merged? What happens when it's ported to other platforms with different endian-ness or pointer sizes or hacked in a late night death march session to fix some bug or add some feature that has to ship tomorrow? What happens when someone accidentally deletes some braces with an editor's refactor feature, turning a "for { foo(); bar(); baz(); }" into a "for foo(); bar(); baz();"?
That's how bugs creep in, and the nice thing about safe languages is that the bugs that creep in are either caught by the compiler or result in a clean failure at runtime instead of exploitable undefined behavior.
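A tiny illustration of that "clean failure" point, in Rust: the same off-by-one that silently corrupts adjacent memory in C becomes a deterministic, observable panic.

    // The off-by-one below is still a bug, but it's a bounds-check panic we
    // can see and test for, not undefined behavior quietly stomping whatever
    // lives next to the buffer.

    fn last_byte(buf: &[u8]) -> Option<u8> {
        // The safe version: an explicit Option the caller has to handle.
        buf.last().copied()
    }

    fn main() {
        let buf: Vec<u8> = vec![1, 2, 3];
        let idx = buf.len(); // one past the end

        assert_eq!(last_byte(&buf), Some(3));
        assert_eq!(last_byte(&[]), None);

        // In C, `buf[idx]` here is undefined behavior; in Rust it panics,
        // which we can demonstrate by catching the unwind.
        let out_of_bounds = std::panic::catch_unwind(move || buf[idx]);
        assert!(out_of_bounds.is_err());
    }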
Speed is no longer a good argument. Rust is within a few percent of C performance if you code with an eye to efficiency, and if you really need something to be as high-performance as possible, code just that one thing in C (or ASM) and code the rest in Rust. You can also use unsafe to squeeze out performance if you must, sparingly.
Oh and "but it has unsafe!" is also a non-argument. The point of unsafe is that you can trivially search a code base and audit every use of it. Of course it's easy to search for unsafe code in C and C++ too... because all of it is!
If we wrote most things and especially things like parsers and network protocols in Rust, Go, Swift, or some other safe language we'd get rid of a ton of low-hanging fruit in the form of memory and logic error attack vectors.
However, the article falls right into the next failed model of considering everything in terms of relative security. We should make things “better”, we should make things “harder”, but those terms mean very little. 1% better is “better”. Making a broken hashing function take 2x as long to break makes things “harder”, but it does not make things more secure since it is already hopelessly inadequate. The problem with considering things only in relative terms to existing solutions is that it ignores defining the problem, and more importantly, it does not tell you if you solved your problem.
The correct model is the one used by engineering disciplines: specifying objective, quantifiable standards for what is adequate and then verifying the solution passes those standards. Because if you do not define what is adequate, how do you know whether you have even achieved the bare minimum of what you need, and how far your solution may be from it?
For instance, consider the same NSO case as the article. Did Apple do an adequate job, what is an adequate job, and how far away are they?
Well, let us assume that the average duration of surveillance for the 50,000 phones was 1 year per phone. Now what is a good level of protection against that kind of surveillance? I think a reasonable standard is making it so the phone is not the easiest way to surveil a person for that length of time, it is cheaper to do it the old fashioned way, so the phone does not make you more vulnerable on average. So, how much does it cost to surveil a person and listen in on their conversations for a year the old fashioned way? 1k, 10k, 100k? If we assume 10k, then the level of security needed to protect against NSO type threats and to adequately protect against surveillance is $500M.
So, how far away is Apple from that? Well, Zerodium pays $1.5M per iMessage zero click [1]. If we assume they burned 10 of them, infecting a mere 5k per with a trivially wormable complete compromise, that would amount to ~$15M at market price. Adding in the rest of the work, it would maybe cost $20M altogether worst case. So, if you agree with this analysis (if you do not, feel free to plug in your own estimates), then Apple has achieved ~4% of the necessary level and would need to improve processes by 2,500% to achieve adequate security against this type of attack. I think that should make it clear why things are so bad. “Best in class” security needs to improve by over 10x to become adequate. It should be no wonder these systems are so defenseless.
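Restating that back-of-envelope under the same assumptions (plug in your own numbers if you disagree with mine):

    50,000 phones x $10k/year old-fashioned surveillance  ~ $500M  (what an "adequate" attack should cost)
    10 iMessage zero-clicks x $1.5M Zerodium price        ~ $15M
      + remaining tooling and operations                  ~ $20M   (what the attack plausibly did cost)
    $20M / $500M = ~4% of adequate;  $500M / $20M = 25x improvement needed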
I'm beginning to worry that every time Rust is mentioned as a solution for every memory-unsafe operation we're moving towards an irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.
Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.
That reminds me somehow of an old expression: If you like apples, you might pay a dollar for one, and if you really like apples you might pay $10 for one, but there's one price you'll never pay, no matter how much you like them, and that's two apples.
They also have something else most people don't have: time. Nation-states and actors at that level of sophistication can devote years to their goals. This is reflected in the acronym APT, or Advanced Persistent Threat. It's not just that once they have hacked you they'll stick around until they are detected or have everything they need, it's also that they'll keep trying, playing the long game, waiting for their target to get tired, make a mistake, or fail to keep up with advancing sophistication.
In your example, you spend $1M on your home network, but do you keep spending the money, month after month, year after year, to prevent bitrot? Equifax failed to update Struts to address a known vulnerability, not just because of cost but also time. It's cost around $2 billion so far, and the final cost might never really be known.
Then you are an actual target for state level actors.
Or can they only monitor SMS/iMessages with this entry point?
We should do things that have the side effect of making exploits more expensive, by making them more intrinsically scarce. The scarcer novel exploits are, the safer we all are. But we should be careful about doing things that simply make them cost more. My working theory is that the more important driver at NSA isn't the mission as stated; like most big organizations, the real driver is probably just "increasing NSA's budget".
That's a bit naive. Governments want surveillance technology, and will pay for it. The tools will exist, and like backdoors and keys in escrow, they will leak, or be leaked.
The reason why all those other industries are regulated as much as they are is because governments don't need those types of weapons the way they need information. It's messy and somewhat distasteful to overthrow an enemy in war, but undermining a government, through surveillance, disinformation, propaganda, until it collapses and is replaced by a more compliant government is the bread-and-butter of world affairs.
"only by a nation-state"
This ignores the possibility that the company selling the solution could itself easily defeat the solution.
Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".
Further, anyone could potentially hire them to assist. What is to stop this if secrecy is preserved?
We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.
Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.
The comment on that Ars page is more realistic than the article.
Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.
We can’t achieve perfect security (there’s no such thing). What we can achieve is raising the bar for attackers. Simple things like using memory-safe languages for handling untrusted inputs, least-privilege design, defense in depth, etc.
That means our society, our governments, our economic systems are security holes. Everyone saying the Bad Thing would happen did so by looking, not at technology, but at how our world is organized and run. The Bad Thing happened because all those actors behaved exactly as they are designed to behave.
What's with the hyping of Rust as the Holy Grail, the solution to everything short of P=NP and the halting problem?
From a "I would like it as simple and secure as possible," ASCII does tick quite a few boxes.
Most security bugs/holes have been related to buffer [over|under]flows. Statistically speaking, it makes sense to use a language that eliminates those bugs by the mere virtue of the program compiling. Do you disagree with that?
Most people I know, even those in mid-size businesses, tool for and hunt for nation-state TAs as well. It's just something you have to do. The line between ecrime and nation state is sooooo thin, you might as well. Especially when you're talking about NK, where you have nation-state-level ecrime.
We do have some of those already.
https://www.faa.gov/space/streamlined_licensing_process/medi...
It can however make it extremely difficult to exploit and it can make such use cases very esoteric (and easier to implement correctly).
What I can say is that parsing untrusted data in C is very risky. I can't say it is more risky than phishing for you, or more risky than anything else. I lack the context to do so.
That said, a really easy solution might be to just not do that. Just like... don't parse untrusted input in C. If that's hard for you, so be it, again I lack context. But that's my general advice - don't do it.
I also said "way, way less" not "not at all". I still think about memory safety in our Rust programs, I just don't allocate time to address it (today) specifically.
Suppose you have a secret that is RSA-encrypted: to brute-force it, we might be looking at three hundred trillion years, according to Wikipedia, with the kind of computers we have now. Obviously that secrecy would have lost its value by then, and the resources required to crack the secret would be worth more than the secret itself. Even with quantum computing, we are still looking at 20+ years, which is still enough for most secrets: you have plenty of time to change it, or it will have lost its value. So we say that's secure enough.
For example, in garbage collected languages the programmer does not need to think about memory management all the time, and therefore they can think more about security issues. Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand. And this can be problematic even if Rust solves every security bug in the class of (for instance) buffer overflows.
If you want secure, better use a suitable GC'ed language. If you want fast and reasonably secure, then you could use Rust.
Yeah. If you can catch people in your jurisdiction (without the problems of spoofing and false flags), then people are just going to attack you from outside your jurisdiction. You'd have to firewall your jurisdiction against outside attacks. (You might even be able to do that, by controlling every cable into the country. But then there's satellites...)
"Don't use Rust because it is GC'd" is a take that I think basically nobody working on memory safety (either as a platform concern or as a general software engineering concern) would agree with.
I don't disagree with the premise of your post, which is that time spent on X takes away from time spent on security. I'll just say that I have not had the experience, as a professional rust engineer for a few years now, that Rust slows me down at all compared to GC'd languages. Not even a little.
In fact, I regret not choosing Rust for more of our product, because the productivity benefits are massive. Our rust code is radically more stable, better instrumented, better tested, easier to work with, etc.
https://dropbox.tech/infrastructure/extending-magic-pocket-i...
There are likely many other examples of, say, Java not having memory safety issues. Java makes very similar guarantees to Rust, so we can extrapolate, using common sense, that the findings roughly translate.
Common sense is a really powerful tool for these sorts of conversations. "Proof" and "evidence" are complex things, and yet the world goes on with assumptions that turn out to hold quite well.
C is a programming language that was born at AT&T's Bell Laboratories in the USA in 1972. It was written by Dennis Ritchie.
But these tricks have the same root: What if we used all this research academics have been writing about for decades, improvements to the State of the Art, ideas which exist in toy languages nobody uses -- but we actually industrialise them so we can use the resulting language for Firefox and Linux not just get a paper into a prestigious journal or conference?
If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.
If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.
Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.
Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But, is this all just stupid hype for nothing? Er no. In 25 years everybody will understand what an "Internet streaming radio station" would be, they just aren't using RealAudio, the actual technology they use might be MPEG audio layer III aka MP3 (which exists in 1995 but is little known) or it might be something else, they do not care.
Literally one of the best customers of NSO tools is Saudi Arabia (SA), where money literally bursts out of the ground in the form of crude oil. The market cap of Saudi Aramco is 3x that of Apple's. Good luck making it "uneconomical" for SA to exploit iPhones.
I'll even posit that there is literally no reasonable amount where the government of SA cannot afford an exploitation tool. The governments that purchase these tools aren't doing it for shits and giggles. They're doing it because they believe that their targets represent threats to their continued existence.
Think of it this way, if it costs you a trillion dollars to preserve your access to six trillion dollars worth of wealth, would you spend that? I would, in a heartbeat.
I don't think so. State-level actors also have limited resources (and small states have very limited resources), and every time they deploy their tools, they risk that they get discovered, analyzed, and added to antivirus heuristics, and with that rendered almost worthless. Or they risk the attention of the intelligence agencies of your state. If that happens, heads might roll, so the heads want to avoid it.
So if there is a state-level group looking for easy targets for industrial espionage - and they find tough security, where it looks like people care - I would say chances are that they go look for easier targets (of which there are plenty).
Unless of course there is a secret they absolutely want to have. Then yes, they will likely get in after a while, if the state backing it is big enough.
But most hacking is done on easy targets, so "good enough" security means not being an easy target, which also means not getting hacked in most cases. That is the whole point of "good enough".
1. Rust also has other safety features that may be relevant to your interests. It is Data Race Free. If your existing safe-but-slow language offers concurrency (and it might not) it almost certainly just tells you that all bets are off if you have a Data Race, which means complicated concurrent programs exhibit mysterious hard-to-debug issues -- and that puts you off choosing concurrency unless it's a need-to-have for a project. But with Data Race Freedom this doesn't happen. Your concurrent Rust programs just have normal bugs that don't hurt your brain when you think about them, so you feel free to pick "concurrency" as a feature any time it helps.
2. The big surface area of iMessage is partly driven by Parsing Untrusted File Formats. You could decide to rewrite everything in Rust, or, more plausibly, Swift. But this is the exact problem WUFFS is intended to solve.
WUFFS is narrowly targeted at explaining safely how to parse Untrusted File Formats. It makes Rust look positively care free. You say this byte from the format is an 8-bit unsigned integer? OK. And you want to add it to this other byte that's an 8-bit unsigned integer? You need to sit down and patiently explain to WUFFS whether you understand the result should be a 16-bit unsigned integer, or whether you mean for this to wrap around modulo 256, or if you actually are promising that the sum is never greater than 255.
WUFFS isn't in the same "market" as Rust, its "Hello, world." program doesn't even print Hello, World. Because it can't. Why would parsing an Untrusted File Format ever do that? It shouldn't, so WUFFS can't. That's the philosophy iMessage or similar apps need for this problem. NSO up against WUFFS instead of whatever an intern cooked up in C last week to parse the latest "must have" format would be a very different story.
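I can't write the WUFFS itself from memory, but the discipline it enforces maps roughly onto what you can opt into explicitly in Rust: each of those three intentions about adding two untrusted bytes is a distinct, named operation (a rough analogy, not WUFFS syntax).

    // Rough Rust analogue of the choice WUFFS forces you to make when adding
    // two bytes from an untrusted file: widen, wrap modulo 256, or claim the
    // sum always fits. (WUFFS proves the claim statically; here the third
    // option returns None at runtime instead of silently wrapping.)

    fn add_widened(a: u8, b: u8) -> u16 {
        a as u16 + b as u16 // "the result is a 16-bit unsigned integer"
    }

    fn add_wrapping(a: u8, b: u8) -> u8 {
        a.wrapping_add(b) // "I mean this to wrap around modulo 256"
    }

    fn add_must_fit(a: u8, b: u8) -> Option<u8> {
        a.checked_add(b) // "the sum is never greater than 255" -- or you get None
    }

    fn main() {
        assert_eq!(add_widened(200, 100), 300);
        assert_eq!(add_wrapping(200, 100), 44); // 300 mod 256
        assert_eq!(add_must_fit(200, 100), None);
        assert_eq!(add_must_fit(1, 2), Some(3));
    }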
Doing this well would be hard, but even an imperfect implementation would have some value.
I doubt they made a deal that didn't directly serve either Israeli or US foreign policy and security interests.
I don't know about NSO, but another player in mobile tracking (Verint), though very much more LEO-oriented (SS7 tracking), had about a million failsafes that ensured their software could not be used to track or intercept US or Israeli numbers.
It's 26 years after Java was released. Java has largely been the main competitor to C++. I don't see C++ going away nor do I see C going away. And it's almost always a mistake to lump C and C++ developers together. There is rarely an intersection between the two.
I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.
I don’t mean this in a very critical spirit, though.
Communication is really hard - especially in a large setting where not everyone reads you in the same context, and not everyone means well.
On balance, your post was valuable to me!
Edit: apparently elaboration is in order. In mechanical engineering one deals with smooth functions. A small error results in a small propensity for failure. Software meanwhile is discrete, so a small error can result in a disproportionately large failure. Indeed getting a thousandth of a percent of a program wrong could cause total failure. No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong. In software the margin of error is literally undefined behavior.
I'm glad the post was of value to you. The talk is really good and I think more people should read it.
Here's the first Microsoft one: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...
And Chrome: https://www.zdnet.com/article/chrome-70-of-all-security-bugs...
The only perfectly secure computer is one that is off. Security is always about probabilities and trade offs. As you approach perfection cost approaches infinity. It’s similar to adding “nines” to your uptime.
A good security policy balances cost with security and also has plans in place for what to do if security is compromised.
A Java program can't write over the return address on the stack.
On the other hand, you could choose to think about communications in an analogous way to your code, both being subject to attack by bad actors trying to subvert your good intentions.
So, the argument could be made, that removing attack surface from communication is analogous to hardening your code.
I also come from a coding background (albeit a long time ago), and with the help of some well-meaning bosses I eventually came to realize that my messages could gain more influence by reducing unnecessary attack surface. Doesn't mean I always get it right, even now - but I am aware and generally try hard to do just that.
In the limit, a trillion dollar exploit that will be worthless once discovered will only be used with the utmost possible care, on a very tiny number of people. That's way better than something that you can play around with and target thousands.
https://www.theguardian.com/news/2021/jul/19/fifty-people-cl...
I'm not even sure that's true. I do agree with you that the argument that you need to hire other people is more convincing, but I'd wager that no single human on the planet can actually write a vuln-free parser of any complexity in C on their first attempt - even if handed the best tools that the model checking community has to offer.
Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.
Not mine. I have no plans to purchase a security key from Google. I have no threat model.
Nothing in the comment you replied to mentioned "trust" but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it. Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.
1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...
2.
https://support.google.com/googlenest/thread/14123369/what-i...
https://www.inc.com/jason-aten/google-is-absolutely-listenin...
https://www.consumerwatchdog.org/blog/people-dont-trust-goog...
https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...
https://www.theguardian.com/technology/2020/jan/03/google-ex...
https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...
But not too long ago, before SaaS, social media, etc, displaced phpBB, WordPress, and other open source platforms, things like SQL injection reigned supreme even in the reported data. Back then CVEs more closely represented the state of deployed, forward-facing software. But now the bulk of this software is proprietary, bespoke, and opaque--literally and to vulnerability data collection and analysis.
How many of the large state-sponsored penetrations (i.e. the ones we're most likely to hear about) used buffer overflows? Some, like Stuxnet, but they're considered exceptionally complex; and even in Stuxnet buffer overflows were just one of several different classes of exploits chained together.
Attackers are usually pursuing sensitive, confidential data. Access to most data is protected by often poorly written logic in otherwise memory-safe languages.
If you follow the guidelines in http://canonical.org/~kragen/cryptsetup to encrypt the disk on a new laptop, it will take you an hour (US$100), plus ten practice reboots over the next day (US$100), plus 5 seconds every time you boot forever after (say, another US$100), for a total of about US$300. A brute-force attack by an attacker who has killed you or stolen your laptop while it was off is still possible. My estimate in that page is that it will cost US$1.9 trillion. That's the nature of modern cryptography. (The estimate is probably a bit out of date: it might cost less than US$1 trillion now, due to improved hardware.)
Other forms of software security are considerably more absolute. Regardless of what you see in the movies, if your RAM is functioning properly and if there isn't a cryptographic route, there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T. It's like trying to find a number that when multiplied by 0 gives you 5. The money you spend on attacking the problem is simply irrelevant.
Usually, though, the situation is considerably more absolute in the other direction: there are always abundant holes in the protections, and it's just a matter of finding one of them.
Now, of course there are other ways someone might be able to decrypt your laptop disk, other than stealing it and applying brute force. They might trick you into typing the passphrase in a public place where they can see the surveillance camera. They might use a security hole in your browser to gain RCE on your laptop and then a local privilege escalation hole to gain root and read the LUKS encryption key from RAM. They might trick you into typing the passphrase on the wrong computer at a conference by handing you the wrong laptop. They might pay you to do a job where you ssh several times a day into a server that only allows password authentication, assigning you a correct horse battery staple passphrase you can't change, until one day you slip up and you type your LUKS passphrase instead. They might steal your laptop while it's on, freeze the RAM with freeze spray, and pop the frozen RAM out of your motherboard and into their own before the bits of your key schedule decay. They might break into your house and implant a hardware keylogger in your keyboard. They might do a Zoom call with you and get you to boot up the laptop so they can listen to the sound of you typing the passphrase on the keyboard. (The correct horse battery staple passphrases I favor are especially vulnerable to that.) They might remotely turn on the microphone in your cellphone, if they have a way into your cellphone, and do the same. They might use phased-array passive radar across the street to measure the movements of your fingers from the variations in the reflection of Wi-Fi signals. They might go home with you from a bar, slip you a little scopolamine, and suggest that you show them something on your (turned-off) laptop while they secretly film your typing.
The key thing about these attacks is that they are all cheap. Well, the last one might cost a few thousand dollars of equipment and tens of thousands of dollars in rent. None of them requires a lot of money. They just require knowledge, planning, and follow-through.
And the same thing is true about defenses against this kind of thing. Don't run a browser on your secure laptop. Don't keep it in your bedroom. Keep your Bitcoin in a Trezor, not your laptop (and obviously not Coinbase), so that when your laptop does get popped you don't lose it all.
You could argue that, with dollars, you can hire people who have knowledge, do planning, and follow through. But that's difficult. It's much easier to spend a million (or a billion, or a trillion) dollars hiring people who don't. In fact, large amounts of money is better at attracting con men, like antivirus vendors, than it is at attracting people like the seL4 team.
Here in Argentina we had a megalomaniacal dictator in the 01940s and 01950s who was determined to develop a domestic nuclear power industry, above all to gain access to atomic bombs. Werner Heisenberg was invited to visit in 01947; hundreds of German physicists were spirited out of the ruined, occupied postwar Germany. National laboratories were built, laboratory-scale nuclear fusion was announced to have been successful, promises to only seek peaceful energy were published, plans for a nationwide network of fusion energy plants were announced, hundreds of millions of dollars were spent (in today's money), presidential gold medals were awarded...
...and finally in 01952 it turned out to be a fraud, or at best the kind of wishful-thinking-fueled bad labwork we routinely see from the free-energy crowd: https://en.wikipedia.org/wiki/Huemul_Project
Meanwhile, a different megalomaniacal dictator who'd made somewhat better choices about which physicists to trust detonated his first H-bomb in 01953.
One of the reasons for the decline of the British computer industry was Tony Hoare, at one of the big companies (Elliott Brothers, later part of ICL), implemented Fortran by compiling it to Algol, and compiled the Algol with bounds checks. This would have been around 01965, according to his Turing Award lecture. They failed to win customers away from the IBM 7090 (according to https://www.infoq.com/presentations/Null-References-The-Bill...) because the customers' Fortran programs were all full of buffer overflows ("subscript errors", in Hoare's terminology) and so the pesky Algol runtime system was causing them to abort!
(I realize that racing threads can cause logic-based security issues. I've never seen a traditional memory exploit from racing goroutines, though.)
Perhaps not with building properties, but very small errors can cause catastrophic failure.
One of the most famous ones would be the Hyatt Regency collapse, where a contractor accidentally doubled the load on a walkway because he used two shorter beams attached to the top and bottom of a slab, rather than a longer beam that passed through it.
https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...
In electrical engineering, it's very common to have ICs that function as a microcontroller at 5.5V, and an egg cooker at 5.6V.
Microsoft lost hundreds of millions of dollars repairing the original Xbox 360 because the solder on the GPU cracked under thermal stress.
It's definitely not to the same extreme as software, but tiny errors do have catastrophic consequences in physical systems too.
I have another neat trick to avoid races. Just write single threaded programs. Whenever you think you need another thread, you either don't need it, or you need another program.
Can you though? Where/how are you deploying your Rust executables that isn't relying deeply on OS code written in "wild" "memory unsafe languages"?
I mean, I _guess_ it'd be possible to write everything from the NIC firmware all the way through your network drivers and OS to ensure no untrusted input gets parsed before it hits your Rust code, but I doubt anyone except possibly niche academic projects or NSA/MOSSAD devs have ever done that...
Don't users / social engineering make up the actual majority of real-world vulnerabilities, and pose the most prevalent extant threat in the entire software ecosystem?
Plot twist: extended ASCII?
No, really. You just have to have what just happened happen a couple more times and they are finished. If they can't protect their data they have no business; their reputation is destroyed and there's no point in hiring them if a week later the list of the people you are spying on leaks. Turn the game around: info security is asymmetric by definition, and it's a lot easier to attack than to defend. As a defender you need to plug all possible holes, but if you become the attacker you just need to find one.
Apparently in Golang, you can achieve memory unsafety through data races: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m... (though I'm not sure if a workaround has been added to prevent memory unsafety).
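For contrast, a minimal Rust sketch of the same shape of program: the unsynchronized version (two threads mutating the same Vec through plain references) is rejected at compile time, so the only variant that builds is the one that can't race.

    // Two threads appending to shared state. Remove the Mutex and hand both
    // threads a bare &mut Vec<u64> and this simply doesn't compile, so the
    // kind of race described in the linked Go post can't be written in safe Rust.
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let shared = Arc::new(Mutex::new(Vec::new()));

        let handles: Vec<_> = (0..2u64)
            .map(|t| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || {
                    for i in 0..1000 {
                        shared.lock().unwrap().push(t * 1000 + i);
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(shared.lock().unwrap().len(), 2000);
    }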
Ten years is about the time since C++ 11. I may be wrong, but I do not regret my estimate.
There are major differences in designing bridges and in crafting code. So many, in fact it is difficult to even know where to start. But with that proviso, I think the concept of safety versus the concept of security is one that so many people conflate. We design bridges to be safe against the elements. Sure, there are 1000 year storms but we know what we're designing for and it is fundamentally an economic activity. We design these things to fail at some regularity because to do otherwise would require an over-investment of resources.
Security isn't like safety because the attack scales up with the value of compromising the target. For example, when someone starts a new social network and hashes passwords the strength of their algorithm may be just fine, but once they have millions of users it may become worthwhile for attackers to invest in rainbow tables or other means to thwart their salted hash.
Security is an arms race. That's why we're having so much trouble securing these systems. A flood doesn't care how strong your bridge is, or where it is most vulnerable.
Getting rid of images might be doable, but still difficult. Taking features away from people is politically difficult.
If you have a race which definitely only touches some simple value like an int and nothing more complicated then Go may be able to promise your problem isn't more widespread - that value is ruined, you can't trust that it makes any sense (now, in the future, or previously), but everything else remains on the up-and-up. However, the moment something complicated is touched by a race, you lose, your program has no defined meaning whatsoever.
It really depends on the target. If you’re attacking a website, then sure, you’re more likely to find vulnerability classes like XSS that can exist in memory-safe code. When you’re talking about client-side exploits like the ones used by NSO Group, though, almost all of them use memory corruption vulnerabilities of some sort. (That doesn’t only include buffer overflows; use-after-free vulnerabilities seem to be the most common ones these days.)
The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.
Everybody defaults to a small number of security/identity providers because running the system is so stupidly painful. Hand a YubiKey to your CEO and their secretary. Make all access to corporate information require a YubiKey. They won't last a week.
We don't need better crypto. Crypto is good enough. What we need is better integration of crypto.
I've had an argument here about SMS for 2FA... Someone said that SMS for 2FA is broken because some companies misuse it for 1FA (e.g. for password reset)... but in essence, a simple SMS verification solves 99.9% of the issues with e.g. password leaks and password reuse.
No security solution is perfect, but using a solution that works 99% of the time is still better than no security at all (or just one factor).
IT safety = construction safety. What kind of cracks/bumps does your bridge/building have, can it handle increase in car volume over time, lots of new appliances put extra load on the foundation etc. IT safety is very similar in that way.
IT security = physical infrastructure security. Is your construction safe from active malicious attacks/vandalism? Generally we give up on vandalism from a physical security perspective in cities - spray paint tagging is pretty much everywhere. Similarly, crime is generally a problem that's not solvable & we try to manage. There's also large scale terrorist attacks that can & do happen from time to time.
There are of course many nuanced differences because no analogy is perfect, but I think the main tangible difference is that one is in the physical space while the other is in the virtual space. Virtual space doesn't operate the same way because the limits are different. Attackers can easily maintain anonymity, attackers can replicate an attack easily without additional effort/cost on their part, attackers can purchase "blueprints" for an attack that are basically the same thing as the attack itself, attacks can be carried out at a distance, & there are many strong financial motives for carrying out the attack. The financial motive is particularly important because it funds the ever-growing arms race between offense & defense. In the physical space this kind of race is only visible in nation states whereas in the virtual space both nation states & private actors participate in this race.
Similarly, that's why IT development is a bit different from construction. Changing a blueprint in virtual space is nearly identical to changing the actual "building" itself & the cost is several orders of magnitude lower than it would be in physical space. Larger software projects are cheaper because we can build reusable components that have tests that ensure certain behaviors of the code & then we rerun them in various environments to make sure our assumptions still hold. We can also more easily simulate behavior in the real world before we actually ship to production. In the physical space you have to do that testing upfront to qualify a part. Then if you need a new part, you're sharing less of the design whereas in virtual space you can share largely the same design (or even the exact same design) across very different environments. & there's no simulation - you build & patch, but you generally don't change your foundation once you've built half the building.
But attackers have significantly less control over that layer. This is quite on topic with regards to security nihilism - my parser code being memory safe means that the code that's directly interfacing with attacker input is memory safe. Is the allocator under the hood memory safe? Nope, same with various other components - like my TCP stack. But again, attackers have a lot less control over that part of the stack, so while unfortunate, it's not my main concern.
I do hope to, in the future, leverage a much much more security optimized stack. I'd dive into details on how I intend to do that, but I think it's out of scope for this conversation.
Beyond that, I've already addressed phishing at our company, it just didn't seem worth pointing out.
Well, I'm the CEO lol so we have an advantage there.
> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security.
Totally, this is a huge issue to me. I strongly believe that we need to start getting TPMs and hardware tokens into everyone's hands, for free - public schools should be required to give it to students when they tell them to turn in homework via some website, government organizations/ anyone who's FEDRAMP should have it mandated, etc. It's far too expensive today, totally agreed.
edit: Wait, per month? No no.
> We don't need better crypto.
FWIW the kicker with yubikeys isn't really anything with regards to cryptography, it's the fact that you can't extract the seed and that the FIDO2 protocols are highly resistant to phishing.
But what does this have to do with the FIDO authenticator?
At first I thought you said $100 per user, and I figured, wow, you are buying them all two Yubikeys, that's very generous. And then I realised you wrote "per month".
None of this costs anything "per month per user". You're buying some third party service, they charge whatever they like, this is the same as the argument when people said we can't have HTTPS Everywhere because my SSL certificate cost $100. No, you paid $100 for it, but it costs almost nothing.
I built WebAuthn enrollment and authentication for a vanity site to learn how it works. No problem, no $100 per month per user fees, just phishing proof authentication in one step, nice.
The integration doesn't get any better than this. I guess having watched a video today of people literally wrapping up stacks of cash to Fedex their money to scammers I shouldn't underestimate how dumb people can be but really even if you struggle with TOTP do not worry, WebAuthn is easier than that as a user.
I have to admire the practicality of the approach they've been taking.
Basically they have a lot of runtime checks enabled in debug mode, where you do the majority of your testing, that are then disabled in the release binary.
Additionally the approach they've taken to allocators means that you can use special allocators for testing that can perform even more checks, including leak detection.
I think it's a great idea and a really interesting approach but it's definitely not as rigorous as what Rust provides.
If your program loses track of which file handles are open, which database transactions are committed, which network sockets are connected, GC does not help you at all for those resources. When you are low on heap, the system automatically looks for some garbage to get rid of, but when you are low on network sockets, the best it could do is hope that cleaning up garbage disconnects some of them for you.
Rust's lifetime tracking doesn't care why we are tracking the lifetime of each object. Maybe it just uses heap memory, but maybe it's a database transaction or a network socket. Either way though, at lifetime expiry it gets dropped, and that's where the resource gets cleaned up.
There are objects where that isn't good enough, but the vast majority of cases, and far more than under a GC, are solved by Rust's Drop trait.
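A small sketch of that with a made-up transaction type: the cleanup lives in Drop, so it runs whether the function commits, returns early with an error, or panics.

    // Hypothetical transaction guard. The resource is "an open transaction",
    // not heap memory; Drop is what guarantees a rollback if we leave scope
    // without committing, no matter how we leave.

    struct Transaction {
        committed: bool,
    }

    impl Transaction {
        fn begin() -> Self {
            println!("BEGIN");
            Transaction { committed: false }
        }

        fn commit(mut self) {
            self.committed = true;
            println!("COMMIT");
            // self is consumed here; Drop still runs, but sees committed == true.
        }
    }

    impl Drop for Transaction {
        fn drop(&mut self) {
            if !self.committed {
                println!("ROLLBACK"); // a real guard would talk to the database here
            }
        }
    }

    fn transfer(fail: bool) -> Result<(), &'static str> {
        let tx = Transaction::begin();
        if fail {
            return Err("insufficient funds"); // early return: Drop rolls back
        }
        tx.commit();
        Ok(())
    }

    fn main() {
        let _ = transfer(false); // prints BEGIN, COMMIT
        let _ = transfer(true);  // prints BEGIN, ROLLBACK
    }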
It reminds me a little of some of the free-wheeling nuclear physicists in the Manhattan Project - probably some of the smartest people on the planet - being hubristically lax with safety: https://en.wikipedia.org/wiki/Demon_core#Second_incident
>[...] The experimenter needed to maintain a slight separation between the reflector halves in order to stay below criticality. The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion.
>Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".
>On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.
The user doesn't reason correctly that the bank sent them this legitimate SMS 2FA message because a scammer is now logging into their account. They assume it's because this is the real bank site they've reached via the phishing email, and therefore that their concern that it seemed maybe fake was unfounded.
1. Blaming the language instead of the programmer will not lead to improved program quality.
2. C will always be available as a user's language. Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages.
3. Those in the future who are practiced in C will have a significant advantage in being able to leverage an enormous body of legacy code, a ton of which is written in C. Programmers in the future who are schooled only in memory-safe languages may not be able to approach C as a learning resource, and may in fact be taught to fear it.
There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors. It is amazing how easily that work is ignored in these debates. Find me a buffer overflow or use-after-free in one of djb's programs.
I disagree. Blaming the language is critically important. Tony Hoare (who holds a Turing Award and is a genius) puts it well.
> a programming language designer should be responsible for the mistakes that are made by the programmers using the language. [...]
> It's very easy to persuade the customers of your language that everything that goes wrong is their fault and not yours. I rejected that...
[0]
> Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages
Sure, users will always write C. But no, those programs won't always be smaller and faster.
> 3. Those in the future who are practiced in C will have a significant advantage in being able to leverage an enormous body of legacy code
Much to society's loss, I'm sure.
> and may in fact be taught to fear it
Cool. Same way we teach people to not roll their own crypto. This is a good thing. Please be more afraid.
> There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors.
No one cares. Not only is that neither provable nor likely, it's also irrelevant when I'm typing on a computer with a C kernel and numerous C libraries, in a C++ browser, or texting someone via a C++ app that has to parse arbitrary text, emojis, videos, etc.
> Find me a buffer overflow or use-after-free in one of djb's programs.
No, that's a stupid waste of my time. Thankfully, others seem more willing to do so[1] - I hate to even entertain such an arbitrary, fallacious benchmark, but it's funny so I'll do it just this once.
[0] http://blog.mattcallanan.net/2010/09/tony-hoare-billion-doll...
[1] http://www.guninski.com/where_do_you_want_billg_to_go_today_...
>"A more worrying set of attacks appear to use Apple’s iMessage to perform “0-click” exploitation of iOS devices. Using this vector, NSO simply “throws” a targeted exploit payload at some Apple ID such as your phone number, and then sits back and waits for your zombie phone to contact its infrastructure."
Does anyone have a link or any resources that describe how this “0-click” exploitation" works?
Could you say why Java is not susceptible to ROP?
Give users the option. If you're not 100% confident in your parsing (and nobody should be), allow users the option to restrict parsing to something that's limited, tested, fuzzed, and generally trusted. People who care can turn it on. People who want touch memojis on their watch can leave it off.
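Something like this hypothetical sketch of an opt-in "restricted parsing" switch (my own names and allow-list, not any real messaging API):

    // Sketch of a user-facing toggle that limits which attachment types
    // ever reach the full parsing code paths.
    #[derive(Clone, Copy)]
    enum AttachmentPolicy {
        Full,       // every format and codec, maximum convenience
        Restricted, // only a small, heavily fuzzed allow-list
    }

    fn is_allowed(policy: AttachmentPolicy, mime_type: &str) -> bool {
        match policy {
            AttachmentPolicy::Full => true,
            AttachmentPolicy::Restricted => {
                matches!(mime_type, "text/plain" | "image/png" | "image/jpeg")
            }
        }
    }

    fn main() {
        let full = AttachmentPolicy::Full;
        let restricted = AttachmentPolicy::Restricted; // what a cautious user opts into
        assert!(is_allowed(full, "application/pdf"));
        assert!(is_allowed(restricted, "image/png"));
        assert!(!is_allowed(restricted, "application/pdf")); // never parsed in this mode
    }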
The safety vs security distinction made above is fundamental. Developers are faced with solving an entire class of problems that is barely addressed by the rest of the engineering disciplines.
Until something changes about how the Internet works, the moment we send something across the Internet through a service, we no longer have control over the data. Pegasus is just the tip of an iceberg, and with technology getting ever closer to us (HomePods, smart electronic appliances, and so on), it is just a matter of time until all the big brands are hiring hackers to spy on their customers so that they can produce better products (make more money).
Going forward, privacy is not supposed to be a personal preference, like how platforms nowadays make us click through different settings to opt out; it is supposed to be something we all collectively have out of the box, and we need to work together towards that goal.
How do you imagine this would work?
The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.
I think you're imagining a lot of moving parts to the "solution" that don't exist.
I think some of the vulnerabilities have been found in image file format or PDF parsing libraries. These are huge codebases that you can't just rewrite in another language.
At the same time, Apple is investing huge amounts of resources into making their (and everyone elses) code more secure. Xcode/clang includes a static analyzer that catches a lot of errors in unsafe languages, and they include a lot of "sanitizers" that try to catch problems like data races etc.
And finally, they introduced a new, much safer programming language that prevents a lot of common errors, and as far as I can tell they are taking a lot of inspiration from Rust.
So it's not like Apple isn't trying to improve things.
Put another way: Anything you could do in the malloc/free model that Zig uses right now is something you could do in C++, or C for that matter. Maybe there's some super-hardened malloc design yet to be found that achieves memory safety in practice for C++. But we've been looking for decades and haven't found such a thing--except for one family of techniques broadly known as garbage collection (which, IMO, should be on the table for systems programming; Chromium did it as part of the Oilpan project and it works well there).
There is always a temptation to think "mitigations will eliminate bugs this time around"! But, frankly, at this point I feel that pushing mitigations as a viable alternative to memory safety for new code is dangerous (as opposed to pushing mitigations for existing code, which is very valuable work). We've been developing mitigations for 40 years and they have not eliminated the vulnerabilities. There's little reason to think that if we just try harder we will succeed.
[1]: https://chromium.googlesource.com/chromium/src/+/HEAD/base/a...
The only correct answer is formal reasoning, as successfully executed by seL4.
And how do I enroll all my employees into GitHub/GitLab?
And how do I recover when a YubiKey gets lost?
And how do I ...
Sure, I can do YubiKeys for myself with some amount of pain and a reasonable amount of money.
Once I start rolling secure access out to everybody in the company, suddenly it sucks. And someone spends all their time doing internal customer support for all the edge cases that nobody ever thinks about. This is fine if I have 10,000 employees and a huge IT staff--this is not so fine if I've got a couple dozen employees and no real IT staff.
That's what people like okta and auth0 (now bought by okta) charge so bloody much for. And why everybody basically defaults to Microsoft as an Identity Provider. etc.
Side note: Yes, I do hand YubiKeys out as trios--main use, backup use (you lost or destroyed your main one), and emergency use (oops--something is really wrong and the other two aren't working). And a non-trivial number of services won't allow you to enroll multiple Yubikeys on the same account.
You know what you could do to make NSO's life harder, other than develop more security? Fight them. Have your politicians attack Israel for allowing the group to operate. Use sanctions, take a proportional response, condemn their actions at the UN, stop protecting them. Block Israel from the Apple Store. It's a more direct route, and more likely to succeed than making your goal "perfect security". (Would there be huge political challenges? Of course; but those are more approachable than "perfect security")
Nitpick: for only about US$1M (give or take an order of magnitude or two depending on location), the process (assuming network access) can hire an assassin to kill you, pull up a shell on your computer, and give the process whatever privileges it wants.
Remotely, anonymously, at virtually no risk to themselves.
It appears that if one uses it, one becomes evangelized to it, spreads the word ("Praise Rust!"), and so forth.
Anything so evangelized is met with strong skepticism here.
- safety is "the system cannot harm the environment"
- security is the inverse: "the environment cannot harm the system"
To me, your distinction has to do with the particular attacker model - both sides are security (under these definitions).
But there's the problem: Testing can't and won't cover all inputs that a malicious attacker will try [1]. Now you've tested all inputs you can think of with runtime checks enabled, you release your software without runtime checks, and you can be sure that some hacker will find a way to exploit a memory bug in your code.
[1] Except for very thorough fuzzing. Maybe. If you're lucky. But probably not.
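To make the debug-versus-release point concrete in Rust terms (my own illustration, not from the thread): debug_assert! only runs when debug assertions are enabled, which is the default for debug builds and off in a default release build, while slice indexing stays bounds-checked even in release.

    fn pick(values: &[u32], i: usize) -> u32 {
        // Checked in debug builds only; compiled out of a default release build.
        debug_assert!(i < values.len(), "index out of range");

        // The indexing below is still bounds-checked in release builds:
        // an out-of-range `i` panics instead of reading out of bounds.
        values[i]
    }

    fn main() {
        let v = [1, 2, 3];
        println!("{}", pick(&v, 2));
    }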
> Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera)
I am not a mechanical engineer, but none of these examples look like smooth functions to me. I would expect that an unexpectedly high wind can cause your structure to move in a way that is not covered by your model at all, at which point it could just show a sudden non-linear response to the event.
Basically, my reasoning here is that Apple knows it is exposing users to hacks because of quality issues with this and other components. The fact that they try to fix them as fast as they find them is nice but not good enough: people still get hacked. When the damage is mostly PR, it's manageable (to a point). But when users sue and start claiming damages, it becomes a different matter: that gets costly and annoying real quick.
Recently we have seen several companies embrace Rust for OS development. Including Apple even. Both Apple and Google have also introduced languages like Swift and Go that likewise are less likely to have issues with buffer overflows. Switching languages won't solve all the problems but buffer overflows should largely be a thing of the past. So, we should encourage them to speed that process up.
This isn't unique to SMS, obviously, since the same attack scenario works against e.g. a TOTP from a phone app.
I would have liked to see a secure QNX as a mainstream OS. The microkernel is about 60 KB, and it offers a POSIX API. All drivers, file systems, networking, etc. are in user space. You pay about 10%-20% overhead for message passing. You get some of that back because you have good message passing available, instead of using HTTP for interprocess communication.
https://www.cvedetails.com/vulnerability-list/vendor_id-1902...
What if the bug is in std?
What if I use a bugged Vec::from_iter?
What if I use the bugged zip implementation from std?
You'll probably blame unsafe functions, but those unsafe functions were in std, written by the people who know Rust better than anyone.
Imagine what you and I could do writing unsafe.
Imagine trusting a 3rd party library...
I don't follow this carefully, but even I have heard of at least one Rust project that failed an audit miserably - not because of memory safety, but because the programmer had made a bunch of rookie mistakes that senior programmers might have avoided.
So in other words, Rust's hype is going to lead to a lot of rewrites and a lot of new software being written in Rust. And much of that software will have simple programming errors that you can make in any language. So we're going to need a whole new wave of audits.
- Safety is a PvE game[0] - your system gets "attacked" by non-sentient factors, like weather, animals, or people having an accident. The strength of an attack can be estimated as a distribution, and that estimate remains fixed (or at least changes predictably) over time. Floods don't get monotonically stronger over the years[1], animals don't grow razor-sharp titanium teeth, accidents don't become more peculiar over time.
- Security is a PvP game - your system is being attacked by other sentient beings, capable of both carefully planning and making decisions on the fly. The strength of the attack is unbounded, and roughly proportional to how much the attacker could gain from breaching your system. The set of attackers, the revenue[2] from an attack, the cost of performing it - all change over time, and you don't control it.
These two types of threats call for a completely different approach.
Most physical engineering systems are predominantly concerned with safety - with PvE scenarios. Most software systems connected to the Internet are primarily concerned with security - PvP. A PvE scenario in software engineering is ensuring your intern can't accidentally delete the production database, or that you don't get state-changing API requests indexed by web crawlers, or that an operator clicking the mouse wrong won't irradiate their patient.
--
[0] - PvE = "Player vs Environment"; PvP = "Player vs Player".
[1] - Climate change notwithstanding; see: estimate changing predictably.
[2] - Broadly understood. It may not be about the money, but it can be still easily approximated in dollars.
Not disagreeing, just mentioning.
Naturally, she was wearing gloves.
Seeing me, she grabbed the dustpan, threw away her sweepings, put the broom away, and was prepared to now serve me...
Still wearing the same gloves. Apparently magic gloves, for she was confused when I asked her to change them. She'd touched the broom, the dustpan, the floor, stuff in the dustpan, and the garbage. All within 20 seconds of me seeing her.
Proper procedure, understanding processes, are far more effective than a misused tool.
Is rust mildly better than some languages? Maybe.
But it is not a balm for all issues, and as you say, replacing very well maintained codebases might result in suboptimal outcomes.
I think you have a false perception of the budgetary constraints mid-level state actors are dealing with. Most security agencies have set budgets and a large number of objectives to achieve, so they'll prioritize cost-effective solutions/cheap problems (whereby the cost is both financial and political but finances act as hard constraint). Germany actually didn't buy Pegasus largely because it was too expensive.
Without Pegasus, Morocco's security apparatus probably wouldn't have the resources otherwise to target such a wide variety of people, ranging from Macron to their own king.
This is useless security nihilism. Xen is much more secure than anything else in terms of hole history. And Qubes relies on hardware virtualization, not software. The most famous escape from it was discovered by the Qubes founder ("Blue Pill").
The size of Linux in dom0 does not matter, because it has no network, does not run any apps and is only used to manage VMs. There is just no way for an attacker to exploit a bug there.
>formal reasoning
I hope this is the future, but unfortunately it's not the present yet.
That's true, but this is one of the cases where obtaining the last 5-10% of clarity might require 90% of the total effort.
Now whether one actually already has plucked all the low-hanging fruit in their own communication and if it's already good -- that's a separate discussion.
In software, you can spec the behavior of your program. And then it is possible to code to that exact spec. It is also possible, with encryption and stuff, to write specs that are safe even when malicious parties have control over certain parts.
This is not to say that writing such specs is easy, nor that coding to an exact spec is easy. Heck, I would even doubt that it is possible to do either thing consistently. My point is, the challenge is a lot harder. But the tools available are a lot stronger.
It's not a lost cause just because the challenge is so much harder.
Rust strives for safety; safety is its number one priority. Regarding the unsafe in std, please read the source code just to see how careful they are with the implementation. They only use unsafe for performance, and even unsafe Rust doesn't give you that much extra freedom, to be honest.
The third-party thing you are referring to sounds childish. Third-party libraries are not Rust's fault. If you don't trust them, don't use them. It is as simple as that.
So I think it's fair to tell people that a Rust program that doesn't use unsafe will not have memory safety bugs. Exceptions to this statement do occur, but they are rare.
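To illustrate the "unsafe doesn't give you that much extra freedom" point, here's a tiny sketch of my own (not std code): even inside an unsafe block, the borrow checker and the type system still apply; unsafe only unlocks a handful of extra operations.

    fn main() {
        let mut x = 1;

        // `unsafe` does not turn off the borrow checker or the type system.
        // It only unlocks a few extra operations (dereferencing raw pointers,
        // calling unsafe functions, and so on).
        unsafe {
            let a = &mut x;
            // let b = &mut x; // still a compile error: second mutable borrow
            *a += 1;
        }

        // Creating a raw pointer is safe; only dereferencing it needs `unsafe`.
        let p = &x as *const i32;
        let y = unsafe { *p };
        println!("{y}");
    }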
Well, they haven't done that since what, the early 20th century, what with "he may be a son of a bitch, but he's our son of a bitch"? If the US really cared that much about "democracy", perhaps they'd have by now sorted it out in, say, Mexico already: after all, that's their closest-connected country after Canada, right? Yet the last time I checked the news about the upcoming Mexican elections, there were quite a lot of dead or missing opposition candidates reported (just as there were during the previous elections, and the ones before that, and...). Interestingly enough, the US press doesn't report much on that: apparently things in Afghanistan, across half the globe, are much more important for homeland security.
Edit: thinking about it, without a man in the middle the phisher can log in, but cannot make transfers (assuming the SMS shows what transfer is being authorized). Still bad enough.
Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.
What about a company whose businesss is personal data mining, who goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), collect data from an "activity tracker" (FitBit), a "smartphone OS", a search engine, e-mail service, web analytics, etc., etc. Need I go on. I could fill up an entire page with all the different Google acquisitions and ways they are mining people's data.
Why are security keys any different. 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising, but I suppose Google is different.
These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.
"That's all I know."
It is a different story in languages meant to run untrusted code of course.
For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...
I don't know if disabling iMessages is enough though in this case.
The problem with your neat little model of the world is that it doesn't provide you with actionable predictions. Everything is a massive global conspiracy against you, nothing can be trusted, everybody is in on it, and so you can dismiss everything as just part of the charade, which feels good for a few moments, but still doesn't actually help you make any decisions at all.
> "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising
Right, I mean, if somebody really wanted to help provide working two factor authentication, they'd have to invent a device that offered phishing-proof authentication, didn't rely on sharing "secrets" that might be stolen by hackers, and all while not giving up any personal information and ensuring the user's identity can't be linked from one site to another. That device would look exactly like the FIDO Security Keys we're talking about... huh.
Actually no, if they weren't really part of a massive conspiracy against o8r3oFTZPE there would be one further tell: instead of only being available from Google, you could just buy these Security Keys from anybody and they'd work. Oh, right.
It turned out that they were a target for state level actors, as their software update distribution mechanism was used in a "watering hole attack" to infect many companies worldwide (major examples are Maersk and Merck) in the NotPetya data destruction (not ransomware, as it's often wrongly described) attack, causing billions of dollars in damage. Here's an article about them https://www.bleepingcomputer.com/news/security/m-e-doc-softw...
In essence, you may be an actual target for state level actors not because they care about you personally, but because you just supply some service to someone whom they're targeting.
Currently, some blackhat somewhere finds a vulnerability and sells it to NSO and then NSO sells it to various countries. If Israel forbids such deals, then the same "someone's" (without regard of where they're located - those deals are essentially unregulatable, you might anonymously trade knowledge/PoC for crypto) will sell the vulnerability to NSOv2 headquartered in Panama or Mozambique, and NSOv2 will sell it to the same customers.
If we can raise the cost from $100k per target to $10m per target, even SA will reduce the number and breadth of targets.
They do have limited funds, and they want to see an ROI. At a lower cost, perhaps they’ll just monitor every single journalist who has ever said a bad thing about the king. As that price increases, they’ll be more selective.
Like Matt said, that’s not ideal. But forcing a more highly targeted approach rather than the current fishing trawler is an incremental improvement.
There are plenty of security standards for many things that are not computers, yet cyber is a weird exception.
My understanding is that it's a combination of a liberal Silicon Valley state of mind and the NSA benefiting from low security standards and having a monopoly on tech companies.
In my view, the computer security industry is to blame here, because they benefit from chaos and a lack of government intervention.
"The same guy who is the subject of your fallacious benchmark. He writes in C" and that crypto code, which we used was more secure than rolling our own, but it still is riddled with security bugs because he writes in C (e.g. Heartbleed) - and despite the fact that those particular bugs have been fixed, that code still isn't trustworthy enough just because it's written in C, likely has more issues undetected and needs to be rewritten and replaced eventually with some not-C solution that can remove a whole class of bugs accidentally causing arbitrary code execution. Sure, you'll still have logic bugs - but a logic bug in iMessage image parsing has much lower consequences than a memory safety issue in that same image parsing.
So alas, even if on every previous transaction, Grannie was told, "Please read the SMS carefully and only fill out the code if the transfer is correctly described", she may not be suspicious when this time the bank (actually a phishing site) explains, "Due to a technical fault, the SMS may indicate that you are authorising a transfer. Please disregard that". Oops.
† e.g. some modern "refund" scams involve a step where the poor user believes they "slipped" and entered a larger number than they meant to, but actually the bad guys made the number bigger, the user is less suspicious of the rest of the transaction because they believe their agency set the wheels in motion.
Preventing buffer overruns requires language-level support.
Non-proliferation treaties are effective against nuclear weapons; they'd be effective against "cyber" weapons.
If you worry about this attack, you definitely should perform a reset after purchasing the device. This is labelled "reset" because it invalidates all your credentials: the credentials you enrolled depend on that secret, so if you pick a random new secret, obviously those credentials stop working. So it won't make sense to do this randomly while owning the device, but doing it once when you buy it can't hurt anything.
However, although I agree it would be possible for an adversary who makes keys to just remember all the factory set secrets inside them, I will note an important practical difference from RSA SecurID:
For SecurID those are actually shared secrets. It's morally equivalent to TOTP. To authenticate you, the other party needs to know the secret which is baked inside your SecurID. So RSA's rationale was that if they remember the secret they can help their customers (the corporation that ordered 5000 SecurID dongles, I still have some laying around) when they invariably manage to lose their copy of that secret.
Whereas for a FIDO token, that secret is not shared. Each key needs a secret, but nobody else has a legitimate purpose for knowing it. So whereas RSA were arguably just foolish for keeping these keys, they had a reason - if you found out that say, Yubico kept the secrets that's a red flag, they have no reason to do that except malevolence.
But personal attacks are not cool. Keep it civil, please.
While the first part of the sentence is mostly true (although the intention is to make safe Zig memory safe, and unsafe Rust isn't safe either), the second isn't. The goal isn't to use a safe language, but to use a language that best reduces certain problems. The claim that the best way to reduce memory safety problems is to completely eliminate all of them, regardless of type and regardless of cost, is neither established nor sensical. Zig completely eliminates overflows, and, in exchange for not completely eliminating use-after-free, makes detecting and correcting it, and other problems, easier.
For WebAuthn (and its predecessor U2F) that "non-trivial" amount seems to be precisely AWS. The specification tells them to allow multiple devices to be enrolled but they don't do it.
The other bizarre aspect of this logic is that not only is the author of the code irrelevant but apparently the task is, too. It would appear to apply to, e.g., even the most simple programs. The only factor that matters is "written in C". I use sed every day. It's written in C. Show me the bugs. I will probably be dead before someone finds them. Will I be using a "memory-safe" sed before then?
But this shouldn't be called “memory safety”.
Earlier you gave the example of Facebook harvesting people's phone numbers. That's not just data; that's information. But a Yubikey doesn't know your phone number, how much you weigh, where you live, what type of beer you drink... no information at all.
The genius thing about the FIDO Security Key design is figuring out how to make "Are you still you?" a question we can answer. Notice that it can't answer a question like "Who is this?". Your Yubikey has no idea that you're o8r3oFTZPE. But it does know it is still itself and it can prove that when prompted to do so.
And you might think, "Aha, but it can track me". Nope. It's a passive object unless activated, and it also doesn't have any coherent identity of its own, so sites can't even compare notes on who enrolled to discover that the same Yubikey was used. Your Yubikey can tell when it's being asked if it is still itself, but it needs a secret to do that and nobody else has the secret. All they can do is ask that narrow question, "Are you still you?".
Which of course is very narrowly the exact authentication problem we wanted to solve.
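For a toy illustration of that idea - my own sketch using the ed25519-dalek crate (with its rand_core feature) and rand's OsRng, not the actual FIDO/CTAP protocol, which additionally does per-site key derivation, origin binding, attestation, and more:

    // Toy sketch of challenge/response with a non-shared secret.
    use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
    use rand::rngs::OsRng;

    fn main() {
        // "Inside the security key": a private key generated on-device.
        // It never leaves the device, so there is nothing for a vendor to retain.
        let mut csprng = OsRng;
        let device_secret: SigningKey = SigningKey::generate(&mut csprng);

        // At enrollment the site stores only the public key.
        let site_record: VerifyingKey = device_secret.verifying_key();

        // Later, the site asks "are you still you?" with a fresh challenge.
        let challenge = b"site-chosen, single-use random challenge";

        // The device answers by signing the challenge...
        let answer: Signature = device_secret.sign(challenge);

        // ...and the site checks the signature against the stored public key.
        assert!(site_record.verify(challenge, &answer).is_ok());
    }

The point is simply that the verifier holds no secret at all - only a public key - so there is nothing useful for a vendor, or for anyone who steals the site's database, to keep.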
https://twitter.com/evacide/status/1416968243642724353?s=21
The same logic applies. You will not achieve perfect privacy online but there is plenty you can do to make tracking you so much harder.
A Java program, by construction, cannot write to memory regions not allocated on the stack or pointed to by a field of an object constructed with "new". Runtime checks prevent ordinary sorts of problems and a careful memory model prevents fun with concurrency errors. There are interesting attacks against the Java Security Manager - but this is independent of memory safety.
We are on a thread about "a case against security nihilism".
1. Not all vulnerabilities are memory safety vulnerabilities. The idea that adopting memory safe languages will prevent all vulns is not only a strawman, but empirically incorrect since we've had memory safe languages for many decades.
2. It is the case that a tremendously large number of vulns are caused by memory safety errors and that transitioning away from memory-unsafe languages will be a large win for industry safety. 'unsafe' is a limitation of Rust, but compared to the monstrous gaping maw of eldritch horror that is C and C++, it is small potatoes.
3. You are going to struggle to write real programs without ever using third party code.
If the solution to the "problem" is giving increasingly more personal information to a tech company, that's not a great solution, IMO. Arguably, from the user's perspective, it's creating a new problem.
Most users are not going to purchase YubiKeys. It's not a matter of whether I use one, what I am concerned about is what other users are being coaxed into doing.
There are many problems with "authentication methods" but the one I'm referring to is giving escalating amounts of personal information to tech companies, even if it's under the guise "for the purpose of authentication" or argued to be a fair exchange for "free services". Obviously tech companies love "authenticating" users as it signals "real" ad targets.
The "tech" industry is riddled with conflicts of interest. That is a problem they are not even attempting to solve. Perhaps regulation is going to solve it for them.
Billions and billions of dollars. Large organizations like Microsoft and Google have published numbers on the proportion of vulns in their software that are caused by memory errors. As you can imagine, a lot of effort is spent within these institutions to try to mitigate this risk (world class fuzzing, static analysis, and pentesting) yet vulns continue to persist.
Rust is not the solution. Memory-safe languages are. It is just that there aren't many such languages that can compete with C++ when it comes to speed (Rust and Swift are the big ones) so Rust gets mentioned a lot to preempt the "but I gotta go fast" concerns.
The Mozilla case study is not a real world study. It simply looks at the types of bugs that existed and says "I promise these wouldn't have existed if we had used Rust". Would Rust have introduced new bugs? Would there be an additional cost to using Rust? We don't know and probably never will. What we care about is preventing real world damage. Does Rust prevent real world damage? We have no idea.
but it's not just NSO, every reasonable country probably has people like them.
MSan has a nontrivial performance hit and is a problem to deploy on all code running a performance-critical service. Static analysis can find some issues, but any sound static analysis of a C++ program will quickly go haywire and report false positives out the wazoo. Whole-program static analysis (which you need to prevent false positives) is also a nightmare for C++ due to the single-translation-unit compilation model.
All of the big companies are spending a lot of time and money trying to make systems better with the existing legacy languages and this is necessary today because they have so much code and you can't just YOLO and run a converter tool to convert millions and millions of lines of code to Rust. But it is very clear that this does not just straight up prevent the issue completely like using a safe language.
What I'm saying is that truth is a matter of debate. We believe lots of things based on evidence much less rigorous than a formal proof in many cases - like most modern legal systems, which rely on various types of evidence, and then a jury that must form a consensus.
So saying "there is no evidence" is sort of missing the point. Safe Rust does not have memory safety issues, barring compiler bugs, therefor common sense as well as experience with other languages (Java, C#, etc), would show that that memory safety issues are likely to be far less common. Maybe that isn't the evidence that you're after, but I find that compelling.
To me, the question of "does rust improve upon memory safety relative to C/C++" is obvious to the point that it really doesn't require justification, but that's just me.
I could try to find more evidence, but I'm not sure what would convince you. There's people fuzzing rust code and finding far fewer relevant vulns - but you could find that that's not compelling, or whatever.
On any system, if you give an attacker physical access to the device, you are done. Just assume that. If your Yubikey lives in your wallet, or on your key chain, and you only activate it when you need it, it is highly unlikely that anyone is going to crack it.
As far as physical device access goes, my last employer maintained a 'garage' of laptops and phones for employees traveling to about a half dozen countries. If you were going there, you left your corporate laptop and phone in the US and took one of these 'travel' devices with you for your trip. Back home, those devices were never allowed to connect to the corporate network. When you handed them in, they were wiped and inspected, but IT assumed that they were still compromised.
Lastly, a Yubikey, as a second factor, is supposed to be part of a layered defense, basically forcing the attacker to hack both your password and your Yubikey.
It bugs me that people don't understand how important two factor auth is, and also how crazy weak SMS access codes are.
A language and a memory access model are no panacea. 10 years is like the day after tomorrow in many industries.
Sure it was, if you didn't want this problem you'd be fine with remaining anonymous and receiving only services that can be granted anonymously. I understand reading Hacker News doesn't require an account, and yet you've got one and are writing replies. So yes, you created the problem.
Now, Hacker News went with 1970s "password" authentication. Maybe you're good at memorising a separate long random password for each site, and so this doesn't really leak any information it's just data. Lots of users seem to provide the names of pets, favourite sports teams, cultural icons, it's a bit of a mish-mash but certainly information of a sort.
In contrast, even though you keep insisting otherwise, Security Keys don't give "escalating amounts of personal information to tech companies" but instead no information at all, just that useful answer to the question, "Are you still you?".
There are runtime checks around class structure that ensure that a field load cannot actually read some unexpected portion of memory.
There are runtime checks that ensure that you cannot read through a field on a deallocated object, even when using weakreference and therefore triggering a GC even while the program has access to that field.
There are runtime checks around array reads that ensure that you cannot access memory outside of the allocated bounds of the array.
I have no idea why "susceptible to something like ROP" is especially relevant here. ROP is not the same as "writing over the return address". ROP is a technique you use to get around non-executable data sections, and it happens after you abuse some memory safety error to write over the return address (or otherwise control a jump). It means "constructing an exploit via repeated jumps to already existing code rather than jumping into code written by the attacker".
But just for the record, Java does have security monitoring of the call stack that can ensure that you cannot return to a function that isn't on the call stack so even if you could change the return target the runtime can still detect this.
The brilliant thing about RAII-style resource management is that library authors can define what happens at the end of an object's lifetime, and the Rust compiler enforces the use of lifetimes.
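A small sketch of my own (a made-up Transaction type, not any real database API): the library decides that reaching the end of the lifetime without a commit means rollback, and because commit() takes the value by move, the compiler rejects any later use of it.

    // Hypothetical transaction guard: the library author decides what the end
    // of the object's lifetime means (here: roll back unless committed).
    struct Transaction {
        committed: bool,
    }

    impl Transaction {
        fn begin() -> Transaction {
            println!("BEGIN");
            Transaction { committed: false }
        }

        // Consumes the transaction, so it cannot be used again afterwards.
        fn commit(mut self) {
            self.committed = true;
            println!("COMMIT");
        }
    }

    impl Drop for Transaction {
        fn drop(&mut self) {
            if !self.committed {
                // Forgetting to commit is not a leak: the guard rolls back.
                println!("ROLLBACK");
            }
        }
    }

    fn main() {
        let tx = Transaction::begin();
        tx.commit();    // moves `tx`; the compiler rejects any later use of it
        // tx.commit(); // compile error: use of moved value `tx`

        let _abandoned = Transaction::begin();
        // dropped at end of scope -> prints ROLLBACK
    }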
Still, this kind of thing isn't always applicable. If the seL4 kernel in question is on orbit, or running on a computer at an unknown location, or in a submarine, or in a drone in flight, the assassin can't in practice sit down at the console. And if it's running on something like the Secure Enclave chip in an iPhone, or a permissive action link, physical access may be impractically difficult regardless of who you kill.
In essence, NSO's income is (price of exploits) * (number of exploit customers).
If the price of exploits goes up, that doesn't mean their income does. That depends on how the price affects the number of customers. Governments have lots of money to spend, but generally they still have some price sensitivity. Especially the more fringe governments.
I am not sure what the effect on NSO's income would be.
This is generally hard. Because you gotta know, at the time of being tortured, which fake secret will give believable results.
If Apple truly secured the OS to the point where even state-level actors could not get access, they would bring some unwanted attention, regulation, etc. from powerful governmental agencies.
Apple has also been interestingly silent/vague on a response to this story.
Qubes devs are welcome to adopt seL4's VMM virtualization solution.
In seL4's virtualization design, VMM handles VM exceptions, and yet has no more privileges (capabilities, enforced by seL4, which is thoroughly formally proven) than the VM itself, thus an escape from VM to VMM would yield no fruit.
I mean, to be clear, modern C++ can be effectively as safe as Rust is. It requires some discipline and code review, but I can construct a toolchain and libraries that will tell me about memory violations just as well as Rust will. Better even, in some ways.
I think people don't realize just how much modern C++ has changed.
It's hyperbole. If the argument was "few people can write C without bugs" that would be much easier to digest.
Regardless of intent, it seems very much in the spirit of trying to solve a complex problem by adding more complexity, a common theme I see in "tech".
There is nothing inherently wrong with the idea of "multi-factor authentication" (as I recall some customer-facing organisations were using physical tokens long before "Web 2.0") however in practice this concept is being (ab)used by web-based "tech" companies whose businesses rely on mining personal data. The fortuitous result for them being intake of more data/information relating to the lives of users, the obvious examples being email addresses and mobile phone numbers.
1. This is not an issue I came up with in a vacuum. It is shared by others. I once heard an "expert" interviewed on the subject of privacy describe exactly this issue.
It's true that you can't charge $2MM for a Firefox exploit right now. But that's because someone else is selling that exploit for an (orders of magnitude) lower price. So NSO can't just jack up exploit prices to soak the IC.
But if all exploit prices for a target are driven up, everywhere, my contention is that the IC will shrug and pay. That's because the value per dollar for exploits is extremely high compared to the other sources of intelligence the IC has, and will remain extremely high almost no matter how high you can realistically drive their prices. The fact is that for practically every government on the planet, the dollar figures we're talking about are not meaningful.
And yet here's a thread in which you did exactly that.
C programs are not inherently smaller and faster, but in practice this is usually the case. Can you guide me to some Rust programs that are smaller than their C counterparts? The thing that holds me back from experimenting more with Rust is the (apparently) enormous size of the development environment relative to a GCC toolchain.
The number of downloads from crates.io is questionably large, and some of the binaries I have produced were absolutely gigantic. Largest executables I have ever compiled. Crazy.
We do not "lose" if people keep writing in C, as long as it's the right people. The right programmer for the job. All programmers are not created equal, no matter what languages they use. Absent professional certifications and enforceable quality standards, perhaps the world of writing software for use by others needs an ethos something along the lines of "code within your means". Memory-safe languages are great, but it seems like they just enable people to become far too ambitious in what they think they can take on. This is no problem at all unless and until they start marketing their grand creation to undiscerning users who are none the wiser. (This is of course the general idea behind the "don't roll your own" meme. However, I do not think it should be limited to cryptography.)
To be fair, I wouldn't quite label it as "absurd", though it is hyperbole. With near-extreme levels of discipline you can write very solid C code - this involves having ~100% MCDC coverage, using sanitizers, static analysis tools, and likely outright banning a few functions. It's doable, especially if your project doesn't have to move fast or has extreme requirements (spaceships).
> Can you guide me to some Rust programs that are smaller than their C counterparts.
Rust has a big standard library compared to C++, so by default you end up with a lot of "extra" stuff that helps deal with things like Unicode, etc. If you drop std and dynamically link your libraries you can drop a lot of space and get down to C levels.
There are a number of examples like this: https://cliffle.com/blog/bare-metal-wasm/
> is the (apparently) enormous size of the development environment relative to a GCC toolchain.
I can't really relate to this. I have a 1TB SSD, 32GB of ram, and an 8 core CPU. My rust build tools are a single command to install and I don't know or care how much space they take up. If you do, I don't really know why, but sure that's a difference maybe.
> All programmers are not created equal no matter what languages they use
While this is true, it doesn't matter practically.
1. We can't restrict who writes C, so even if programmer skill was the kicker, it isn't enforceable.
2. There are lots of projects that invest absolutely incredible amounts of time and money, cutting edge research, into writing safe low level code. Billions are spent on this. And the problem persists. Very very few projects seem to be able to achieve "actually looks safe" C code.
> Memory-safe languages are great but it seems like they just enable people to become far too ambitious in what they think they can take on.
I don't really see how Rust is any different from Python, Java, or anything else in that regard.
No, they want weapons that can project and multiply threat. Nukes are just one way of doing that.
Restricting who can write C is another "extreme" idea in line with "no one can write secure C". I will not call it hyperbole, but I think it's absurd.
What we can do is be more cognizant of who is writing the software we use. (For example, I use software written in C by Robert Dewar, co-founder of AdaCore, called spitbol. A big part of why I use it is because of who wrote it, the code itself and its history.)
Not caring how much space something occupies is not something to which I can relate. I always care. I do not have unconstrained computers. Each has a finite amount of resources and I try to use them in a controlled and efficient manner. That means avoiding lots of large, amorphous software programmers use without question. For me, this works quite well.
Intentionally ignoring who writes the software I use does not make sense to me either. I think in a previous comment you mentioned Heartbleed. It seems that countless people using OpenSSL were relying on it heavily without ever bothering to investigate anything about its source. That to me was strange. We read comments from people who were "shocked" to find out who was managing the project. Total lack of curiosity. They never bothered to look. Not a great recipe for learning.
No, I am responding to the above assertion that I have insisted security keys give escalating amounts of personal information to "tech" companies.
This is incorrect. Most users do not have physical security tokens. But "tech" companies promote authentication without using physical tokens: 2FA using a mobile number.
What I am "insisting" is that "two-factor authentication" as promoted by tech campanies ("give us your mobile number because ...") has resulted in giving increasing amounts of personal information to tech companies. It has been misused; Facebook and Twitter were both caught using phone numbers for advertising purposes. There was recently a massive leak of something like 550 million Facebook accounts, many including telephone numbers. How many of those numbers were submitted to Facebook under the belief they were needed for "authentication" and "security". I am also suggesting that this "multi-factor authentication" could potentially increase to more than two factors. Thus, users would be giving increasing amounts of personal information to "tech" companies "for the purposes of authentication". That creates additional risk and, as we have seen, the information has in fact been misused. This is not an idea I came up with; others have stated it publicly.
Concurrent Pascal or Singularity also fit the bill, with actual operating systems being written in it.
Essentially exploits are sold massively under their "true value" and NSO doesn't get to capture this value because there are so many others giving them away for free.
It seems to me that a lot of exploits / PoCs are developed by security researchers doing it for the sport and making a name for themselves. This is probably part of the reason why exploits are so cheap. So then the question is: how much less productive will these researchers be if building exploits gets harder?
My feeling is that they will put in roughly the same amount of time. And hence their exploit production will probably drop proportionally to how much harder exploits are to find.
As for tooling, things like Valgrind provide an excellent mechanism for ensuring that the program was memory safe, even in its "unsafe" areas or when calling into external libraries (something that Rust can't provide without similar tools anyway).
My broader point is that safety is more than just a compiler saying "ok you did it", though that certainly helps. I would trust well written safety focused C++ over Rust. On the other hand, I would trust randomly written Rust over C++. Rust is good for raising the lower end of the bar, but not really the top of it unless paired with a culture and ecosystem of safety focus around the language.
> This ignores the possibility that the company selling the solution could itself easily defeat the solution.
I'm sure you really are worried about how "Facebook are bad", and you feel like you need to insert that into many conversations about other things, but "Facebook are bad" is irrelevant here.
You made a bogus claim about Security Keys. These bogus claims help to validate people's feeling that they're helpless and, eh, they might as well put up with "Facebook are bad" because evidently there isn't anything they can really do about it.
So your problem is, which is more important, to take every opportunity to surface the message you care about "Facebook are bad" in contexts where it wasn't actually relevant, or to accept that hey, actually you're wrong about a lot of things, and some of those things actually reduce the threat from Facebook ? I can't help you make that choice.
He goes on to discuss the expansion of "trust boundaries".
Big Tech: Use our computers, please!
There isn't even specific language support necessary, it's on the library level.
Since we're in a thread about security this is a crucial difference. I'm sure Amy, Bob, Charlie and Deb were able to use the new version of Puppy Simulator successfully for hours without any sign of unsafety in the sanitizer. Good to go. Unfortunately, the program was unsafe and Evil Emma had no problem finding a way to attack it. Amy, Bob, Charlie and Deb had no reason to try naming a Puppy in Puppy Simulator with 256 NUL characters, so, they didn't, but Emma did and now she's got Administrator rights on your server. Oops.
In contrast safe Rust is actually safe. Not just "It was safe in my tests" but it's just safe.
Even though it might seem like this doesn't buy you anything when of course fundamental stuff must use unsafe somewhere, the safe/unsafe boundary does end up buying you something by clearly delineating responsibility.
For example, sometimes in the Rust community you will see developers saying they had to use unsafe because, alas, the stupid compiler won't optimise the safe version of their code properly. For example, it has a stupid bounds check they don't need, so they used "unsafe" to avoid that. But surprisingly often, another programmer looks at their "clever" use of unsafe, and it turns out they did need that bounds check they got rid of: their code is unsafe for some parameters.
For example just like the C++ standard vector, Rust's Vec is a wasteful solution for a dozen integers or whatever, it does a heap allocation, it has all this logic for growing and shrinking - I don't need that for a dozen integers. There are at least two Rust "small vector" replacements. One of them makes liberal use of "unsafe" arguing that it is needed to go a little faster. The other is entirely safe. Guess which one has had numerous safety bugs.... right.
Over in the C++ world, if you do this sort of thing, the developer comes back saying duh, of course my function will cause mayhem if you give it unreasonable parameters, that was your fault - and maybe they update the documentation or maybe they don't bother. But in Rust we've got this nice clean line in the sand, that function is unsafe, if you can't do better label it "unsafe" so that it can't be called from safe code.
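A sketch of that discipline (my own example): either keep the bounds check, or, if you really want to skip it, mark the function unsafe and document the precondition so safe code can't call it by accident.

    /// Safe version: keeps the bounds check.
    fn third(values: &[u8]) -> Option<u8> {
        values.get(2).copied()
    }

    /// "Clever" version that skips the check. Its correctness now depends on a
    /// caller-supplied invariant, so it must be marked `unsafe` and documented.
    ///
    /// SAFETY precondition: `values.len() >= 3`.
    unsafe fn third_unchecked(values: &[u8]) -> u8 {
        unsafe { *values.get_unchecked(2) }
    }

    fn main() {
        let v = [10u8, 20, 30, 40];
        assert_eq!(third(&v), Some(30));
        // The caller explicitly takes responsibility for the invariant here.
        let x = unsafe { third_unchecked(&v) };
        assert_eq!(x, 30);
    }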
This discipline doesn't exist in C++. The spirit is willing but the flesh (well, the language syntax in this case) is too weak. Everything is always potentially unsafe and you are never more than one mistake from disaster.
And this is where the argument breaks down for me. The C++ vector class can be just as safe if people are disciplined. And as you even described, people in rust can write "unsafe" and do whatever they want anyway to introduce bugs.
The language doesn't really seem to matter at the end of the day from what you are telling me (and that's my main argument).
With the right template libraries (including many parts of the modern C++ STL) you can get the same warnings you can from Rust. One just makes you chant "unsafe" to get around it. But a code review should tell off any developer doing something unsafe in either language. C++ with only "safe" templates is just as "actually safe" as Rust is (except with a better recovery solution than panics!).