curl https://some-url | sh
I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What’s stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?
I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written. Don’t we have something better than “sh” for this? Something with less power to do harm?
I think a safer approach (sketched below) is to:
- Download the script first, review its contents, and then execute it.
- Ensure the URL uses HTTPS, to reduce the risk of man-in-the-middle attacks.
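A minimal sketch of that workflow (the URL is a placeholder):

```
# fetch to a file instead of piping straight into a shell
curl -fsSL -o install.sh https://example.com/install.sh
less install.sh     # read what you actually downloaded
sh ./install.sh     # run it only once you're satisfied
```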
If you’ve downloaded and audited the script, there’s no reason to pipe it from curl to sh, just run it. No https necessary.
The https is to cover the fact that you might have missed something.
I guess I download and skim out of principle, but they might have hidden something in there.
Wat. All https does is encrypt the connection while downloading. If you’ve already downloaded the file to audit it, then it’s on your drive; no need to use curl to download it again and then pipe it to sh. Just click the thing.
Yeah, https was for downloading it in the first place. My bad, I didn’t get my thoughts out in the right order.
That makes sense. I probably should have gotten it from context.
Install scripts are bad in general. Ideally, use officially packaged software.
But then they’d have to pay some guy $15 to package it, and that’s like, spending money.
Distros do the packaging. Devs cannot be trusted.
Loads of distros have user packaging, like Arch and NixOS… also, many distros accept donations to package your software either way, so my point stands even then.
Meanwhile, Nix install instructions start off with a curl.
?
What part is confusing you?
The instructions for installing on non-NixOS systems: https://nixos.org/download/
That’s how you end up without software.
That’s how you end up with a secure, well-tested system. Having the distro do software reviews adds another level of validation. Devs are bad about shipping software with vulnerable dependencies and stuff like that.
And then you install wordpress, lol.
Key word being “reduce”. Https doesn’t protect from loads of attacks. Best to verify the sig.
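For example, if the vendor publishes a detached signature (the file names here are hypothetical):

```
curl -fsSLO https://example.com/install.sh
curl -fsSLO https://example.com/install.sh.asc
gpg --verify install.sh.asc install.sh   # fails loudly if the script was tampered with
```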
If it’s not signed, open a bug report.
If Steam could accidentally delete someone’s home directory via a single error in a bash script, I doubt I would catch that one myself.
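For reference, that Steam bug was a single unguarded variable expansion; reconstructed from memory (not the exact code), it looked roughly like this:

```
# if STEAMROOT ends up empty or unset, this expands to: rm -rf "/"*
rm -rf "$STEAMROOT/"*

# two common guards that fail instead of wiping the filesystem:
set -u                      # treat any unset variable as an error
rm -rf "${STEAMROOT:?}/"*   # abort if STEAMROOT is unset or empty
```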
Ah yes, for all of the bash experts who understand what they are reading.
It’s not much different from downloading and compiling source code, in terms of risk. A typo in the code could easily wipe home or something like that.
Obviously the package manager repo for your distro is the best option because there’s another layer of checking (in theory), but very often things aren’t in the repos.
The solution really is just backups and snapshots, there are a million ways to lose files or corrupt them.
You should use officially packaged software. That’s the safest option.
Yeah, when you can; often stuff isn’t in it, though.
Debian has 60,425 packages. I would recommend that you create a Debian container with distrobox and install whatever you need. If you need newer versions you can use Debian Sid.
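Something like this, assuming distrobox and a container runtime (podman or docker) are already installed:

```
distrobox create --name deb --image debian:stable   # or debian:sid for newer packages
distrobox enter deb
# then, inside the container:
sudo apt update && sudo apt install <whatever>
```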
Yeah, it’s often missing CLI tools from small devs who can’t do the packaging.
It isn’t more dangerous than running a binary downloaded from them by any other means. It isn’t more dangerous than the downloaded installer programs common on Windows.
TBH, macOS has had the more secure idea for a while: by default it sandboxes applications that are downloaded directly, without any sort of installer. Linux is starting to head in that direction now with things like Flatpak.
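For a taste of what that direction looks like, Flatpak lets you inspect and tighten each app’s sandbox; the app ID here is just an example:

```
flatpak install flathub org.gimp.GIMP
flatpak info --show-permissions org.gimp.GIMP              # inspect the sandbox holes
flatpak override --user --nofilesystem=home org.gimp.GIMP  # revoke home access
```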
You shouldn’t install software from someone you don’t trust anyway, because even if the installation process is safe, the software itself can do whatever it has permission to.
“So if you trust their software, why not their install script?” you might ask. Well, it is detectable on server side, if you download the script or pipe it into a shell. So even if the vendor is trustworthy, there could be a malicious middleman that gives you the original and harmless script when you download it, and serves you a malicious one when you pipe it into your shell.
And I think this is not obvious and very scary.
it is detectable […] server side, if you download the script [vs] pipe it into a shell
I presume you mean downloading the script in a browser vs. retrieving it with curl, where presumably you are piping it to a shell. Because yeah, the user agent is going to reveal which tool downloaded it, of course. You can use curl to simply retrieve the file without executing it, though.
Or are you suggesting that curl does something different in its request to the server, depending on whether it is saving the file to disk vs. streaming it into a pipe?
It is actually passive detection based on the timing of the chunk requests. By default, curl will only request new chunks when the buffer is freed by the shell executing the given commands. This can be used to detect that someone is not merely downloading the script but simultaneously executing it. Here’s a writeup about it:
You can also find some proof-of-concept implementations online to try it out yourself.
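A toy local illustration of the mechanism (not the actual PoC; the numbers are arbitrary): a shell executes commands as they stream in, so a slow command stops it from reading the pipe, and over a network the server can observe those stalled reads.

```
# feed a shell a script bigger than the pipe buffer: once sh hits the sleep,
# it stops reading, the pipe fills up, and the writer stalls for ~30s.
# over HTTP, the server sees exactly that stall in the TCP stream.
{ echo 'sleep 30'; yes 'echo filler' | head -n 100000; } | sh >/dev/null
```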
Wow, thanks for this. That is very helpful context. And thanks for your original post too, or I’d never have asked.
Oh, you’re welcome, kind person :)
it is detectable on server side, if you download the script or pipe it into a shell
Irrelevant. This is just an excuse people use to try and win the argument after it is pointed out to them that there’s actually no security issue with curl | bash.
It’s waaaay easier to hide malicious code in a binary than it is in a Bash script.
You can still see the “hidden” shell script that is served for Bash: just pipe it through tee and then into Bash.
Can anyone even find one single instance of that trick ever actually being used in the wild (not as a demo)?
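That is, something like (placeholder URL):

```
# keep a byte-for-byte copy of whatever actually reached Bash
curl -fsSL https://example.com/install.sh | tee /tmp/what-actually-ran.sh | bash
```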
I never tried to win any argument. Hell, I was not even aware that I was participating in one. I just wanted to share the info that even if the vendor is absolutely trustworthy, and even if you validated the script by downloading and looking at it, there’s still another hole that’s not obvious to see.
Yes, it’s unlikely; but then, I never said it wasn’t. There are also arguments you can run curl with to tell it to do the download first and then push it through the pipe afterwards, though I don’t know them by heart right now.
It won’t cost you anything to set those parameters when you insist on using curl | bash, just on the off chance that someone’s trying to do what I mentioned.
But I’m also someone who usually validates their downloads with a checksum, so maybe I’m just weird. Who knows.
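For what it’s worth, a sketch of the download-first idea (URL and hash are placeholders):

```
# fetch completely, verify, then run; nothing executes while bytes are still arriving
curl -fsSL -o install.sh https://example.com/install.sh
echo "0123abc...deadbeef  install.sh" | sha256sum -c - && sh ./install.sh

# or, without a temp file: command substitution forces the whole download to
# finish before sh sees a single byte
sh -c "$(curl -fsSL https://example.com/install.sh)"
```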
Those just don’t get installed. I refuse to install stuff that way. It’s too reminiscent of installing stuff on Windows. “Pssst, hey bud, want to run this totally safe executable on your PC? It won’t do anything bad. Pinky promise.” Ain’t happening.
The only exception I make is for Nix on non-NixOS machines, because that bootstraps everything, and I’ve read that script a few times.
How is that safe?
It’s not, it’s a sign that the authors don’t take security seriously.
If you use this
I never do.
Am I the only one who cringes when I have to update my system?
How do I know the maintainers of the repo haven’t gone rogue and are now distributing malware?
DAE get anxious when running code on computer?
I think for the sake of security we should just use rocks, stones, and such to destroy all computers, as this would prevent malicious software from being executed.
How do I know the maintainers of the repo haven’t gone rogue and are now distributing malware?
Depends on the repo, but at least for Debian, there’s a path of trust between GPG keys I’ve signed and the Debian release GPG keys.
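As a concrete example of that chain, this is roughly how you’d check Debian install media against the signed checksum file (paths current as of writing; verify the key fingerprint out of band):

```
curl -fsSLO https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS
curl -fsSLO https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS.sign
gpg --verify SHA512SUMS.sign SHA512SUMS    # needs the Debian CD signing key in your keyring
sha512sum -c --ignore-missing SHA512SUMS   # checks whichever image you downloaded
```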
How do you know that the malware goblin hasn’t installed malware on your computer when you weren’t looking?
I think the only foolproof plan is using boulders, stones, and perhaps other blunt objects to deal with the issue of code executing altogether.
What are you trying to say?
If there’s code running on a machine, there’s a possibility it’s malicious or unsafe, the only solution is destruction of anything that can run code.
I realise you’re trolling, but actually, yes. This is why I use Debian stable where possible: if egregious malware shows up, it will probably be discovered by all the folks using rolling distros first.
For security reasons, I review every line of code before it’s executed on my machine.
Before I die, I hope to take my ’93 Dell OptiPlex out of its box and finally see what this whole internet thing is about.
Not good enough. You should really be inspecting your CPU with a microscope.
To answer the question, no - you’re not the only one. People have written and talked about this extensively.
Personally, I think there’s a lot more nuance to the answer. Also a lot has been written about this.
You mention “communities that are security conscious”. I’m not sure in which ways you feel this practice is less secure than the alternatives. I tend to be pretty security conscious, to the point of sometimes being annoying to my teammates. I still use this installation method a lot where it makes sense, without too much worry. I also skip it other times.
Without knowing a bit more about your specific worries and for what kinds of threat you feel this technique is bad, it’s difficult to respond specifically.
Feeling uneasy is fine, and if you’re uncomfortable with something, the answer is generally to either avoid it (by reading the script and executing the relevant commands yourself, or by skipping this software altogether, for instance), or to understand why you’re uncomfortable and rationally assess to what degree that feeling is based on reality versus imagination.
As usual, the real answer is - it depends.
Thank you for the nuanced answer!
You ask why I feel this is less secure: it seems like the lowest possible bar when it comes to controlling what gets installed on your system. The script may or may not give you a choice as to where things get installed. It could refuse to install, or silently overwrite stuff, if something already exists. If the install fails, it may or may not leave data behind, in directories I may or may not know about. It may or may not run a checksum on the downloaded data before installing. Because it’s a completely free-form script, there is no standard I can expect. For an application, I would read the documentation to learn more, but these scripts are normally not documented (other than “use this to install”). That uncertainty, to me, is insecure/unsafe.
Unpopular opinion: these are handy for quickly installing things in a new VM or container (usually throwaway), where you don’t have to think much unless the script breaks. People don’t install things on a host or production machine multiple times, so anything installed there is usually vetted, and most of the time it comes from trusted sources like distro repos.
For a normal threat model, it is not much different from downloading a compiled binary from somewhere other than well-trusted repos. The Windows software ecosystem is famously infamous for exactly the same thing, but it sticks around still.
Yeah, and Windows is famous for botnets, lol.
Yet most botnets are Linux based.
I’m not talking about the C&C. I’m talking about the members of the botnet.
Or are you hinting at Linux based IoT devices?
IoT devices, web servers, etc.
This is just the usual poor Linux security. Even giants like Docker do this.
Docker doesn’t do this anymore. Their install script got moved to “only do this for testing”.
Use a convenience script. Only recommended for testing and development environments.
Now, their install page recommends packages/repos first, and then a manual install of the binaries second.
Well yeah … the native package manager. It has the bonus of the installed files being tracked.
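e.g. (package names are whatever your distro uses):

```
dpkg -L docker-ce    # Debian/Ubuntu: every file the package installed
pacman -Ql docker    # Arch equivalent
rpm -ql docker-ce    # Fedora/RHEL equivalent
```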
And often official package maintainers are a lot more security conscious about how packages are built as well.
I agree.
On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be “never”.
The non-rolling distros can take a year to update a package, even if they decide to include it.
Honestly, it’s a mess, and I think we’re in that awkward state Linux was in when everyone seemed to collectively realize SysV init sucks and you saw dinit, runit, OpenRC, s6, systemd, upstart, and initng popping up (although many of those were started after systemd; the list is just for illustration). Most distributions settled on systemd, for better or worse. Now we’re seeing something similar: the profusion of package managers really is a Problem, and people are trying to address it with solutions like Snap, AppImage, and Flatpak.
As a software developer, I’d like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don’t know if they’re mutually exclusive objectives.
For my money, I’d go with pacman. It’s easy to write PKGBUILDs and to get packages into the AUR, but it requires users to intentionally use the AUR. I wish it had a better migration process (AUR packages being promoted to community, for instance). It’s fairly trivial for a distribution to “pin” releases so that users aren’t on a rolling upgrade.
Alpine’s is also nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, it clearly requires much more work from a community of volunteers, and it can occasionally be frustrating for everyone: for contributors like me who only interact with the process a couple of times a year, it’s easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.
I’m about to go on a journey of contributing to Void, which I expect to be similar to Alpine.
Red Hat and deb? All I can do is build packages for them and host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.
Oh, Nix. I tried, but the package definitions are a nightmare, and just getting enough of Nix onto your computer to test and submit builds takes GBs of disk space. I actively dislike working with Nix. GUIX is nearly as bad. I used to like Lisp (it’s certainly an interesting and educational tool), but I’ve really started to object to it more and more as I encounter it in projects like Nyxt and GUIX, where you’re forced to use it if you want to do any customization.
But this is the world of OSS: you labor in obscurity; or you self-promote your software, which I hate (if I wanted to do marketing, I’d be in marketing); or you hope enough users in enough distributions volunteer to maintain packages for their distros so that people can get to it. And you still have to address the issue of making it easy for people to use your software.
curl <URL> | sh
is, frankly, a really elegant, easy solution for software developers… if only it weren’t for the fact that the world is full of shitty, unethical people forcing us to distrust each other.
It’s all sub-optimal, and it needs a solution. I’m not convinced the various containerizations are the right direction; does “rg” really need to be run in a container? Maybe it makes sense for big suites with a lot of dependencies, like Gimp, but even so, what’s the solution for the vast majority of OSS software, which is just little CLI and TUI tools?
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD; and Snap, AppImage, and Flatpak don’t seem to be gaining broad traction. I’m starting to think something like a yay that installs into $HOME. Most systems are single-user anyway; something that leverages Arch’s huge package repositories, but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it’s Nix, so I’d rather not.
As an Arch user: yeah, PKGBUILDs are a very good solution, at least for Arch Linux specifically (or other distros with the same directory-tree conventions). I have packaged a dozen or so of my own projects as PKGBUILDs, and use 150 or so from the AUR. It gives users a very easy way to install things essentially by hand while still keeping them under the package manager’s control. And you can just put it in the AUR, so other users can either use it as-is, or first read through it, understand it, maybe adapt it, and then use it. It shows that packages don’t have to be solely the author’s responsibility, nor the distro maintainers’.
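For anyone who hasn’t seen one, a minimal PKGBUILD really is just a small shell file (“mytool” and its URL are made up):

```
pkgname=mytool
pkgver=1.0.0
pkgrel=1
pkgdesc="Example CLI tool"
arch=('x86_64')
url="https://example.com/mytool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')   # a real package pins a checksum here

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" PREFIX=/usr install
}
```

makepkg -si builds and installs it locally, and publishing to the AUR is little more than a git push.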
The non-rolling distros can take a year to update a package, even if they decide to include it.
There is a reason why they do this. Stable release distros, particularly Debian, refuse to update packages beyond fixing vulnerabilities as a way of ensuring the system changes minimally. This means, for example, that if software depends on a library, it will keep working for the lifecycle of a stable release. Sometimes the latest isn’t the greatest.
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD
You swapped PKGBUILD and APKBUILD 🙃
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory.
There is a reason why they do this.
Of course. It also prevents people from getting all the improvements that aren’t security fixes. It’s especially bad for software engineers who are developing applications that depend on a non-security bug fix or new feature. It’s fine if all you need is a box that runs the same version of some software, sitting forgotten in a closet that gets walled in some day. IMO, it’s a crappy system for anything else.
You swapped PKGBUILD and APKBUILD 🙃
I did! I’ve been trying to update packages in both recently. The similarities are utterly frustrating, as they’re almost identical; the biggest difference between Alpine and Arch is the packaging process. If they were the same format (and they’re honestly so close it’s absurd), it’d make packagers’ lives easier.
I may have mentioned I haven’t yet started Void, but I expect it to be similarly frustrating: so very, very similar.
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory
Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee that system updates break user installs, and the only way they couldn’t would be if system installs knew about user installs and also updated those, which would defeat the whole purpose.
So you end up back with containers, or AppImage, Snap, or Flatpak. Although, of all of these, AppImage and podman are the most sane, since Snap and Flatpak are designed to manage system-level software, which isn’t much of an issue.
It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V. I’d include C, but the temptation to dynamically link is so ingrained in C that I rarely see truly statically linked C projects.
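For the record, getting a fully static binary is usually a one-liner in those languages (tool and file names are examples):

```
CGO_ENABLED=0 go build -o mytool .                         # Go: static when cgo is off
cargo build --release --target x86_64-unknown-linux-musl   # Rust: link against musl
cc -static -o mytool mytool.c                              # C: possible, just rarely done
```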
It’s especially bad for software engineers who are developing applications that depend on a non-security bug fix or new feature
This is what they tell themselves: that they need that fix. So then developers get themselves unstable packages. But wait! If they update just one version further, compatibility with something will break, and that requires work to fix.
So what happens is they pin and/or vendor dependencies and don’t update them, even for security updates. I find this quite concerning. For example, take Rustdesk, a popular Rust-based remote desktop application. Here’s a quick audit of its libraries using cargo-audit:
[nix-shell:~/vscode/test/rustdesk]$ cargo-audit audit
    Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
      Loaded 742 security advisories (from /home/moonpie/.cargo/advisory-db)
    Updating crates.io index
warning: couldn't update crates.io index: registry: No such file or directory (os error 2)
    Scanning Cargo.lock for vulnerabilities (825 crate dependencies)

Crate:     idna
Version:   0.5.0
Title:     `idna` accepts Punycode labels that do not produce any non-ASCII when decoded
Date:      2024-12-09
ID:        RUSTSEC-2024-0421
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0421

Crate:     libgit2-sys
Version:   0.14.2+1.5.1
Title:     Memory corruption, denial of service, and arbitrary code execution in libgit2
Date:      2024-02-06
ID:        RUSTSEC-2024-0013
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0013
Severity:  8.6 (high)
Solution:  Upgrade to >=0.16.2

Crate:     openssl
Version:   0.10.68
Title:     ssl::select_next_proto use after free
Date:      2025-02-02
ID:        RUSTSEC-2025-0004
URL:       https://rustsec.org/advisories/RUSTSEC-2025-0004
Solution:  Upgrade to >=0.10.70

Crate:     protobuf
Version:   3.5.0
Title:     Crash due to uncontrolled recursion in protobuf crate
Date:      2024-12-12
ID:        RUSTSEC-2024-0437
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0437
Solution:  Upgrade to >=3.7.2

Crate:     ring
Version:   0.17.8
Title:     Some AES functions may panic when overflow checking is enabled.
Date:      2025-03-06
ID:        RUSTSEC-2025-0009
URL:       https://rustsec.org/advisories/RUSTSEC-2025-0009
Solution:  Upgrade to >=0.17.12

Crate:     time
Version:   0.1.45
Title:     Potential segfault in the time crate
Date:      2020-11-18
ID:        RUSTSEC-2020-0071
URL:       https://rustsec.org/advisories/RUSTSEC-2020-0071
Severity:  6.2 (medium)
Solution:  Upgrade to >=0.2.23

Crate:     atk
Version:   0.18.0
Warning:   unmaintained
Title:     gtk-rs GTK3 bindings - no longer maintained
Date:      2024-03-04
ID:        RUSTSEC-2024-0413
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0413

Crate:     atk-sys
Version:   0.18.0
Warning:   unmaintained
Title:     gtk-rs GTK3 bindings - no longer maintained
Date:      2024-03-04
ID:        RUSTSEC-2024-0416
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0416
I also checked rustscan and found similar issues.
I’ve pruned the dependency tree output and some other unmaintained-package warnings, but some of these CVEs are bad. Stuff like this is why I don’t trust developers to make packages: they get lazy and sloppy at the cost of security. Stable release distributions, on the other hand, inflict security upgrades on everybody, which is good.
Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee that system updates break user installs, and the only way they couldn’t […]
??? This is very incorrect. I don’t know where to start. If a package manager manages its own dependencies/libraries (like Nix portable installs) or is a static binary (e.g. soar), then system installs will not interfere with the “user” package manager at all. You could also use something like launchd (macOS) or systemd user services (Linux) to update these packages with user-level privileges, in the user’s home directory.
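A sketch of the systemd user-service idea (unit names and the update command are hypothetical):

```
# ~/.config/systemd/user/pkg-update.service
[Unit]
Description=Update user-installed packages

[Service]
Type=oneshot
ExecStart=%h/.local/bin/update-my-packages

# ~/.config/systemd/user/pkg-update.timer
[Unit]
Description=Run pkg-update daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Then systemctl --user enable --now pkg-update.timer, and everything runs with user privileges in the user’s home directory.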
Also, I don’t know where you got the idea that Flatpaks manage “system-level” software.
It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V.
I dislike these because they commonly also come with version pinning and vendored dependencies. But you should check out Soar and its repository. It also packages AppImages, and “flatimages”, which seem to be similar to Flatpaks but closer to AppImages in distribution.
What’s stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?
You’re not wrong, but there’s an element of trust in anything like this, and it’s all about your comfort level. How can you truly trust any code you didn’t write and compile yourself? Actually, how do you trust the compiler?
And let’s be honest: even if you trust my code implicitly (hey, I’m a BOFH, what could go wrong?), then that simply means that you’re trusting me not to do anything malicious to your system.
Even if your trust is well-placed in that regard, I don’t need to be malicious to wipe your system or introduce a configuration error that makes you vulnerable to others; it’s perfectly possible to do all that by just being incompetent. Or even by being a normally competent person who was just having a bad day while writing the script you’re running now. Oops.
This is the primary goal of distros these days.
Saved that, thank you.