Developers: I will never, ever do that, no one should ever do that, and you should be ashamed for guiding people to do it. I get that you want to make things easy for end users, but at least exercise some bare-minimum common sense.
The worst part is that bun is just a single binary, so the install script is bloody pointless.
Bonus mildly infuriating is the mere existence of the .sh TLD.
Edit b/c I’m not going to answer the same goddamned questions 100 times from people who blindly copy/paste the question from StackOverflow into their code/terminal:
WhY iS ThaT woRSe thAn jUst DoWnlOADing a BinAary???
- Downloading the compiled binary from the release page (if you don’t want to build yourself) has been a way to acquire software since shortly after the dawn of time. You already know what you’re getting yourself into.
- There are SHA256 checksums of each binary file available in each release on GitHub. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file (a quick sketch follows this list).
- Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust)
- The install script they’re telling you to pipe is not hosted on GitHub. A misconfigured / compromised server can allow a bad actor to tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s lots of things that can go wrong with that.
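A minimal sketch of that verification, assuming the project publishes a checksums file alongside its release assets (all file and URL names below are placeholders; use the real ones from the project’s release page):

```sh
# Download the release binary and the published checksum list
curl -LO https://github.com/example/tool/releases/download/v1.2.3/tool-linux-x64.zip
curl -LO https://github.com/example/tool/releases/download/v1.2.3/SHASUMS256.txt

# Compare a locally computed SHA256 against the published value
sha256sum --check --ignore-missing SHASUMS256.txt

# If the project also signs its checksums file, verify the signature
curl -LO https://github.com/example/tool/releases/download/v1.2.3/SHASUMS256.txt.asc
gpg --verify SHASUMS256.txt.asc SHASUMS256.txt
```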
The point is that it is bad practice to just pipe a script to be directly executed in your shell. Developers should not normalize that bad practice.
I’ll do it if it’s hosted on GitHub and I can look at the code first, but if it’s proprietary? Heck no.
I’ve seen many cases of this with Windows PowerShell and those Windows debloating scripts.
I’m with you, OP. I’ll never blindly do that.
Also, to add to the reasons that’s bad:
- you can put restrictions on a single executable: setuid, SELinux, AppArmor, etc.
- a simple compromise of a Web app altering a hosted text file can fuck you
- it sets the tone for users making them think executing arbitrary shell commands is safe
I recoil every time I see this. Most of the time I’ll inspect the shell script but often if they’re doing this, the scripts are convoluted as fuck to support a ton of different *nix systems. So it ends up burning a ton of time when I could’ve just downloaded and verified the executable and have been done with it already.
I wouldn’t call anyone who does this a developer. No offense, but it’s a horrible practice that usually comes from hacky projects.
What’s that? A connection problem? Ah, it’s already running the part that it did get… Oops, right on the boundary of `rm -rf /thing/that/got/cut/off`. I’m angry now. I expected the script maintainer to keep in mind that their script could be cut off at literally any point… (Now what is that `set -e` the maintainer keeps yapping about?)

Can you really expect maintainers to keep network errors in mind when writing a Bash script?? I’ll just download your script first, like I would your binary. Opening yourself up to more issues like this is just plain dumb.
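For what it’s worth, careful maintainers do have a standard mitigation: wrap everything in a function and call it only on the final line, so a truncated download defines nothing runnable. A minimal sketch in POSIX sh:

```sh
#!/bin/sh
# All logic lives inside main(); if the connection drops mid-transfer,
# the shell sees an unfinished function definition and executes nothing.
main() {
    set -eu                  # abort on errors and on unset variables
    echo "downloading release..."
    # ...the actual install steps would go here...
}

main "$@"                    # only reached if the entire script arrived
```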
I’m curious, OP: do you think it’s bad to install tools this way in an automated fashion, such as when building a composed Docker image?
Protect from accidental data damage: for example, the dev might have accidentally pushed an untested change where there’s a space in the path:

```sh
rm -rf / ~/.thatappconfig/locatedinhome/nothin.config
```

A single typo that wipes the whole drive instead of just the app config (yes, it happened; I clearly remember a commit on GitHub more than a decade ago, with lots of snarky comments, on a script with exactly such a typo).
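To make that failure mode concrete (the variable name here is hypothetical), along with the usual guard against it:

```sh
# If CONFIG_DIR ends up as "/ home/user" (note the stray space), word
# splitting turns this into `rm -rf /` plus a second, harmless-looking path:
rm -rf $CONFIG_DIR/nothin.config          # DANGEROUS: unquoted expansion

# Quoting, plus failing loudly when the variable is unset, prevents the wipe:
rm -rf "${CONFIG_DIR:?CONFIG_DIR is unset}/nothin.config"
```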
Also: malicious developers that will befriend the honest dev in order to sneak in an exploit.
Those scripts need to be universal, so there are hundreds of lines checking the Linux distro and which tools are installed, then asking the user to install the missing ones with a package manager. They require hours and hours of testing across multiple distros, and they aren’t easy to understand either… Isn’t it better to spend that time simply writing clear documentation on how to install it?
Like: “this app requires x, y, and z to be preinstalled. [Instructions to install said tools on various distros.] Then copy it into said subdirectory and create a config in ~/.ofcourseinhome/”
It’s also easier for the user to uninstall it, as they can follow the steps in reverse.
Yes, I understand all of that, but also, in the context of my Docker containers, I wouldn’t be losing any data that isn’t reproducible.
Very much yes
You want your Dockerfile to be as reproducible as possible. I would pull a specific commit from git and build from source. You can chain stages together in a single Dockerfile so that one stage builds the software and another deploys it.
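A minimal sketch of that pattern, using a multi-stage build pinned to a commit (the repository URL, commit hash, and toolchain image are all placeholders):

```dockerfile
# Stage 1: build from source at a pinned commit
FROM rust:1.77 AS builder
RUN git clone https://github.com/example/tool.git /src \
 && git -C /src checkout 0123456789abcdef0123456789abcdef01234567 \
 && cargo build --release --manifest-path /src/Cargo.toml

# Stage 2: ship only the resulting binary, not the build environment
FROM debian:bookworm-slim
COPY --from=builder /src/target/release/tool /usr/local/bin/tool
ENTRYPOINT ["tool"]
```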
I mean, you’re not OP. But your method requires all updates to be manual, while some of us specifically want updates to be as automated as possible.
You can use things like Dependabot or Renovate to update versions in a controlled manner, rather than automatically using the latest of everything.
On the other side, when it comes to Docker containers, you can use GitHub Actions or some other CI/CD system to automate the container build.
I don’t think it is that hard to automate a container build. Ideally you should be using the official OCI image or some sort of package repo that has been properly secured.
That’s becoming alarmingly common, and I’d like to see it go away entirely.
Random question: do you happen to be downloading all of your Kindle books? 😜
Installing Rust: `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh` (source)
Installing Homebrew: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"` (source)

I understand that you find it infuriating, but it’s not something completely uncommon, even in high-end projects :/
It should be uncommon
There is even a Windows (Powershell) example for Winutil:
Stable Branch (Recommended)
irm "https://christitus.com/win" | iex
Better than explaining how to make a .ps1 file trusted for execution (thankfully, one of the few executable file extensions that Windows doesn’t trust by default), but why not just use some basic .exe builder at this point?
Obligatory “they better make it a script that automatically creates a medium for silent Linux Mint installation, modifies the relevant BIOS settings and restarts” to prevent obvious snarky replies
Using a url that’s just some dude’s name makes this so much worse.
He’s reasonably trustworthy. I trust his utility more than Micro$oft but less than Linus Torvalds.
For Rust at least, those are packaged in Debian and other distros too. I think rustup is in Debian Trixie as well.
Don’t forget everyone’s favorite massgravel script
Don’t forget Pi-hole! It’s been the default install method since basically the beginning.
Yeah, when I read this, I was like: pretty sure Pi-hole started this as a popular option. I dig it though, so I guess OP and I are not on the same page. (I do usually look over the bash scripts before running them piped to bash, though.)
Thankfully, I’m using the docker version, which everyone should use.
> `--proto '=https' --tlsv1.2`

That’s how you know they care: no MITMing that stuff without hijacking the CA, at which point you have a whole other set of problems. And if you trust rustc not to delete your sources when they fail a typecheck, then you can trust their installer.
`-f` is important so you don’t execute half-downloaded scripts on failure, `-s` and `-S` are verbosity options, and `-L` follows redirects.

Common or not, it’s still fucking awful, and the people who promote this nonsense should be ashamed of themselves.
I assume your concern is with security, so then what’s the difference between running the install script from the internet and downloading a binary from the internet and running it?
To add to OP’s concerns: the server can detect whether you ran `curl <URL> | sh` rather than just downloading the file, and deliver a malicious payload only in the piped-to-sh case, where no one is viewing it.

You’re already installing a binary from them; the trust in both the authors and the delivery method is already there.
If you don’t trust them, then don’t install their binaries.
You aren’t just trusting the authors though. You’re trusting that no other step in the chain has been tampered with or compromised somehow.
See post edit. I’ve already answered that twice.
What’s a good package manager right now for stuff like this if I don’t want to use the distro package manager, though? I want up-to-date versions of these tools, ideally shipped by the devs themselves, with easy removal and updates. Is there anything like that right now? I think Homebrew is like that? But I wish it didn’t need to create an entire new user and worked on a per-user basis.

In an ideal world, I would want to use these tools in such a way that I can uninstall them, including any tool data (cache, config, etc.), and update them in a reliable manner. Most of these tools are also hellbent on creating a new “.<tool-name>” folder or file in the home folder, ignoring the XDG spec.
It says in the comment of the script: `npm install`

npm is JS-specific.
> if I don’t want to use the distro package manager
I’m stunned you don’t understand why this is a problem.
This was absolutely trivial stuff before the great Y2K layoffs, so if you can’t figure it out, ask someone who was releasing software professionally back then.
And please, if you learn something from this, try to help others.
I don’t want to use a distro package manager for certain software because nearly every distro except Arch requires adding third-party repositories, which can stop getting updates at any second.
Don’t worry, I understand the intricacies of these problems a lot more deeply than you probably realise. As a developer, it can suck when your “hotfix” cools down by the time a distro gets around to packaging it. And as a packager, you’re human in the end. As a user though, you just want stuff to work.
As a longtime Linux user, this isn’t really a problem for me, none of this is. But what about a new user? We need to address these issues at some point if we want Linux to be truly user-friendly.
Nix. I use it for everything, including all of my tools I use on my work MacBook.
There are many ways to use nix for this stuff, but personally I use home-manager in a flake-based setup. Versions of tools are all pinned in a lockfile which is committed to source control, so it’s easy to get my config and all my tools on a new machine without any breakage (it does require installing first, though).
It’s a great tool and has largely solved the pain of dealing with having to work on MacOS, for me.
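To give a feel for the workflow: once such a flake-based config exists, reproducing it on a new machine is a couple of commands (the repo URL and output name below are placeholders for whatever your config uses):

```sh
git clone https://github.com/example/dotfiles ~/dotfiles
home-manager switch --flake ~/dotfiles#myuser   # builds and activates every pinned tool
```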
Nix is a great suggestion, and I think I will be using it moving forward as well. Thanks. Ideally I want to use NixOS; do you know if Secure Boot is still a pain point with NixOS?
TBF, every time you install basically anything at all, you trust whoever hosts the stuff not to tamper with it. You’re already putting a lot of faith out there, and I’m sure a lot of software actually contains crypto-mineware or something else.
Would you prefer

```sh
$ curl xyz
$ chmod +x xyz
$ ./xyz
```

?
You can detect server-side whether curl is piping the script to Bash and running it vs just downloading it, and inject malicious code only in the case no one is viewing it
https://github.com/Stijn-K/curlbash_detect
So that would at least be a minor improvement
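The reason detection is possible at all is that a piped shell executes commands as they stream in, rather than after the download finishes, which you can demonstrate locally:

```sh
# "first" prints immediately, five seconds before the rest of the
# "script" even arrives down the pipe.
{ echo 'echo first'; sleep 5; echo 'echo second'; } | sh
```

A server can exploit that: while the shell is busy executing, it stops draining the pipe, the transfer stalls, and the server can infer it’s feeding a live shell rather than a file on disk.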
In most cases the script already installs a pre-compiled binary that can be anything, they wouldn’t need to make the script itself malicious if they were bad actors.
I mean, how about:

- Download the release for your arch from the releases page.
- Extract it to `~/.local/bin`.
- Run it.
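Spelled out, that’s roughly the following (asset and binary names are placeholders; take the real ones from the project’s releases page):

```sh
curl -LO https://github.com/example/tool/releases/latest/download/tool-linux-x64.zip
sha256sum tool-linux-x64.zip             # compare against the published checksum
unzip tool-linux-x64.zip -d ~/.local/bin
~/.local/bin/tool --version              # assumes ~/.local/bin is on your PATH
```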
I think you missed the point.
Why is that safer/better? That binary can do anything a shell script can, and it’s a lot harder to inspect.
- That’s been the way to acquire software since shortly after the dawn of time. You already know what you’re getting yourself into.
- There are SHA256 checksums of each binary file available in each release on GitHub. You can confirm the binary was not tampered with by comparing a locally computed checksum to the value in the release’s checksums file.
- Binaries can also be signed (not that signing keys have never leaked, but it’s still one step in the chain of trust).
- The install script is not hosted on GitHub. A misconfigured / compromised server can allow a bad actor to tamper with the install script that gets piped directly into your shell. The domain could also lapse and be re-registered by a bad actor to point to a malicious script. Really, there’s lots of things that can go wrong with that.

The point is that it is bad practice to just pipe a script to be directly executed in your shell. Developers should not normalize that bad practice.
If you trust them enough to use their binary, why don’t you trust them enough to run their install scripts as well?
Trust and security aren’t just about protecting from malice, but also mistakes.
For example, AUR packages are basically install scripts, and there have been a few that have done crazy things like delete a user’s /bin — not out of any malice, but rather simple human error.
Binaries are going to be much, much less prone to these mistakes because they are in languages the creators have more experience with, and are comfortable in. Just because I trust someone to write code that runs on my computer, doesn’t mean I trust them to write an install script, especially given how many footguns bash has.
How do you know the script hasn’t been compromised? Is every user competent enough to evaluate it and ensure it’s safe to run?

Using package managers to handle this provides a couple of things. First: most package managers have built-in mechanisms to ensure the binary is unmodified. Second: they provide a third party validating them.
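Those built-in mechanisms are also user-inspectable after the fact; for example (package name chosen arbitrarily):

```sh
dpkg --verify bash    # Debian/Ubuntu: check installed files against recorded metadata
pacman -Qkk bash      # Arch: same idea; reports altered or missing files
```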
> How do you know the script hasn’t been compromised?
You don’t, the same as you don’t know whether the binary has been compromised, just like when an npm package deleted files for Russian users. I get that running scripts from the internet without looking at them first to understand what they do is not secure, but downloading and running anything from the internet carries some amount of risk. How do you know that you won’t be mining cryptocurrency in addition to the original purpose of the binary? You don’t, unless you read the source code.
It all comes down to if you trust the provider or not. Personally, if I trust them enough to run binary files on my computer, I trust them enough to use their scripts for installation. I don’t agree that something is more unsafe just because it is a script.
> package manager
Not everything is provided with a package manager, and not everything is up to date in the OS-provided package manager. I agree that one should ideally use a package manager with third-party validation if that is an option.
- No one is talking about npm libraries; we’re talking about released packages.
- You absolutely can ensure a binary hasn’t been tampered with. It’s called checksumming.
- You’re confusing MITM attacks with supply-chain attacks. MITM attacks are far easier to pull off.
> Not everything is provided with a package manager
Yes, that’s precisely the problem we’re pointing out to you. If you’re going to provide software over the internet, provide a proper package with checksum validation. It’s not hard; stop providing bash scripts.
I’m gonna go out on a limb and say you find this more than mildly infuriating.
I think you and a lot of others are late to the idea that “mildly” is kind of a joke. Many things here are majorly infuriating. On the Reddit version, many of the top posts aren’t even major; they’re catastrophic, just absurd. I’ve yet to find anything mild.
You really should use some sort of package manager that has resistance against supply-chain attacks (think Linux distros).
You probably aren’t going to get yourself in trouble by downloading some binary from GitHub, but keep in mind GitHub has been used for malware in the past.
It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.
On the flip side, you can also just download the script from the site without piping it directly to bash if you want to review what it’s going to do before you run it.
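And reviewing first costs almost nothing, since you then run exactly the bytes you read (using bun’s advertised script URL as the example):

```sh
curl -fsSL https://bun.sh/install -o install.sh   # fetch, don't execute
less install.sh                                    # read what it's about to do
bash install.sh                                    # run only after review
```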
Would have been much better if they just pasted the (probably quite short) script into the readme so that I can just paste it into my terminal. I have no issue running commands I can have a quick look at.
I would never blindly pipe a script to be executed on my machine though. That’s just next level “asking to get pwned”.
These scripts are usually longer than that and do some checking of which distro you are running before doing something distro-specific.
Doing something distro-specific in an install script for a single binary seems a bit overcomplicated to me, and definitely not something I want to blindly pipe into my shell.
The bun install script in this post determines what platform you’re on, defines a bunch of logging convenience functions, downloads the latest bun release zip file from GitHub, extracts and manually places the binary in the right spot, then determines what shell you’re using and installs autocompletion scripts.
Like, c’mon. That’s a shitload of unnecessary stuff to ask the user to blindly pipe into their shell, all of which could be avoided by putting a couple sentences into a readme. Bare minimum, that script should just be checked into their git repo and documented in their Readme/user docs, but they shouldn’t encourage anyone to pipe it into their shell.
> It’s bad practice to do it, but it makes it especially easy for end users who already trust both the source and the script.
You’re not wrong, but this is what led to the xz “hack” not too long ago. When it comes to data, trust is a fickle mistress.