curl https://some-url | sh
I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What’s stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?
I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written. Don’t we have something better than “sh” for this? Something with less power to do harm?
Just use a VM or container for installing software. Let it go horribly wrong in an isolated place.
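For instance, a throwaway container makes a reasonable sandbox for one of these scripts (a sketch using podman; the image choice is arbitrary and `https://some-url` is the thread's placeholder):

```shell
# Run the installer inside a disposable container; --rm discards it afterward,
# so anything the script does stays isolated from the host.
podman run --rm -it debian:stable bash -c \
  'apt-get update && apt-get install -y curl && curl -fsSL https://some-url | sh'
```

The same line works with `docker` in place of `podman`. It won't catch a script that phones home, but it does keep your home directory out of reach.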
I also feel incredibly uncomfortable with this. Ultimately it comes down to whether you trust the application or not. If you do, then this isn’t really a problem, since they’re getting code execution on your machine regardless. If you don’t, well, then don’t install the application. In general I don’t like installing applications that aren’t from my distro’s official repositories, mostly because I like knowing that at least the maintainers trust it and think it’s safe; any software outside that is more of an unknown.
Also, the script is unlikely to be malicious if the application is not. Further, I’m not sure a manual install really protects anyone from anything. Inexperienced users will go to great lengths and jump through some impressive hoops to try to make something work, sometimes to their own detriment. My favorite example of this is the LTT Linux challenge: apt did everything it could to warn that the Steam package was broken and that he probably didn’t want to install it, and instead of reading the error he just blindly typed out the confirmation statement. Nothing will save a user from ruining their system if they’re bound and determined to do something.
In this case apt should have failed gracefully. There is no reason for it to continue if a package is broken. If you want to force a broken package, that can be its own argument.
I’m not sure that would’ve made a difference. It already makes you go out of your way to force a broken package. This has been discussed in places before, but the simple fact of the matter is that a user who doesn’t understand what they’re doing will persevere. Putting up barriers to protect users is a good thing; spending all your time and effort covering every edge case is a waste, because users will find ways to shoot themselves in the foot.
This is just normal Linux poor security. Even giants like docker do this.
Docker doesn’t do this anymore. Their install script got moved to “only do this for testing”.
Use a convenience script. Only recommended for testing and development environments.
Now, their install page recommends packages/repos first, and then a manual install of the binaries second.
So basically the install instructions for Lemmy? No Lemmy data is safe.
I don’t just cringe, I open a bug report. You can be the change that fixes this.
Can we also open bug reports for open-source projects that base their community on Discord?
Yes.
One of the few worthwhile comments on Lemmy…
Yeah, I hate this stuff too. I usually pipe it into a file, figure out what it’s doing, and manually install the program from there.
FWIW I’ve never found anything malicious in these scripts, but my internal dialogue starts screaming when I see them in the wild. I don’t want to run some script without knowing what it’s touching; malicious or not, it’s a PITA.
As a Linux user, I like to know what’s happening under the hood as best I can, and these scripts go against that.
If you’re worried, download it into a file first and read it.
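Concretely, that might look like the following (using the thread's placeholder URL):

```shell
# Download first, read, then run -- instead of piping straight into sh.
curl -fsSL https://some-url -o install.sh
less install.sh     # audit what it actually does
sh install.sh       # run it only once you're satisfied
```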
No
@cschreib
No.

Well yeah … the native package manager. Has the bonus of the installed files being tracked.
And often official package maintainers are a lot more security conscious about how packages are built as well.
I agree.
On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be “never”.
The non-rolling distros can take a year to update a package, even if they decide to include it.
Honestly, it’s a mess, and I think we’re in that awkward state Linux was in when everyone seemed to collectively realize SysV init sucks, and you saw dinit, runit, OpenRC, s6, systemd, upstart, and initng popping up - although many of these were started after systemd; it’s just for illustration. Most distributions settled on systemd, for better or worse. Now we see something similar: the profusion of package managers really is a Problem, and people are trying to address it with solutions like Snap, AppImage, and Flatpak.
As a software developer, I’d like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don’t know if they’re mutually exclusive objectives.
For my money, I’d go with pacman. It’s easy to write PKGBUILDs and to get packages into AUR, but requires users to intentionally use AUR. I wish it had a better migration process (AUR packages promoted to community, for instance). It’s fairly trivial for a distribution to “pin” releases so that users aren’t using a rolling upgrade.
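For the unfamiliar, a PKGBUILD is just a small shell fragment. A minimal sketch for a hypothetical package called "mytool" (every name and the URL here are made up, purely illustrative):

```shell
# Minimal PKGBUILD sketch -- hypothetical package, illustrative only.
pkgname=mytool
pkgver=1.0.0
pkgrel=1
pkgdesc="A hypothetical example tool"
arch=('x86_64')
url="https://example.com/mytool"
license=('MIT')
source=("$url/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')   # real packages pin the actual checksum

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg -si` next to this file builds and installs the package locally, and pushing the same file to the AUR is essentially all it takes to publish it.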
Alpine’s is also quite nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, it clearly requires much more work from a community of volunteers, and it can occasionally be frustrating for everyone: for those of us contributors who only interact with the process a couple of times a year, it’s easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.
I’m about to embark on contributing to Void, which I expect to be similar to Alpine.
Red Hat and deb? All I can do is build packages for them, host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.
Oh, Nix. I tried, but the package definitions are a nightmare, and installing just enough of Nix on your computer to test and submit builds takes gigabytes of disk space. I actively dislike working with Nix. Guix is nearly as bad. I used to like Lisp - it’s certainly an interesting and educational tool - but I’ve really started to object to it more and more as I encounter it in projects like Nyxt and Guix, where you’re forced to use it if you want to do any customization.
But this is the world of OSS: you either labor in obscurity; or you self-promote your software - which I hate: if I wanted to do marketing, I’d be in marketing; or you hope enough users in enough distributions volunteer to maintain packages for their distros so that people can get to it. And you still have to address the issue of making it easy for people to use your software.
curl <URL> | sh
is, frankly, a really elegant, easy solution for software developers… if only it weren’t for the fact that the world is full of shitty, unethical people forcing us to distrust each other.

It’s all sub-optimal and needs a solution. I’m not convinced the various containerizations are the right direction; does “rg” really need to run in a container? Maybe it makes sense for big suites with a lot of dependencies, like GIMP, but even so, what’s the solution for the vast majority of OSS software, which is just little CLI or TUI tools?
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD; and Snap, AppImage, and Flatpak don’t seem to be gaining broad traction. I’m starting to think something like a yay that installs into $HOME. Most systems are single-user anyway; something that leverages Arch’s huge package repositories, but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it’s Nix, so I’d rather not.
The non-rolling distros can take a year to update a package, even if they decide to include it.
There is a reason why they do this. Stable release distros, particularly Debian, refuse to update packages beyond fixing vulnerabilities, as a way to ensure the system changes minimally. This means, for example, that if software depends on a library, it will keep working for the lifecycle of a stable release. Sometimes the latest isn’t the greatest.
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD
You swapped PKGBUILD and APKBUILD 🙃
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory.
There is a reason why they do this.
Of course. It also prevents people from getting all the improvements that aren’t security fixes. It’s especially bad for software engineers who are developing applications that need a non-security bug fix or new feature. It’s fine if all you need is a box that will run the same version of some software, sitting forgotten in a closet that gets walled in some day. IMO, it’s a crappy system for anything else.
You swapped PKGBUILD and APKBUILD 🙃
I did! I’ve been trying to update packages in both recently. The similarities are utterly frustrating, as they’re almost identical; the biggest difference between Alpine and Arch is the packaging process. If they were the same format - and they’re honestly so close it’s absurd - it’d make packagers’ lives easier.
I may have mentioned I haven’t yet started Void, but I expect it to be similarly frustrating: so very, very similar.
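To illustrate just how close the two formats are, here is an Alpine APKBUILD sketch for the same hypothetical "mytool" package (all names invented): mostly the same variables and functions, with only small differences like `arch` being a string instead of an array and `pkgrel` starting at 0.

```shell
# Minimal APKBUILD sketch -- note how little differs from a PKGBUILD.
pkgname=mytool
pkgver=1.0.0
pkgrel=0
pkgdesc="A hypothetical example tool"
url="https://example.com/mytool"
arch="all"
license="MIT"
source="$url/$pkgname-$pkgver.tar.gz"

build() {
  make
}

package() {
  make DESTDIR="$pkgdir" install
}
```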
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory
Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee system updates break user installs, and the only way it wouldn’t would be if system installs knew about user installs and also updated those, which would defeat the whole purpose.
So you end up back with containers, or AppImages, Snap, or Flatpak. Although, of all of these, AppImages and podman are the most sane, since Snap and Flatpak are designed to manage system-level software, which isn’t really the issue here.
It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, or V. I’d include C, but the temptation to dynamically link is so ingrained in C that I rarely see truly statically linked C projects.
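As a sketch of why those toolchains make this easy: in Go, for example, a single environment variable is usually enough to get a fully self-contained binary (command sketch; "mytool" is a placeholder project name):

```shell
# CGO_ENABLED=0 forces pure-Go code paths, so the result has no libc dependency
CGO_ENABLED=0 go build -o mytool .

# 'file' or 'ldd' can confirm there are no dynamic library dependencies
file mytool
```

A binary like that can be copied to any distro of the same architecture and just run, which is exactly the property that sidesteps the packaging problem.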
It’s especially bad for software engineers who are developing applications that need a non-security bug fix or new feature
This is what they tell themselves: that they need that fix. So developers get themselves unstable packages — but wait! If they update just one version further, then compatibility with something will break, and that requires work to fix.
So what happens is they pin and/or vendor dependencies and don’t update them, even for security updates. I find this quite concerning. For example, take RustDesk, a popular Rust-based remote desktop application. Here’s a quick audit of its libraries using cargo-audit:
[nix-shell:~/vscode/test/rustdesk]$ cargo-audit audit
    Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
      Loaded 742 security advisories (from /home/moonpie/.cargo/advisory-db)
    Updating crates.io index
warning: couldn't update crates.io index: registry: No such file or directory (os error 2)
    Scanning Cargo.lock for vulnerabilities (825 crate dependencies)

Crate:    idna
Version:  0.5.0
Title:    `idna` accepts Punycode labels that do not produce any non-ASCII when decoded
Date:     2024-12-09
ID:       RUSTSEC-2024-0421
URL:      https://rustsec.org/advisories/RUSTSEC-2024-0421

Crate:    libgit2-sys
Version:  0.14.2+1.5.1
Title:    Memory corruption, denial of service, and arbitrary code execution in libgit2
Date:     2024-02-06
ID:       RUSTSEC-2024-0013
URL:      https://rustsec.org/advisories/RUSTSEC-2024-0013
Severity: 8.6 (high)
Solution: Upgrade to >=0.16.2

Crate:    openssl
Version:  0.10.68
Title:    ssl::select_next_proto use after free
Date:     2025-02-02
ID:       RUSTSEC-2025-0004
URL:      https://rustsec.org/advisories/RUSTSEC-2025-0004
Solution: Upgrade to >=0.10.70

Crate:    protobuf
Version:  3.5.0
Title:    Crash due to uncontrolled recursion in protobuf crate
Date:     2024-12-12
ID:       RUSTSEC-2024-0437
URL:      https://rustsec.org/advisories/RUSTSEC-2024-0437
Solution: Upgrade to >=3.7.2

Crate:    ring
Version:  0.17.8
Title:    Some AES functions may panic when overflow checking is enabled.
Date:     2025-03-06
ID:       RUSTSEC-2025-0009
URL:      https://rustsec.org/advisories/RUSTSEC-2025-0009
Solution: Upgrade to >=0.17.12

Crate:    time
Version:  0.1.45
Title:    Potential segfault in the time crate
Date:     2020-11-18
ID:       RUSTSEC-2020-0071
URL:      https://rustsec.org/advisories/RUSTSEC-2020-0071
Severity: 6.2 (medium)
Solution: Upgrade to >=0.2.23

Crate:    atk
Version:  0.18.0
Warning:  unmaintained
Title:    gtk-rs GTK3 bindings - no longer maintained
Date:     2024-03-04
ID:       RUSTSEC-2024-0413
URL:      https://rustsec.org/advisories/RUSTSEC-2024-0413

Crate:    atk-sys
Version:  0.18.0
Warning:  unmaintained
Title:    gtk-rs GTK3 bindings - no longer maintained
Date:     2024-03-04
ID:       RUSTSEC-2024-0416
URL:      https://rustsec.org/advisories/RUSTSEC-2024-0416
I also checked rustscan and found similar issues.
I’ve pruned the dependency tree output and some of the other unmaintained-package warnings, but some of these CVEs are bad. Stuff like this is why I don’t trust developers to make packages; they get lazy and sloppy at the cost of security. Stable release distributions, on the other hand, inflict security upgrades on everybody, which is good.
Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee system updates break user installs, and the only way it wouldn’t would be if system installs knew about user installs and also updated those, which would defeat the whole purpose.
???. This is very incorrect. I don’t know where to start. If a package manager manages its own dependencies/libraries, like Nix portable installs, or is a static binary (e.g. Soar), then system installs will not interfere with the “user” package manager at all. You could also use something like launchd (macOS) or systemd user services (Linux) to update these packages with user-level privileges, in the user’s home directory.
Also, I don’t know where you got the idea that flatpaks manage “system level” software.
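The systemd user-service idea looks roughly like this (a sketch; the unit names and the `mypkgmgr` command are hypothetical, and a temp directory stands in for `~/.config/systemd/user/`):

```shell
# Sketch of a systemd *user* service + timer for updating a user-level
# package manager. "mypkgmgr" is a made-up command. Real units would be
# written to ~/.config/systemd/user/; a temp dir is used here for the demo.
unitdir=$(mktemp -d)

cat > "$unitdir/pkg-update.service" <<'EOF'
[Unit]
Description=Update user-level packages

[Service]
Type=oneshot
ExecStart=%h/.local/bin/mypkgmgr upgrade
EOF

cat > "$unitdir/pkg-update.timer" <<'EOF'
[Unit]
Description=Daily user-level package updates

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

# After copying the units into ~/.config/systemd/user/, enable with:
#   systemctl --user enable --now pkg-update.timer
```

Everything runs with the user's own privileges; root never gets involved, so system updates and user installs stay out of each other's way.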
It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V.
I dislike these because they commonly also come with version pinning and vendored dependencies. But you should check out Soar and its repository. It also packages AppImages, and “FlatImages”, which seem to be similar to Flatpaks but closer to AppImages in distribution.
As an Arch user, yeah, PKGBUILDs are a very good solution, at least for Arch Linux specifically (or other distros with the same directory-tree best practices). I have packaged a dozen or so of my own projects as PKGBUILDs, and installed 150 or so from the AUR. It gives users a very easy way to install stuff essentially manually while still keeping it under control. And you can just put it into the AUR, so other users can either just use it, or first read through, understand, maybe adapt, and then use it. It shows that packages don’t have to be solely the author’s, nor the distro maintainers’, responsibility.
This is simpler than the download, ./configure, make, make install steps we had some decades ago, but not all that different in that you wind up with arbitrary, unmanaged stuff.
Preferably use the distro-native packages, or else their build system if it’s easily available (e.g. the AUR on Arch).
Back up your data, folks. You’re probably more likely to accidentally
rm -rf
yourself than to download a script that will do it.

To be fair, that’s because Linux funnels you into the safeguard-free terminal, where it’s much harder to visualize what’s going on and there are fewer checks to make sure you’re doing what you mean to be doing. I know it’s been a trend for a long time that software devs think they’re immune from mistakes, but… they aren’t. And nor is anyone else.
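For what it’s worth, a couple of small terminal safeguards do exist (GNU coreutils flags; the alias is just a sketch you’d put in your shell rc file):

```shell
# -I prompts once before deleting more than three files or recursing --
# less nagging than -i, but still catches a stray `rm -rf`.
alias rm='rm -I'

# GNU rm also defaults to --preserve-root, which refuses a literal `rm -rf /`.
```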
It’s convenience over security, something that creeps in anywhere there’s popularity. For those who just want x or y to work without spending their day in the terminal, these scripts are great.
You’d expect these kinds of scripts to be well tested against their targets, and for the user to have/identify the correct target. Their sources should at least point out the security issue and advise grabbing and inspecting the script before straight-up piping it, though. Some I have seen do this.
Running them like this means you put 100% trust in the author, the source and your DNS. Not a big ask for some. Unthinkable for others.
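One mitigation for the source/DNS trust problem is when authors publish a checksum out of band, so you can verify the download before running it. A sketch (a local file stands in for the download here; with a real script you’d `curl -fsSL https://some-url -o install.sh` first, and the published value would come from the project’s site):

```shell
# Stand-in for the downloaded installer.
printf 'echo hello\n' > install.sh

# In reality this value is published by the author over a separate channel;
# we compute it here only so the demo is self-contained.
published=$(sha256sum install.sh | cut -d' ' -f1)

# Abort unless the file matches the published checksum.
echo "$published  install.sh" | sha256sum -c - || exit 1

# Only now run it.
sh install.sh
```

This still requires trusting the author, but it removes DNS and a compromised download server from the equation.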
I always try to avoid these, unless the application I’m installing has its own package management functionality, like rustup or Nix. Everything else should be handled by the system package manager.
Ironically, it is rustup that triggered me with this most recently… https://www.rust-lang.org/tools/install
Use containers for installing things.
They’re sandboxed and controllable by you.
And don’t forget to
sudo
!