curl https://some-url | sh

I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What’s stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?

I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written. Don’t we have something better than “sh” for this? Something with less power to do harm?

    • Possibly linux (7 points, 4 months ago)

      Download it and then read it. Curl has a different user agent than web browsers.

      • billwashere (5 points, 4 months ago)

        Yeah I guess if they were being especially nefarious they could supply two different scripts based on user-agent. But I meant what you said anyways… :) I download and then read through the script. I know this is a common thing and people are wary of doing it, but has anyone ever heard of there actually being something disreputable in one of these scripts? I personally haven’t yet.
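
For what it's worth, the two-scripts trick is trivial server-side. A minimal sketch (the function name and payload strings are made up) of how a malicious host could branch on the User-Agent header:

```shell
# Hypothetical server-side logic: send one script to command-line fetchers
# and a different, innocent-looking one to browsers. Strings are illustrative.
pick_script() {
  case "$1" in
    *curl*|*Wget*) printf 'cli-only-payload\n' ;;   # what a piped install gets
    *)             printf 'innocuous-script\n' ;;   # what a human reviewer sees
  esac
}
pick_script 'curl/8.5.0'
pick_script 'Mozilla/5.0 (X11; Linux x86_64)'
```

Which is why "download it and then read it" only helps if you run the exact bytes you read, rather than fetching a second time.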

        • Possibly linux (1 point, 4 months ago)

          I’ve seen it many times. It usually takes the form of fake websites that are impersonating the real thing. It is easy to manipulate Google results. Also, there have been a few cases where a bad design and a typo result in data loss.
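
The typo-driven data-loss cases usually have the same shape as the old Steam installer bug (an `rm -rf "$STEAMROOT/"*` that could empty a drive when the variable was unset). A safe simulation, echoing the command instead of running it:

```shell
# If INSTALL_DIR ends up empty (typo, failed command substitution), a scoped
# delete becomes a delete of "/". echo makes this harmless to run.
INSTALL_DIR=""                 # imagine this assignment silently failed
echo rm -rf "$INSTALL_DIR/"    # prints: rm -rf /
```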

  • @[email protected] (2 points, 4 months ago)

    What does curl even do? Unstraighten? Seems like any other command I’d blindly paste from an internet thread into a terminal window to try to get something on Linux to work.

    • irelephant [he/him]🍭 (3 points, 4 months ago)

      curl sends HTTP requests; curl lemmy.world would return the HTML of lemmy.world’s homepage. Piping it into bash means you are fetching a shell script and running it.
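
Mechanically that's all there is to it: sh executes whatever arrives on its stdin. A harmless local stand-in for the curl download shows the same thing:

```shell
# sh reads commands from stdin and executes them; curl's only role in
# `curl URL | sh` is producing those bytes. printf stands in for curl here.
printf 'echo hello from a streamed script\n' | sh
# prints: hello from a streamed script
```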

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        4 months ago

        I think he knows but is commenting on the pathetic state of security culture on Linux. (“Linux is secure so I can do anything without concerns”)

        • irelephant [he/him]🍭 (1 point, 4 months ago)

          Security through obscurity strikes again.

          I usually just read the shell script, and then paste that into bash.

      • @[email protected]
        link
        fedilink
        English
        14 months ago

        Why would they call it that when it’s not a client for all urls? It’s more like httpc

        • IngeniousRocks (They/She) (1 point, 4 months ago)

          What URLs is it not a client for? As far as I understand, it will pull whatever data is presented by whatever URL. cURL doesn’t really care whether the protocol is HTTP; you can use it with FTP as well. I haven’t tested it yet, but now that I’m curious I wanna see if it works for SMB.

          • @[email protected]
            link
            fedilink
            English
            14 months ago

            I’m not arguing it should be, but an easy example of a scheme it doesn’t support is mailto. However, I was surprised at the list it does support, including MQTT, IMAP, and POP3.

  • @[email protected]
    link
    fedilink
    24 months ago

    I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written.

    So you are concerned with security, but you understand that there aren’t actually any security concerns… and actually you’re worried about coding mistakes in shitty Bash?

  • @[email protected]
    link
    fedilink
    164 months ago

    For security reasons, I review every line of code before it’s executed on my machine.

    Before I die, I hope to take my ‘93 dell optiplex out of its box and finally see what this whole internet thing is about.

  • @[email protected]
    link
    fedilink
    24 months ago

    I also feel incredibly uncomfortable with this. Ultimately it comes down to whether you trust the application or not. If you do, then this isn’t really a problem, as they’re getting code execution on your machine regardless. If you don’t, well then don’t install the application. In general I don’t like installing applications that aren’t from my distro’s official repositories, mostly because I like knowing that at least the distro maintainers trust it and think it’s safe, as opposed to software that isn’t in the repos, which is more of an unknown.

    Also, it’s unlikely for the script to be malicious if the application is not. Further, I’m not sure a manual install really protects anyone from anything. Inexperienced users will go to great lengths and jump through some impressive hoops to try to make something work, to their own detriment sometimes. My favorite example of this is the LTT Linux challenge: apt did EVERYTHING it could think to do to alert him that the steam package was broken and that he probably didn’t want to install it, and instead of reading the error he just blindly typed out the confirmation statement. Nothing will save a user from ruining their system if they’re bound and determined to do something.

    • Scary le Poo (2 points, 4 months ago)

      In this case apt should have failed gracefully. There is no reason for it to continue if a package is broken. If you want to force a broken package, that can be its own argument.

      • @[email protected]
        link
        fedilink
        24 months ago

        I’m not sure that would’ve made a difference. It already makes you go out of your way to force a broken package. This has been discussed in places before, but the simple fact of the matter is that a user who doesn’t understand what they’re doing will persevere. Putting up barriers to protect users is a good thing; spending all your time and effort covering every edge case is a waste, because users will find ways to shoot themselves in the foot.

  • Possibly linux (3 points, 4 months ago)

    Just use a VM or container for installing software. That way, if it goes horribly wrong, it happens in an isolated place.

  • @[email protected]
    link
    fedilink
    14
    edit-2
    4 months ago

    It isn’t more dangerous than running a binary downloaded from the same people by any other means. It isn’t more dangerous than the downloadable installer programs common on Windows.

    TBH macOS has had the more secure idea of, by default, sandboxing applications downloaded directly without any sort of installer. Linux is starting to head in that direction now with things like Flatpak.

  • @[email protected]
    link
    fedilink
    English
    18
    edit-2
    4 months ago

    The security concerns are often overblown. The bigger problem for me is I don’t know what kind of mess it’s going to make or whether I can undo it. If it’s a .deb or even a tarball to extract in /usr/local then I know how to uninstall.

    I will still use them sometimes but for things I know and understand - e.g. rustup will put things in ~/.rustup and update the PATH in my shell profile and because I know that’s what it does I’m happy to use the automation on a new system.

      • @[email protected]
        link
        fedilink
        English
        11
        edit-2
        4 months ago

        So tell me: if I download and run a bash script over https, or a .deb file over https and then install it, why is the former a “security nightmare” and the latter not?

        • @[email protected]
          link
          fedilink
          English
          24 months ago

          For example: A compromised host could detect whether you are downloading the script or piping it.
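
That detection is possible because sh executes a piped script as the bytes arrive, so the server observes slow, execution-paced reads instead of one fast download. The streaming behavior is easy to see locally:

```shell
# The first command runs before the second has even been written to the
# pipe, which is why piping to sh is observable from the sending side.
{ echo 'echo first'; sleep 1; echo 'echo second'; } | sh
```

Run it and "first" appears about a second before "second".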

        • @[email protected]
          link
          fedilink
          English
          24 months ago

          Both are a security nightmare, if you’re not verifying the signature.

          You should verify the signature of all things you download before running it. Be it a bash script or a .deb file or a .AppImage or to-be-compiled sourcecode.

          Best thing is to just use your repo’s package manager. Apt will not run anything that isn’t properly signed by a package team member’s release PGP key.
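
Outside a package manager, the verify-before-run idea can be approximated by hand. An offline sketch using a checksum (file names are illustrative; in a real flow the script and its .sha256, plus ideally a PGP signature, come from the publisher, with the checksum obtained over a separate channel):

```shell
# Stand-in for the download step; in reality both files come from the
# publisher rather than being generated locally.
printf 'echo ok\n' > install.sh
sha256sum install.sh > install.sh.sha256
# Run the script only if the checksum verifies.
sha256sum -c install.sh.sha256 && sh install.sh
```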

          • @[email protected]
            link
            fedilink
            English
            24 months ago

            I have to assume that we’re in this situation because the app does not exist in our distro’s repo (or homebrew or whatever else). So how do you go about this verification? You need a trusted public key, right? You wouldn’t happen to be downloading that from the same website that you’re worried might be sending you compromised scripts or binaries? You wouldn’t happen to be downloading the key from a public keyserver and assuming it belongs to the person whose name is on it?

            This is such a ridiculously high bar to avert a “security nightmare”. Regular users will be better off ignoring such esoteric suggestions and just looking for lots of stars on GitHub.

          • @[email protected]
            link
            fedilink
            14 months ago

            Hilarious, but not a security issue. Just shitty Bash coding.

            And I agree it’s easier to make these mistakes in Bash, but I don’t think anyone here is really making the argument that curl | bash is bad because Bash is a shitty error-prone language (it is).

            Definitely the most valid point I’ve read in this thread though. I wish we had a viable alternative. Maybe the Linux community could work on that instead of moaning about it.

            • @[email protected]
              link
              fedilink
              English
              14 months ago

              Hilarious, but not a security issue. Just shitty Bash coding.

              It absolutely is a security issue. I had a little brain fart, but what I meant to say was “Security isn’t just protection from malice, but also protection from mistakes”.

              Let’s put it differently:

              Hilarious, but not a security issue. Just shitty C coding.

              This is a common sentiment people say about C, and I have the same opinion about it. I would rather we use systems in place that don’t give people the opportunity to make mistakes.

              I wish we had a viable alternative. Maybe the Linux community could work on that instead of moaning about it.

              Viable alternative for what? Packaging.

              I personally quite like the systems we have. The “install anything from the internet” approach is exactly how Windows ends up with so much malware. The best way to package software for users is via a package manager, which not only puts more eyes on the software; many package managers also have built-in functionality that makes the process more reliable and secure. For example, signatures create a chain of trust. I really like Nix as a distro-agnostic package manager because, due to the unique way they do things, it’s impossible for one package’s build process to interfere with another.

              If you want to do “install anything from the internet” it’s best to do it with containers and sandboxing. Docker/podman for services, and Flatpak for desktop apps, where it’s pretty easy to publish to flathub. Both also seem to be pretty easy, and pretty popular — I commonly find niche things I look at ship a docker image.

              • @[email protected]
                link
                fedilink
                14 months ago

                This is a common sentiment people say about C, and I have the same opinion about it. I would rather we use systems in place that don’t give people the opportunity to make mistakes.

                The issue with C is it lets you make mistakes that commonly lead to security vulnerabilities - allowing a malicious third party to do bad stuff.

                The Bash examples you linked are not security vulnerabilities. They don’t let malicious third parties do anything. They don’t have CVEs; they’re just straight-up data-loss bugs. Bad ones, sure. (And I fully support not using Bash where feasible.)

                Viable alternative for what? Packaging.

                A viable way to install something that works on all Linux distros (and Mac!), and doesn’t require root.

                The reason people use curl | bash is precisely so they don’t have to faff around making a gazillion packages. That’s not a good answer.

                • @[email protected]
                  link
                  fedilink
                  English
                  1
                  edit-2
                  4 months ago

                  A viable way to install something that works on all Linux distros (and Mac!), and doesn’t require root.

                  Nix portable installations, Soar.

                  The reason people use curl | bash is precisely so they don’t have to faff around making a gazillion packages.

                  Developers shouldn’t be making packages. They do things like vendor and pin dependencies, which lead to security and stability issues later down the line. See my other comment where I do a quick look at some of these issues.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          4 months ago

          By definition nothing

          The point you appear to be making is “everything is insecure so nothing is” and the point others are making is “everything is insecure so everything is”

          • @[email protected]
            link
            fedilink
            14 months ago

            No, the point I am making is there are no additional security implications from executing a Bash script that someone sends you over executing a binary that they send you. I don’t know how to make that clearer.

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          4 months ago

          You’re telling me that you don’t verify the signatures of the binaries you download before running them too?!? God help you.

          I download my binaries with apt, which will refuse to install the binary if the signature doesn’t match.

          • @[email protected]
            link
            fedilink
            14 months ago

            No because there’s very little point. Checking signatures only makes sense if the signatures are distributed in a more secure channel than the actual software. Basically the only time that happens is when software is distributed via untrusted mirror services.

            Most software I install via curl | bash is first-party hosted and signatures don’t add any security.

            • @[email protected]
              link
              fedilink
              English
              24 months ago

              No publishing infrastructure should be trusted. There are countless historical examples of this.

              Use crypto. It works.

              • @[email protected]
                link
                fedilink
                14 months ago

                Crypto is used. It is called TLS.

                You have to have some trust of publishing infrastructure, otherwise how do you know your signatures are correct?

                • @[email protected]
                  link
                  fedilink
                  English
                  14 months ago

                  TLS is a joke because of X.509.

                  We don’t need to trust any publishing infrastructure, because the PGP private keys don’t live on the publishing infrastructure. We solved this issue in the 90s.

  • Lucy :3 (15 points, 4 months ago)

    Well yeah … the native package manager. Has the bonus of the installed files being tracked.

    • I agree.

      On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be “never”.

      The non-rolling distros can take a year to update a package, even if they decide to include it.

      Honestly, it’s a mess, and I think we’re in that awkward state Linux was in when everyone seemed to collectively realize sysv init sucks, and you saw dinit, runit, OpenRC, s6, systemd, upstart, and initng popping up - although many of these were started after systemd; it’s just for illustration. Most distributions settled on systemd, for better or worse. Now we see something similar: the profusion of package managers really is a problem, and people are trying to address it with solutions like Snap, AppImage, and Flatpak.

      As a software developer, I’d like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don’t know if they’re mutually exclusive objectives.

      For my money, I’d go with pacman. It’s easy to write PKGBUILDs and to get packages into AUR, but requires users to intentionally use AUR. I wish it had a better migration process (AUR packages promoted to community, for instance). It’s fairly trivial for a distribution to “pin” releases so that users aren’t using a rolling upgrade.
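
As an illustration of how low that barrier is, here's a hypothetical minimal PKGBUILD for a single-binary tool; every name and URL is a placeholder:

```shell
# Hypothetical PKGBUILD; pkgname, url, and source are placeholders.
pkgname=mytool
pkgver=1.0.0
pkgrel=1
pkgdesc="Example CLI tool"
arch=('x86_64')
url="https://example.com/mytool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')  # real submissions pin a checksum here

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  install -Dm755 mytool "$pkgdir/usr/bin/mytool"
}
```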

      Alpine’s is also nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, it clearly requires much more work from a community of volunteers, and it can occasionally be frustrating for everyone: for us contributors who only interact with the process a couple of times a year, it’s easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.

      I’m about to go on a journey of contributing to Void, which I expect to be similar to Alpine.

      Red Hat and deb? All I can do is build packages for them and host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.

      Oh, Nix. I tried, but the package definitions are a nightmare, and just getting enough of Nix onto your computer to test and submit builds takes gigabytes of disk space. I actively dislike working with Nix. GUIX is nearly as bad. I used to like Lisp - it’s certainly an interesting and educational tool - but I’ve started to object to it more and more as I encounter it in projects like Nyxt and GUIX, where you’re forced to use it if you want to do any customization.

      But this is the world of OSS: you either labor in obscurity; or you self-promote your software - which I hate: if I wanted to do marketing, I’d be in marketing. Or you hope enough users in enough distributions volunteer to manage packages for their distros that people can get to it. And you still have to address the issue of making it easy for people to use your software. curl <URL> | sh is, frankly, a really elegant, easy solution for software developers… if only it weren’t for the fact that the world is full of shitty, unethical people forcing us to distrust each other.

      It’s all sub-optimal, and needs a solution. I’m not convinced the various containerizations are the right direction; does “rg” really need to be run in a container? Maybe it makes sense for big suites with a lot of dependencies, like GIMP, but even so, what’s the solution for the vast majority of OSS software, which is just little CLI or TUI tools?

      Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost-identical-but-just-different-enough-to-be-incompatible PKGBUILD; and Snap, AppImage, and Flatpak don’t seem to be gaining broad traction. I’m starting to think of something like a yay that installs into $HOME. Most systems are single-user anyway; something that leverages Arch’s huge package repositories, but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it’s Nix, so I’d rather not.

      • @[email protected]
        link
        fedilink
        English
        24 months ago

        The non-rolling distros can take a year to update a package, even if they decide to include it.

        There is a reason why they do this. Stable release distros, particularly Debian, refuse to update packages beyond fixing vulnerabilities, as a way to ensure that the system changes minimally. This means, for example, that if a piece of software depends on a library, it will keep working for the lifecycle of a stable release. Sometimes latest isn’t the greatest.

        Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD

        You swapped PKGBUILD and APKBUILD 🙃

        I’m starting to think something like a yay that installs into $HOME.

        Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory.

        • There is a reason why they do this.

          Of course. It also prevents people from getting all the improvements that aren’t security fixes. It’s especially bad for software engineers who are developing applications that need a non-security bug fix or new feature. It’s fine if all you need is a box that’s going to run the same version of some software, sitting forgotten in a closet that gets walled in some day. IMO, it’s a crappy system for anything else.

          You swapped PKGBUILD and APKBUILD 🙃

          I did! I’ve been trying to update packages in both recently. The similarities are utterly frustrating, as they’re almost identical; the biggest difference between Alpine and Arch is the packaging process. If they used the same format - and they’re honestly so close it’s absurd - it’d make packagers’ lives easier.

          I may have mentioned I haven’t yet started Void, but I expect it to be similarly frustrating: so very, very similar.

          I’m starting to think of something like a yay that installs into $HOME.

          Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory

          Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee system updates break user installs, and the only way it couldn’t were if system installs knew about user installs and also updated those, which would defeat the whole purpose.

          So you end up back with containers, or AppImage, Snap, or Flatpak. Although, of all of these, AppImage and podman are the most sane, since Snap and Flatpak are designed to manage system-level software, which isn’t much of an issue.

          It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, or V. I’d include C, but the temptation to dynamically link is so ingrained in C that I rarely see truly statically linked C projects.

          • @[email protected]
            link
            fedilink
            English
            14 months ago

            It’s especially bad for software engineers who are developing applications that need a non-security bug fix or new feature

            This is what they tell themselves: that they need that fix. So developers get themselves unstable packages - but wait! If they update just one version further, then compatibility with something will break, and that requires work to fix.

            So what happens is they pin and/or vendor dependencies, and don’t update them, even for security updates. I find this quite concerning. For example, Rustdesk, a popular rust based remote desktop software. Here’s a quick audit of their libraries using cargo-audit:

            [nix-shell:~/vscode/test/rustdesk]$ cargo-audit audit
                Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
                  Loaded 742 security advisories (from /home/moonpie/.cargo/advisory-db)
                Updating crates.io index
            warning: couldn't update crates.io index: registry: No such file or directory (os error 2)
                Scanning Cargo.lock for vulnerabilities (825 crate dependencies)
            Crate:     idna
            Version:   0.5.0
            Title:     `idna` accepts Punycode labels that do not produce any non-ASCII when decoded
            Date:      2024-12-09
            ID:        RUSTSEC-2024-0421
            URL:       https://rustsec.org/advisories/RUSTSEC-2024-0421
            
            Crate:     libgit2-sys
            Version:   0.14.2+1.5.1
            Title:     Memory corruption, denial of service, and arbitrary code execution in libgit2
            Date:      2024-02-06
            ID:        RUSTSEC-2024-0013
            URL:       https://rustsec.org/advisories/RUSTSEC-2024-0013
            Severity:  8.6 (high)
            Solution:  Upgrade to >=0.16.2
            
            Crate:     openssl
            Version:   0.10.68
            Title:     ssl::select_next_proto use after free
            Date:      2025-02-02
            ID:        RUSTSEC-2025-0004
            URL:       https://rustsec.org/advisories/RUSTSEC-2025-0004
            Solution:  Upgrade to >=0.10.70
            
            Crate:     protobuf
            Version:   3.5.0
            Title:     Crash due to uncontrolled recursion in protobuf crate
            Date:      2024-12-12
            ID:        RUSTSEC-2024-0437
            URL:       https://rustsec.org/advisories/RUSTSEC-2024-0437
            Solution:  Upgrade to >=3.7.2
            
            Crate:     ring
            Version:   0.17.8
            Title:     Some AES functions may panic when overflow checking is enabled.
            Date:      2025-03-06
            ID:        RUSTSEC-2025-0009
            URL:       https://rustsec.org/advisories/RUSTSEC-2025-0009
            Solution:  Upgrade to >=0.17.12
            
            Crate:     time
            Version:   0.1.45
            Title:     Potential segfault in the time crate
            Date:      2020-11-18
            ID:        RUSTSEC-2020-0071
            URL:       https://rustsec.org/advisories/RUSTSEC-2020-0071
            Severity:  6.2 (medium)
            Solution:  Upgrade to >=0.2.23
            
            Crate:     atk
            Version:   0.18.0
            Warning:   unmaintained
            Title:     gtk-rs GTK3 bindings - no longer maintained
            Date:      2024-03-04
            ID:        RUSTSEC-2024-0413
            URL:       https://rustsec.org/advisories/RUSTSEC-2024-0413
            
            
            Crate:     atk-sys
            Version:   0.18.0
            Warning:   unmaintained
            Title:     gtk-rs GTK3 bindings - no longer maintained
            Date:      2024-03-04
            ID:        RUSTSEC-2024-0416
            URL:       https://rustsec.org/advisories/RUSTSEC-2024-0416
            
            

            I also checked rustscan and found similar issues.

            I’ve pruned the dependency tree and some other unmaintained-package warnings from the output, but some of these CVEs are bad. Stuff like this is why I don’t trust developers to make packages: they get lazy and sloppy at the cost of security. On the other hand, stable release distributions inflict security upgrades on everybody, which is good.

            Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee system updates break user installs, and the only way it couldn’t were if system installs knew about user installs and also updated those, which would defeat the whole purpose.

            ???. This is very incorrect. I don’t know where to start. If a package manager manages its own dependencies/libraries, like Nix portable installs, or is a static binary (e.g. Soar), then system installs will not interfere with the “user” package manager at all. You could also use something like launchd (Mac) or systemd user services (Linux) to update these packages with user-level privileges, in the user’s home directory.
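
The systemd user-service idea could look roughly like this; the unit paths and the update command are hypothetical placeholders:

```ini
# ~/.config/systemd/user/user-pkgs-update.service (hypothetical)
[Unit]
Description=Update packages installed in the user's home directory

[Service]
Type=oneshot
# placeholder for whatever user-level package manager is in use
ExecStart=%h/.local/bin/update-user-packages

# ~/.config/systemd/user/user-pkgs-update.timer (hypothetical)
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with systemctl --user enable --now user-pkgs-update.timer, it runs entirely with the user's privileges.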

            Also, I don’t know where you got the idea that flatpaks manage “system level” software.

            It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V.

            I dislike these because they commonly come with version pinning and vendored dependencies. But you should check out Soar and its repository. It also packages AppImages, and “flatimages”, which seem to be similar to Flatpaks but closer to AppImages in distribution.

      • Lucy :3 (2 points, 4 months ago)

        As an Arch user: yeah, PKGBUILDs are a very good solution, at least for Arch Linux specifically (or other distros with the same directory-tree best practices). I have implemented a dozen or so projects in PKGBUILDs, and use 150 or so from the AUR. It gives users a very easy way to install stuff essentially manually, yet with control. And you can just put it in the AUR, so other users can either just use it, or first read through, understand, maybe adapt, and then use it. It shows that packages need not be solely the author’s responsibility, nor solely the distro maintainers’.

    • John Richard (10 points, 4 months ago)

      And often official package maintainers are a lot more security conscious about how packages are built as well.