Ref: https://www.rust-lang.org/tools/install
>Using rustup (Recommended)
>It looks like you’re running macOS, Linux, or another Unix-like OS. To download Rustup and install Rust, run the following in your terminal, then follow the on-screen instructions. See "Other Installation Methods" if you are on Windows.
>curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Are you saying that if you avoid the curl bash pattern then you can avoid the curl bash pattern? This is true, and trivial, and completely irrelevant to what the rust website recommends and what most people do.
There's definitely been a misunderstanding. The misunderstanding is that you think people are installing rustup from their repos. The website shows you this is not the most common case.
I do get your point that it doesn't have to be this way anymore. That rustup itself could be in repos and still work (even if rustc etc. can't). But this is not how it has been done for rust's entire existence, and change is slow and hard. Is there a single distro that does do this now?
So surely you acknowledge that rustup not being in any given distro's repo isn't something that the Rust developers have control over? How do you expect the Rust devs to distribute the compiler? If you want to build from source, that's extremely easy. For people who want convenient binaries, Rust also offers binaries via the most convenient means available, which is curl-to-bash. This isn't a security flaw any more than running the compiler itself is.
The Rust docs should really offer installation methods other than curl | sh. Not from a security standpoint (I think that's nonsense) but I just don't like polluting my system with random stuff that is not managed by a package manager.
Edit: Yes, there is an "other installation methods" link, but the text makes it sound like it is only applicable for Windows.
Um, has there been some instance where rustup broke a desktop? And I'm assuming Debian has actually delivered on this worst case scenario?
Of course, it's Debian; stable is alllll the way back on 1.63, state of the art in 2022.
I meant I bet Debian has broken desktops with a simple `apt update`. Whereas show me where rustup has broken a desktop?
The lack is a consequence of the type of language the rust developers chose to make: one that is constantly, rapidly (over just a few months) changing itself in forwards-incompatible ways. Other languages don't really have this problem. Even in C++ you only get breaking changes every 3-4 years, which can be handled by repos. But a 3-month-old rustc in $distro repos is already fairly useless. Not because rust is a bad language, but because the types of people who write in rust are all bleeding-edge early adopters and always use $latest when writing. In another decade or so, when the rust developer demographics even out a bit, it will probably be okay.
No. The misunderstanding is that you decided that I was talking about how people choose to install rustup, while I didn't even mention it. My reply was entirely about how the entire rust toolchain doesn't have to be in the distro repo. Here's the part in the original comment that I was referring to as a misunderstanding:
> to install the rust development toolchain (because it changes too rapidly to effectively be in any repos).
> This is true, and trivial, and completely irrelevant to what the rust website recommends and what most people do.
Irrelevant to you perhaps. But it's a relevant detail if you're an individual user/developer who cares about security. It's easy to entirely skip the curl bash pattern for rust if you care enough.
The question of why the website recommends it is moot, because they wrote the script and vetted it among themselves. They have no reason to mistrust it. Meanwhile, the security culture of the user is not really their concern. It's not unreasonable for them to expect you to read a bash script you downloaded from the net before you execute it. I did, and that's how I realized that there are alternatives.
If you think that it's unreasonable, look at how many projects, including programming languages recommend the same. The prevailing sentiment among the devs is clear - "Here's a script to do it easily. We haven't put anything harmful in it. But we assume that that's not enough guarantee for you. So just check the script first. It's just non obfuscated bash". I almost always find ways to avoid the curl bash step whenever a project recommends it.
> Is there a single distro that does do this now?
Enjoy the following articles in the Arch Linux, Gentoo and Debian wikis discussing this exact topic. Not only do they have rustup packaged in their repos, rustup even has build configurations to make it behave nicely with the rest of the system in such a scenario (like following the FHS and disabling self-updates).
[1] https://wiki.archlinux.org/title/Rust#Arch_Linux_package
The Parent poster is arguing that the "recommended" way documented on the Rust website to install rustup is using curl bash, and you're saying "it's possible to do things manually".
How is that helpful to the vast majority of the people on Mac/Linux trying to install Rust from scratch and reading the instructions on the website?
This part:
> ... to install the rust development toolchain (because it changes too rapidly to effectively be in any repos)
The Rust toolchain is installed using rustup, not curl bash. It's rustup itself that's installed using curl bash. And while the site does recommend it, installing rustup alone securely is far easier than installing the entire toolchain.
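For the "rustup alone, securely" route, rustup's "Other installation methods" page documents standalone rustup-init binaries on static.rust-lang.org, each with a published checksum. A minimal sketch of that flow — the curl lines are shown as comments and simulated with a local stand-in file, and the exact `.sha256` layout (which can embed a build path) is an assumption worth checking against the real download:

```shell
# Pattern: fetch rustup-init and its published checksum, verify, then run.
# Real downloads (adjust the target triple for your platform):
#   base=https://static.rust-lang.org/rustup/dist/x86_64-unknown-linux-gnu
#   curl -fsSLO "$base/rustup-init"
#   curl -fsSLO "$base/rustup-init.sha256"
# Simulated with a local stand-in here so the verification step is runnable:
cd "$(mktemp -d)"
printf '#!/bin/sh\necho rustup-init stand-in\n' > rustup-init
sha256sum rustup-init > rustup-init.sha256

# The published .sha256 can embed a build path, so compare the hash field
# explicitly instead of trusting the embedded filename:
expected=$(cut -d' ' -f1 rustup-init.sha256)
echo "$expected  rustup-init" | sha256sum -c - || exit 1
chmod +x rustup-init
./rustup-init          # only reached after the checksum matched
```

Unlike the piped script, this gives you a file you can checksum, inspect, and keep.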
> How is that helpful to the vast majority of the people on Mac/Linux trying to install Rust from scratch and reading the instructions on the website?
If you're concerned about running a remote script, just check how much work the script actually does. If it's not much, it may be worth exploring the alternative ways for it. For example, the rustup package in Arch Linux [1] does the same thing as what you get from curl bash.
I have mise installed - another package which recommends installation using curl bash. But I don't use it, because it's really easy to install it manually. And when some other tool recommends curl bash, I check if it's supported by mise. As it turns out, rustup can be installed using mise [2].
[1] https://wiki.archlinux.org/title/Rust#Arch_Linux_package
No, your security model is flawed. curl-to-bash is equivalent to running arbitrary code on your device. If the Rust developers wanted to root you, they could easily just put the backdoor into the compiler binary that you are asking to receive from them.
Rust provides a uniform way to install on any Unix, you say? Compared to the polyglot boarding house that is Linux package management?
> because it changes too rapidly to effectively be in any repos
rustup is also installable via your package manager, but, if it isn't, that's kinda your own distro's problem. The problem is that Linux is non-uniform. The problem is not Rust for providing a uniform, lowest common denominator, method for Unix. Notice Windows doesn't have the same problem.
See: https://rust-lang.github.io/rustup/installation/other.html
> It's crazy that a language that prioritizes security so highly in it's design itself is only compiled through such insecure methods.
Compiled?
Please explain the material security differences between the use of rustup.rs method vs. using a package manager.
I'll wait.
[1] https://web.archive.org/web/20250622061208/http://idontplayd...
you could also use the script to fingerprint and beacon home, to check if the target is worth it and what you might want to inject into said binary, if that's your pick.
still, i think i agree: if you're gonna trust a binary from that server or a script, it's potato potato...
check what you run before you run it, with whatever tools or skills you've got, and hope for the best.
if you go deepest into this rabbithole, you can't trust your hard disk or network card etc., so at some point it's just impossible to do anything. microcode patches, malicious firmware, whatever.
for pragmatic reasons a line needs to be drawn. if you're paranoid, good luck, and don't learn too much about cybersecurity, or you will need to build your own computer :p
It's zero. Zero people. Nobody is competent enough to download and review a bash script and yet fail to recognise this obvious scam.
They probably threw the pipe detection in just because they could (and because it's talked about so frequently).
I don't see it in arch's aur though. That would be my preferred install method. Maybe I'd take a look at it later if it's really not available there.
I get containers aren't perfect isolation, but...
It is Linux-only, though.
I'm not sure why you think pointing out the risk of regular unaudited unverifiable downloads is reductive as you haven't provided any supporting arguments, only snark. You seem like a cunt.
> throwing away the code that is run seems uniquely silly.
Neither traditional downloads nor curl | bash scripts are commonly stored long-term for analysis.
I at least skim all the scripts I download this way before I run them. There's just all kinds of reasons to, ranging all the way from the "is this malicious" to "does this have options they're not telling me about that I want to use".
A particular example is that I really want to know if you're setting up something that integrates with my distro's package manager or just yolo'ing it somewhere into my user's file system, and if so, where.
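That kind of skim can be partly mechanized. A rough triage sketch — `install.sh` here is a made-up stand-in for a real download (e.g. `curl -fsSL https://example.com/install.sh -o install.sh`), and the grep pattern is just a starting point, not a complete audit:

```shell
# Quick triage of a downloaded installer before running it: where does it
# write, and does it escalate?
cd "$(mktemp -d)"
cat > install.sh <<'EOF'
#!/bin/sh
sudo cp mytool /usr/bin/mytool
mkdir -p "$HOME/.mytool"
echo 'export PATH=$HOME/.mytool/bin:$PATH' >> "$HOME/.bashrc"
EOF

# Flag privilege escalation, writes that bypass the package manager,
# and PATH / rc-file edits:
grep -nE 'sudo|/usr/(local/)?bin|\$HOME|\.bashrc|\.profile|PATH=' install.sh
```

A hit on `sudo` or `/usr/bin` tells you it's yolo'ing past the package manager; hits on rc files tell you where it's hooking into your shell.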
(Nix isn’t the solution for OP’s problems though – Nix packages are unsigned, so it’s basically backdoor-as-a-service.)
It created users and groups on my system! And the uninstall script didn't clean it up.
Lovely sentiment, not applicable when you actually work on something. You read your compiler/linker, your OS, and all libraries you use? Your windowing system? Your web browser? The myriad utilities you need to get your stuff done? And of course, you've read "Reflections on trusting trust" and disassembled the full output of whatever you compile?
The answer is "you haven't", because most of those are too complex for a single person to actually read and fully comprehend.
So the question becomes, how do you extend trust. What makes a shell script untrustworthy, but the executable you or the script install trustworthy?
This is the "Unofficial Way".
These scripts are often written by people who only know one OS well (if any), and if that OS is macOS, and you're on Linux (or FreeBSD, or whatever), you can expect them to do weird shit like sticking binaries into /usr/bin in circumvention of the package manager, or adding their own package repositories without asking you (and often not whitelisting just their packages, which allows them to e.g. replace glibc on your system without you noticing), etc.
It's not comparable to simply using the already installed software.
Supply-chain attacks. Linux distros have a long history of being more hardened targets than "a static file on some much, much, much smaller project's random server".
Also things like linux packages or snaps or flatpaks are generally somewhat ringfenced by their nature. Here I don't mean for security reasons per se, but just by their nature, I have confidence a flatpak isn't going to start scribbling all over my user directory. A script may make any number of assumptions about what it is OK to do, where things can go, where to put them, what it can install, etc.
"Trust" isn't just about whether something is going to steal my cryptowallet or install a keylogger. It's about whether it breaks my reproducible ops setup, or will stick configuration in the seventeenth place in my system, or assumes other false things about how I want it set up that may cause other problems.
I'll take a script that passes `shellcheck ./script.sh` (or, any other static analysis) first. I don't like fixing other people's bugs in their installation scripts.
After that, it's an extra cherry on top to have everything configurable. Things that aren't configurable go into a container and I can configure as needed from there.
Given the jungle that is the Linux ecosystem, that bash script is doing an awful lot of compatibility verification and alternatives selection to stand up the tool on your machine. And if what you mean is "I'd rather they hand me the binary blob and I just hook it up based on a manifest they also provided..." Most people do not want to do that level of configuration, not when there are two OS ecosystems out there that Just Work. They understandably want their Linux distro to Just Work too.
(1) feasible traditionally. Projects like snap and flatpak take a page from the success Docker has had and bundle the executable with its dependencies so it no longer has to worry about what special snowflake your "home" distro is, it's carrying all the audio / system / whatever dependencies it relies upon with it. Mostly. And at the cost of having all these redundant tech stacks resident on disk and in memory and only consolidateable if two packages are children of the same parent image.
I call it the worst because it doesn't support installing specific versions of libraries, doesn't support downgrading, etc. It's basically hostile and forces you to constantly upgrade everything, which invariably leads to breaking a dependency and wasting time fixing that.
These days I mostly use devbox / nix at the global level and mise (asdf compatible) at the project level.
I don't like homebrew because I've been burnt multiple times because it often auto-updates when you least want it to and breaks project dependencies.
And there's no way to downgrade to a specific version. Real package managers typically support versioning.
There may be good or bad reasons why Homebrew can't use the standard /Applications pattern, but did they have to go with "curl | bash"?
That's one of many options, documented at the first text link of the home page. https://docs.brew.sh/Installation
FYI, mas is the equivalent of a package manager for macOS apps (a.k.a. a CLI for App Store). https://github.com/mas-cli/mas
Other than brew, I use mise for everything I can. https://mise.jdx.dev/
For command line apps, the equivalent would probably be statically-compiled binaries you can just drop somewhere in your PATH, e.g. /usr/local/bin/. For programs that are actually built this way (which I would personally call "the correct way") this works great!
If nothing else, consider that the limitations of a statically linked binary match those of a traditional Mac application bundle. While Mac apps are usually dynamically linked, they also include all of their dependencies within the app bundle. I suppose you could argue it's technically possible to open an app bundle and replace one of the dylibs, but this is clearly not an intended use case; if nothing else, you're going to break the code signature.
If the app is not actively maintained, unless trivial, it likely has unpatched vulnerabilities of its own anyway.
And on macOS, if the app is not actively maintained, it usually breaks after a couple major releases regardless of anything else, because Apple doesn't believe in backwards compatibility.
This frustrates me to no end on macOS. Not only do you see crappy installers like you said, but a ton of applications now aren't even self contained in ~/Applications like they should be.
Apps routinely shit all over ~/Library when they don't need to, and don't clean up after themselves, so while deleting the bundle technically 'uninstalls' it, you still have stuff left over, and it can eat up disk space fast. Same crap that Windows installers do, where they'll gladly spread the app all over your file system and registry but the uninstaller doesn't actually keep track of what went where, so it'll routinely miss stuff. At least Windows has a built-in disk cleanup tool that can recognize some of this for you; macOS will just happily let apps abuse your file system until you have to go digging.
Package managers on Linux solved this problem many, many years ago and yet we've all collectively decided to just jump on the curl | bash train and toss the solution to the curb because...reasons?
I wish more applications were distributed by the Mac App Store, because I believe App Store distributed apps are more strongly sandboxed and may not allow developers to abuse your system like this.
As I mentioned somewhere side-thread: Debian Unstable is only three minor versions behind the version of Rust that the Rust team is publishing as their public release, but Debian Stable is three years old. For some projects, that's dinosaur speed. If I want to run Debian Stable for everything except Rust, I'm curl-bashing it.
> they're not worrying about what audio subsystem you installed
Some software solves this by autodetecting an appropriate backend, but also, if you use alsa, modern audio systems will intercept that automatically.
> what side of the systemd turf war your distro landed on
Most software shouldn't need to care, but to the extent it does, these days there's systemd and there's "too idiosyncratic to support and unlikely to be a customer". Every major distro picked the former.
> or which of three (four? five?) popular desktop environments you installed
Again, most software shouldn't care. And `curl|bash` doesn't make this any easier.
> or whether your `/dev` directory is fully-populated
You can generally assume the devices you need exist, unless you're loading custom modules, in which case it's the job of your modules to provide the requisite metadata so that this works automatically.
How many times will a novice user follow that pattern until some jerk on discord drops a curl|bash and gets hits?
IRC used to be a battlefield for these kinds of tricks, and now we have legit projects like homebrew training users that it's normal to raw-dog arbitrary code directly into your environment.
"Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.
But yeah, the problem around curl | bash isn't the delivery method itself, it's the unsafe user behavior that generally comes along with it. It's the *nix equivalent of downloading an untrusted .exe from the net and running it, and there's no technical solution for educating users to be safe.
Safer behavior IMO would be to continue to encourage the use of immutable distros (Fedora Silverblue and others). RO /, user apps (mostly) sandboxed, and if you do need to run anything untrusted, it happens inside a distrobox container.
Almost every one contains preinst or postinst scripts that are run as root, and yet I can count on zero hands the number of times I've opened one up first to see what it was actually doing.
At least a curlbash that doesn't prompt me for my password is running as an unprivileged user! /shrug
Flatpak is a nice suggestion but unfortunately it doesn't seem to work nicely for CLIs.
> "Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.
Isn't that the same thing with the extra step of downloading a git repo?
Keep in mind that it's possible to detect when someone is doing curl | bash, and only send the malicious code when curl is being piped, to make it very hard to detect.
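The property that makes this detection possible is easy to demonstrate locally: bash executes a piped script as the bytes arrive rather than waiting for EOF, so a server can stall mid-transfer and time how quickly the client drains the connection (an executing bash blocks on each command; a plain download to disk reads as fast as the network allows). A minimal sketch:

```shell
# bash runs a piped script line-by-line as it arrives; it never waits for EOF.
{
  echo 'echo "first command already ran"'
  sleep 1                  # stands in for a server stalling mid-transfer
  echo 'echo "second command arrived a second later"'
} | bash
```

The first line prints immediately, a full second before the "download" has even finished — which is exactly the timing signal a malicious server can measure.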
and then inspect foo.sh and then (maybe) cat foo.sh | bash
Does that avoid the issue?
The OS usually has guardrails and logging and audits for what is installed but this bypasses it all.
When you look at this from an attackers perspective, it’s heaven.
My mom recently got fooled by a scammer that convinced her to install remote access software. This curl pattern is the exact same vector, and it’s nuts to see it become commonplace
But I bet she didn't install it with curl piped to bash. The point isn't that curl|bash is safe, but that it isn't inherently more dangerous than downloading and running a program.
No, I'm asking what is a safer method when I want to install some code from the internet.
> The OS usually has guardrails and logging and audits for what is installed but this bypasses it all.
Not everything is packaged or up-to-date in the OS
> My mom recently got fooled by a scammer that convinced her to install remote access software.
Remote access software is packaged in distros too.
Then why don't Linux distributions encourage safe behaviour? Why do you still need sudo permissions to install anything on most Linux systems?
> How many times will a novice user follow that pattern until some jerk on discord
I'm not a novice user and I will use this pattern because it's frankly easier and faster, especially when the current distro doesn't have some combination of things installed, or doesn't have certain packages, or...
You don't with Flatpak or rootless containers, that's partially why they're being pushed so much.
They don't rely on setuid for it either
Or download & compile & install to a PREFIX (e.g. ~/.local/pkg/), and use a symlink-manager to install to e.g. ~/local (and set MANPATH accordingly, too). Make sure PATH contains ~/.local/bin, etc. It does not work with Electron apps though. I do `alias foo="cd ... && ./foo"`.
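A minimal sketch of that per-prefix flow, with plain `ln -s` standing in for the symlink manager (GNU Stow automates the same step). `foo` is a made-up package, the build step is shown as a comment, and the scratch HOME is only there to keep the demo self-contained:

```shell
# Per-prefix install plus symlinks into ~/.local/bin (assumed to be on PATH).
# A real build would run:  ./configure --prefix="$PREFIX" && make && make install
HOME=$(mktemp -d)            # scratch HOME for the demo; drop for real use
PREFIX="$HOME/.local/pkg/foo-1.2"
mkdir -p "$PREFIX/bin" "$HOME/.local/bin"
printf '#!/bin/sh\necho foo 1.2\n' > "$PREFIX/bin/foo"   # stand-in for make install
chmod +x "$PREFIX/bin/foo"

# The "symlink manager" step; uninstalling is just removing the links
# and the prefix directory.
for f in "$PREFIX"/bin/*; do
  ln -sf "$f" "$HOME/.local/bin/$(basename "$f")"
done
"$HOME/.local/bin/foo"
```

The nice part is that every package lives in its own directory, so nothing can clobber anything else and removal is a plain `rm -r`.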
Not guix :)
One of the coolest things about it.
When I see a package from a repo, I have some level of trust. Same with a single binary from GitHub.
When I see a curl|bash I open it up and look at it. Who knows what the heck it's doing. It does not save me any time, and in fact it's a huge waste of time to wade through random shell scripts that follow a dozen different conventions, because shell is ugly.
Yes you could argue an OS package runs scripts too that are even harder to audit but those are versioned and signed and repos have maintainers and all kinds of things that some random http GET will never support.
You don’t care? Cool. Doesn’t mean it’s good or safe or even convenient for me.
It's worse than that. If your distro doesn't have some package, you're encouraged to just add PPA repos and blindly trust those.
Quite a few companies run their own repos as well, and adding their packages is again `sudo add repo; sudo install`
Yes, it's not as egregious as just `curl | bash`, but it's not as far removed from it as you think.
There are, and have been, distros that install per user, but at some level something needs to manage the hardware and the interfaces to it.
Am I? How am I affecting other users by installing something for myself?
Even Windows has had "Install just for this user or all users?" for decades
Operating systems already have standard ways of distributing software to end users. Use it! Sure maybe it takes you a little extra time to do a one off task of adding the ability to build Debian packages, RPM, etc. but at least your software will coexist nicely with everything else. Or if your software is such a prima-donna that it needs its own OS image, package it in a Docker container. But really, just stop trying to reinvent the wheel (literally).
curl https://example.com/install.sh|yy030
or curl https://example.com/install.sh > install.sh
yy030 < install.sh
Another filter, "yy073", turns a list of URLs into a simple web page. For example, curl https://example.com/install.sh|yy030|yy073 > 1.htm
I can then open 1.htm in an HTML reader and select any file for download or processing by any program according to any file associations I choose, somewhat like "urlview". I do not use "fzf" or anything like that. yy030 and yy073 are small static binaries under 50k that compile in about 1 second.
I also have a tiny script that downloads a URL received on stdin. For example, to download the third URL from install.sh to 1.tgz
yy030 < install.sh|sed -n 3p|ftp0 1.tgz
"ftp" means the client is tnftp. "0" means stdin.
curl -sSL https://example.com/install.sh | vipe | sh
This will open the output of the curl command in your editor and let you review and modify it before passing it on to the shell.
If it seems shady, clear the text. vet looks safer. (Edit: It has the diff feature and defaults to not running the script. However, it also doesn't display a new script for review by default.) The advantage of vipe is that you probably have moreutils available in your system's package repositories or already installed.
Why not just use the tools separately instead of bringing in a third tool for this?
curl -o script.sh
cat script.sh
bash script.sh
What a concept
This assumes that securing `curl | sh` separately from the binaries and packages the script downloads makes sense. I think it does. Theoretically, someone can compromise your site http://example.com with the installation script https://example.com/install.sh but not your binary downloads on GitHub. Reviewing the script lets the user notice that, for example, the download is not coming from the project's GitHub organization.
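One concrete version of that review step is to list every URL the script fetches and check that each points at the project's own infrastructure. A sketch — `install.sh` and both URLs are made-up stand-ins, and the regex is deliberately crude:

```shell
# List every URL an installer fetches, so an off-project download stands out.
cd "$(mktemp -d)"
cat > install.sh <<'EOF'
#!/bin/sh
curl -fsSL https://github.com/example-org/tool/releases/download/v1.0/tool.tar.gz -o tool.tar.gz
curl -fsSL https://cdn.totally-unrelated.example/extra.bin -o extra.bin
EOF

grep -oE "https?://[^\"' ]+" install.sh | sort -u
```

The second URL, pointing somewhere other than the project's GitHub organization, is exactly the kind of thing this check surfaces in seconds.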
Vet, vipe, etc. are kind of like kitchen single-taskers, like avocado slicer-scoopers. Surely some people get great value out of them, but a table-knife works just fine for me and is useful in many task flows.
I'd get more value out of a cross-platform copy-paster so I'm not skip-stepping in my mind between pbpaste and xclip.
https://linux.die.net/man/1/xclip
if ! which pbcopy &> /dev/null; then
alias pbcopy="xclip -selection clipboard"
alias pbpaste="xclip -o -selection clipboard"
fi
The `if` bit is so it only adds the alias if there isn't a `pbcopy`, so I can use the same dotfile on mac and linux.
Let's consider rust: https://www.rust-lang.org/tools/install
Specifically, consider these two files:
A. a shell script, written by the Rust core developers, hosted on the Rust official website behind TLS
B. a compiler binary, written by the Rust core developers, hosted on the Rust official website behind TLS
Why is everyone so afraid of A, but not afraid of B?
Like, yeah I get it - it's frustrating when xyz software you want isn't in the repos, but (assuming it's open source) you're also welcome to package it up for your distro yourself. We already learned lessons from Windows where installers and "uninstallers" don't respect you or your filesystem. Package managers solved this problem many, many years ago.
meanwhile, everyone everywhere is npm installing and docker running without second thoughts.
How well did that work out?
Classic old school antivirus? Not great, but did catch some things.
Modern EDR systems? They work extremely well when properly set up and configured across a fleet of devices as it's looking for behavior and patterns instead of just going off of known malware signatures.
... because it's the only thing that somewhat works. From my personal experience, the heuristic and "AI-based" approaches lead to so many false positives, it's not even worth pursuing them.
The best AV remains and will always be common sense.
They’re also not exactly risk free - [0]
[0] https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_...
Well... sometimes like, say, yesterday [1], there's a second thought...
[1] https://www.bleepingcomputer.com/news/security/npm-package-is-with-28m-weekly-downloads-infected-devs-with-malware/
Windows Store installs (about 75% of installs) are sandboxed and no longer need escalation.
The remaining privileged installs that prompt with UAC modal are guarded by MS Defender for malicious patterns.
Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo can access almost all memory, raw devices, and anywhere on disk.
They didn't say anything about sudo, so assuming global filesystem/memory/device/etc access is not really a fair comparison. Many installers that come as bash scripts don't require root. There are definitely times I examine installer scripts before running them, and sudo is a pretty big determining factor in how much examination an installer will get from me (other factors include the reputation of the project, past personal experience with it, whether I'm running it in a vm or container already, how I feel on the day, etc).
A bash script is only guarded by file system permissions. All the sensitive content in the home directory is vulnerable. And an embedded sudo would mostly succeed.
I have always used Debian / Ubuntu because I started with my server using them and have wanted to keep the same tech stack between server and desktop - and they have a large repository ecosystem. But the fragmentation and often times byzantine layout is really starting to grind my gears. I sometimes find a couple overlapping packages installed, and it requires research and testing to figure out which one has priority (mainly networking...).
Certainly, there is no perfect answer. I am just trying to discover which ones come close.
1. It symlinks the redundant bin and lib directories to /usr/bin and /usr/lib, and its packages don't install anything to /usr/local.
2. You can keep most config files in /etc or $XDG_CONFIG_HOME. Occasionally software doesn't follow the standards, but that's hardly the distro's fault.
3. Arch is bleeding edge
4. Arch repos are pretty big, plus there's the AUR, plus packaging software yourself from source or binaries is practically trivial.
5. Security is not the highest priority over usability. You can configure SELinux and the like, but they're not on by default. See https://wiki.archlinux.org/title/Security.
6. There are few defaults to adhere to on Arch. Users are expected to customize.
Configuration of system-wide things is done in the nix language in one place.
It also has the most packages of any distro.
And I found packaging to be more approachable than other distros, so if something isn't packaged you can do it properly rather than just curl|bash-ing.
https://github.com/jwilk/unfaithful-less
It seems at least some versions of bat have the same problem, but I didn't look into the details.
Happy to hear other people's thoughts!
Switched to nix + home-manager as a package manager to replace the de facto package managers on some operating systems (e.g., on darwin that's macports or homebrew).
In cases where the package isn’t available in nixpkgs, can create my own derivation and build it from source.
If I am super paranoid, spin up sandboxed vm with minimal nixos. Use nixos-anywhere to setup vm. Install/build suspicious package. Then do reconnaissance. Nuke the vm after I am done.
Nix, like any other software, isn’t foolproof. Something is likely to get through. In that case, identify the malicious package in the nix store. Update your flake to remove/patch the software that introduced it. Then nuke the system completely.
Then rebuild system using your declarative system configuration without malicious software.
Is nix for everyone? God no, there’s a massive learning curve. But I will say that once you get past this learning curve, you will never need to install anything with this pattern.
I think getting an (optional?) AI heads-up before reviewing it myself would be great for cURL shell scripts as well. I'm prone to not seeing dark patterns in editor, and tools like vet could as well be tricked into not seeing the dark pattern, malicious intent, or just hazardous code lurking.
packages are installed as root/admin with elevated privilege
packages are run as ordinary lusers
this is why curl|bash is a more dangerous thing to do.
traditionally, the people with the root password were experienced and trained for this type of analysis, but with personal machines this line of defense does not exist
yes, there are scripts also built into package installers. now you can understand why there shouldn't be, or at least the post-install script can be inspected (this is a major benefit of scripts)
all the noise you want to make about how different distros make the problem harder is part of the problem if your solution is to capitulate to practices which are unsafe-by-design
Compare this to Windows where there is a standard API, standard set of pre-installed fonts and libraries, you download an exe and it "just works". However Windows has no sandbox and is not secure at all if you install any third-party apps. There are antiviruses, but they cannot guarantee detection of malicious code, and do not even try to block less malicious code like tracking and telemetry.
One might notice that there are Snap/Flatpak, however the applications there mostly do not work properly and have a lot of bugs. Also they do not sandbox properly; for example, they do not prevent reading serial numbers of equipment.
But the conventional wisdom is that it's better not to shoot up at all.
Everyone here is sort of caught up in this weird middle ground, where you're expecting an environment that is both safe and experimental -- but the two dominant OSes do EVERYTHING THEY CAN to kill the latter, which, funny enough, can also make the former worse.
Do not forget, for years you have been in a world in which Apple and Microsoft do not want you to have any real power.
----
0.001% chance. But ok.
For CMake there is a similar tool, I believe: CPack.
Is there any such tool for shell script installers?
$ curl 'https://who.knows.man/installer.sh' >/tmp/install
$ vim /tmp/install
[... time passes ...]
$ sh /tmp/install
In reality, system packaging and configuration management tend to be the preferred way out at scale, rather than creating system entropy ("here, run this script").
Btw, there is a tool on debian I abuse to replace system dependencies and package things (in lieu of checkinstall) called equivs. And, to find changes, I use cruft-ng which depends upon plocate.
And then, some equivalent for actually running whatever was installed. This would need to introspect what the installation script did and expose new binaries, which of course run inside the sandbox when invoked.
To move past the "| bash" lazy default, people need an easy to remember command. The complexity of the UI of these tools hinders adoption.
Suffice to say, it's best to avoid any of this, and do it using the package manager, or manually. I only run scripts like this on systems that I otherwise don't care about, or in throwaway containers.
1. Have backups. You are running software all the time that can corrupt your files, either maliciously or, more likely, accidentally. It doesn't really matter where it comes from.
2. Get into the habit of running things in sandboxes. You don't need anything magical here, a separate (unprivileged) user account is a good enough sandbox for many things. I outline an approach for installing Calibre like this on my blog[0] (the official site uses the `curl ... | sudo sh` pattern!)
You could do more clever things like using bwrap[1] to isolate things, or use a distro designed for this kind of thing. Be aware if using a separate user account that your home directory might still be readable so if you're worried about privacy check that, or use bwrap so it's not exposed at all.
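A cheap, non-enforced cousin of that idea is to hand the installer a throwaway HOME and inspect it afterwards. This is not a security boundary (a hostile script can still read the rest of the filesystem); it just surfaces what the script would have done to your dotfiles. The `install.sh` contents below are a made-up stand-in, and the bwrap mention is the enforced version of the same trick:

```shell
# Run an installer with a scratch HOME, then see everything it wrote.
# (bwrap with --tmpfs "$HOME" enforces this; here it is only observational.)
cd "$(mktemp -d)"
cat > install.sh <<'EOF'
#!/bin/sh
mkdir -p "$HOME/.mytool"
echo 'token=abc' > "$HOME/.mytool/config"
EOF

scratch=$(mktemp -d)
HOME="$scratch" sh install.sh
find "$scratch" -type f      # everything the installer wrote
# rm -rf "$scratch"          # clean up once reviewed
```

If the listing shows files you didn't expect, that's your answer on whether the script yolo's into your user's file system.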
A good read