One extra feature could be passing the contents of the shell script to an LLM and asking it to surface any security concerns.
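For instance, a rough sketch using Simon Willison's `llm` CLI (assuming it's installed and configured with a model; the prompt is just illustrative):

    curl -fsSL https://example.com/install.sh \
      | llm -s "You are a security reviewer. Flag anything suspicious in this installer script."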
I'm not saying that running random installers from the internet is a great pattern. Something like installing from your distribution can have better verification mechanisms. But this seems to add very little confidence.
The goal is to prevent the installer from being maliciously modified to, for example, skip its own checksum verification or download a binary from a different, malicious URL.
It's one strong link in the chain, but you're right that it's not the whole chain.
I made a gist demonstrating a SQLite schema and using it via direct user input: https://gist.github.com/stephanGarland/5ee5281dedc3abcbc57fa...
By installing it through a well-audited, cryptographically-signed, and community-maintained package list with a solid security history. What?
The bug here isn't that "it's hard to make downloading scripts secure!", it's that people on Macs (and a few other communities, but really it's just OS X culture at fault here) insist on developing software with outrageous hackery like this and refuse to demand better from their platform.
Fix that. Don't pretend that linting (!!) shell scripts pulled off the open internet is going to do anything.
While there are surely exceptions, that nonsense about "just run this unauthenticated script URL" is something unique to the Mac experience. And it's horrifying.
Wait, so is it unique, or are there exceptions? You can't really have it both ways, right? The more I think about this, the more it seems like a silly argument with no real evidence supporting it, and I'm curious how you even arrived at it.
> Most non-Apple rust users get it via a Linux distro's package manager, or by building from source.
Really? That's not what the official Rust documentation says to do. It says to curl-bash-pipe: https://doc.rust-lang.org/cargo/getting-started/installation... So how do you know Linux users are not doing this?
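For reference, the command those docs give is along these lines (the flags vary slightly between pages):

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh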
This guy made a list (which is now four years old) of projects that do this: https://kubikpixel.github.io/pipeinstall/ Not a single one is Mac-only; they're all Linux or cross-platform. I'm sure it's woefully incomplete.
Here's another list: https://github.com/nightwatchcybersecurity/dont_curl_and_bas... I believe Homebrew is the only Mac-specific software on it; otherwise it's all Linux or cross-platform.
Yet another list posted to HN in 2016, nearly all Linux software, including some GNU projects: https://gnu.moe/wallofshame.md (Though there are some entries here that were already in the other two)
The more I think about it, the more bizarre and kind of funny it gets. There are so many real things you can hate on Apple (fans) for; why choose to make up stuff about their nefarious curl-bash practices?
The two biggest hurdles for a security tool like this are LLM non-determinism and the major privacy risk of sending code to a third-party API.
This is exactly why vet relies on ShellCheck—it's deterministic, rules-based, and runs completely offline. It will always give the same, trustworthy output for the same input.
But your vision of smarter analysis is absolutely the right direction to be thinking. I'm excited for a future where fast, local AI models can make that a reality for vet. Great food for thought!
Does it open a pager or an editor? How does it show the ShellCheck issues?
To answer your questions directly in the meantime:
- Pager or Editor? It opens a pager (less by default, but it will automatically use the much nicer bat for syntax highlighting if you have it installed). It doesn't open an editor, to prevent any accidental modifications.
- ShellCheck Issues: If shellcheck finds issues, it prints its standard, colorful output directly to your terminal before you review the script. It then pauses and asks you if you want to proceed with the review despite the warnings, like this:
    ==> Running ShellCheck analysis...

    In /tmp/tmp.XXXXXX line 7:
    echo "Processing file: $filename"
         ^-- SC2086: Double quote to prevent globbing and word splitting.

    ==> WARNING: ShellCheck found potential issues.
    [?] Continue with review despite issues? [y/N]
Thanks again for the excellent idea!
A malicious actor could definitely do that. That’s why vet’s model doesn’t rely solely on ShellCheck—it’s just one layer. The key layer here is the diff. Even if the linter is silenced, the diff reveals any new suspicious # shellcheck disable= lines added to trusted scripts. That change alone is a red flag.
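For example, a diff along these lines would jump out immediately (paths and contents purely illustrative):

    --- install.sh (last approved)
    +++ install.sh (fresh download)
    @@ -12,2 +12,4 @@
     check_dependencies
    +# shellcheck disable=SC2086
    +eval $payload
     main "$@"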
Most installers follow the same basic patterns: checking for dependencies, detecting the distro, and so on. It's not hard to learn these patterns and spot them in different scripts.
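For example, the opening of almost every installer looks something like this (purely illustrative):

    # dependency check
    command -v curl >/dev/null 2>&1 || { echo "error: curl is required" >&2; exit 1; }

    # distro detection
    [ -r /etc/os-release ] && . /etc/os-release
    case "${ID:-unknown}" in
      debian|ubuntu) pkg_mgr="apt" ;;
      fedora)        pkg_mgr="dnf" ;;
    esac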
For me personally, I try to use a distro- or platform-specific package if it exists, since that hopefully means at least one human has read through some of the code and probably installed it. If that's not available, I download the script and review it before executing it (rather than re-downloading it to pipe into a shell). I'm sure I wouldn't catch everything, but I'd probably catch odd embedded curl calls and the like.
As I already said years ago[1], if you want to hide some nefarious stuff, you'd do it in something like the autoconf soup. The install.sh is just too obvious a place. And this is exactly what happened in the real-world xz attack. I can guarantee you very few packagers, if any, are auditing all of that. And even if they did: it's just so easy to miss.
    # You're blindly trusting the remote script.
    curl -sSL https://example.com/install.sh | bash

and then:

    curl -sL https://getvet.sh | sh
> Yes, we see the irony! We encourage you to inspect our installer first. That's the whole point of vet. You can read the installer's source code install.sh
Thanks to your feedback, I've just merged a PR to change the recommended installation method in the documentation to the only truly safe one: a two-step "download, then execute the local file" process. This ensures the code a user inspects is the exact same code they run.
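Concretely, something like this (filename illustrative):

    curl -sSLo vet-install.sh https://getvet.sh
    less vet-install.sh      # read what you're about to run
    sh ./vet-install.sh      # run exactly what you read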
I sincerely appreciate you taking the time to share your expertise and hold the project to a higher standard. This is what makes a community great.
Running public scripts is great, but what about running deployment scripts from a private GitHub repo or setup scripts from an internal server?
Based on this, I've opened a new feature request to add authentication support to vet, with a roadmap that includes .netrc support, a VET_TOKEN environment variable, and a future goal of integrating with secret managers like HashiCorp Vault by reading tokens from stdin.
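To be clear, none of this exists yet; the proposed usage would look something like this (repo path hypothetical; the token here is borrowed from the GitHub CLI):

    VET_TOKEN=$(gh auth token) vet https://raw.githubusercontent.com/acme/internal/main/setup.sh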
If you're interested in that direction, I'd love to get your thoughts on the feature request over on GitHub:
https://github.com/vet-run/vet/issues/4
Thanks again for all the great feedback!
a10r•2d ago
The install process itself uses this philosophy - I encourage you to check the installer script before running it!
I'd love to hear your feedback.
The repo is at https://github.com/vet-run/vet
__MatrixMan__•2d ago
I'm a little uncertain about your threat model though. If you've got an SSL-tampering adversary that can serve you a malicious script when you expected the original, don't you think they'd also be sophisticated enough to instead cause the authentic script to subsequently download a malicious payload?
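A single innocuous-looking line is enough (illustrative):

    # reads fine in review, but fetches an unvetted second stage at run time
    curl -fsSL "https://example.com/postinstall-$(uname -m).sh" | sh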
I know that nobody wants to deal with the headaches associated with keeping track of cryptographic hashes for everything you receive over a network (nix is, among other things, a tool for doing this). But I'm afraid it's the only way to actually solve this problem:
1. get remote inputs, check against hashes that were committed to source control
2. make a sandbox that doesn't have internet access
3. do the compute in that sandbox (to ensure it doesn't phone home for a payload which you haven't verified the hash of)
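In shell terms, the pattern looks roughly like this (hash file and URL illustrative; `unshare` is Linux/util-linux and needs user namespaces enabled):

    # 1. fetch, then verify against a hash committed to source control
    curl -fsSLo install.sh https://example.com/install.sh
    sha256sum -c install.sh.sha256 || exit 1   # fails loudly if upstream changed

    # 2-3. execute in a sandbox with no network access
    unshare --map-root-user --net sh ./install.sh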
charcircuit•1d ago
Also, hashing inputs is brittle and will break any time the developer pushes an update. You want to trust their certificate instead.
__MatrixMan__•1d ago
Re: hashes, the whole point is that I want it to break anytime the developer pushes an update, that's my cue to review the update and decide once more whether I want it in my project. The lack of awareness re: what that curl is going to provide is the whole reason people think that `curl | bash` is insecure.
Otherwise there's no commit which indicates the moment we started depending on the new version--nothing to find if we're later driving `git bisect` to figure out when something went wrong. It could supply a malicious payload once, revert back to normal behavior, and you'd have no way to notice.
Also, you end up with developers who have different versions installed based on when they ran the command, there's no association with the codebase. That's a different kind of headache.
geocar•1d ago
Why? What exactly do you think "shellcheck" does? When do you think you're diffing and what do you think you are diffing with?
> and ask for my explicit OK before executing.
But to what end? You're not better informed by what the script does with this strategy.
A small shell script like yours I can read in a minute and decide it does nothing for me, but large installers can be hard to decipher, since they're balancing bandwidth costs against compatibility, and a lot of legitimate techniques can make them hard to follow without care and imagination.
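For instance, this sort of thing is completely legitimate, yet the final URL doesn't appear anywhere in the text you're reading (illustrative):

    # platform dispatch: the artifact URL is computed at run time
    os=$(uname -s | tr '[:upper:]' '[:lower:]')
    arch=$(uname -m)
    curl -fsSL "https://example.com/releases/latest/${os}-${arch}.tar.gz" | tar -xz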
> The install process itself uses this philosophy - I encourage you to check the installer script before running it!
I don't understand what philosophy you're talking about.
I think you're doing the exact same thing that malicious attackers do, you're just doing it worse:
I mean your script knows about wget, but your server doesn't. Sad. I also think you should be telling people to pull "https://github.com/vet-run/vet/blob/main/scripts/install.sh" instead of trying to be cute, but that's just me.

> I'd love to hear your feedback.
You're getting it: I think your program sucks, but I also like the idea of trying to do something, and I understand you just don't have any idea what to do or what the problem actually is.
So let me teach you a little bash:
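Something along these lines (a minimal sketch of the idea):

    # prompt before every command bash is about to execute
    trap '
      read -rp "run: $BASH_COMMAND? [y/N] " ok </dev/tty
      [[ $ok == [yY] ]] || exit 1
    ' DEBUG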
This little scriptlet will wait until bash tries to run something and ask before proceeding. Simples. Put this in front of an installer (or something else messy) and get step-by-step confirmation of what's going on. Something like this is in the bash manual someplace, or was once upon a time.

In a large script this might annoy people, so if it were me, I would have a whitelist of commands that I think are safe, or maybe a "remember" option that updates that list. I might also have a blacklist for things like sudo.
While I'm on the subject of sudo, a nasty trick bad guys use is get you to run sudo on something innocuous and then rely on the cached credentials to run a sneaky (silent) sudo in the same session. Running sudo -k before interacting with an unknown program can help tremendously with this.
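i.e., before running anything untrusted:

    sudo -k            # throw away any cached sudo credentials
    sh ./install.sh    # a sneaky sudo now has to visibly prompt you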
a10r•1d ago
First, let me address the bugs you found, because you were 100% right. The wget user-agent issue revealed a significant and regrettable flaw in the server-side logic. Thanks to your report, a fix has already been merged and deployed.
The installer also had a conceptual flaw in its security recommendation, as you and others pointed out. The documentation has been updated to recommend a two-step "download, then execute" process and now includes a direct link to the GitHub release asset for maximum transparency—no more "cute" domain magic as the primary method.
Your trap DEBUG suggestion is a really powerful technique, and it highlights a core philosophical difference in how to approach this problem:
Your approach is an "In-Flight Monitor"—it steps through an executing script and asks for permission at each step. It's fantastic for deep, real-time analysis.
vet's approach is a "Pre-Flight Check"—its goal is to let a human review and approve a complete, static snapshot of a script before a single line of it ever executes.
I chose the "pre-flight" path because diffing and shellcheck are central to the idea. They answer the questions: "I trusted this script last month, but has it changed at all since then?" and "Does this static code contain any obvious red flags?"
The trap DEBUG method is powerful, but it can't answer that "what's changed?" question upfront and runs the risk of "prompt fatigue" on large installers, where a user might just start hitting 'y' to get through it.
You've given me a lot to think about, especially on how to better articulate this philosophy. I sincerely appreciate you taking the time to teach and challenge the project. This is the kind of tough, expert feedback that makes open source better, and you've already had a direct, positive impact on it.
BaudouinVH•1d ago
My 2 cents
subjectsigma•1d ago
a10r•1d ago
Its role in vet isn't to find malware, but to act as an automated code quality check. A script full of shellcheck warnings is a red flag, which helps inform the user's final decision to trust it or not. It's one of several signals that vet provides.
Thanks for the important clarification!