[1] https://github.com/fish-shell/fish-shell/releases/tag/4.0.0
https://github.com/fish-shell/fish-shell/tree/c2eaef7273c555...
vs the C++
https://github.com/fish-shell/fish-shell/tree/d9d3557fcfbce1...
Initial motivation: https://github.com/fish-shell/fish-shell/pull/9512#issuecomm...
I switched to the beta on the day it was released and haven't had one single issue with it. That was the smoothest rewrite I've ever seen.
The latest standards for POSIX.2 utilities are here:
https://pubs.opengroup.org/onlinepubs/9799919799/utilities/
I do agree with you that UNIX userland would be miles ahead of where we are now if the POSIX.2 standard could be cajoled out of the '80s.
My preference is PowerShell. It's now open source [1], it has a wide install base, and it is cross-platform. It is a bit heavy and slow to start (startup actually takes seconds), but the cleanness of its record-based nature versus just string parsing is infinitely refreshing.
On my old i386 server, this is my fastest shell:
$ ll /bin/dash
-rwxr-xr-x 1 root root 85368 Jan 5 2023 /bin/dash
The set of features in the POSIX.2 shell is designed to minimize resource usage. This is simply a place that PowerShell cannot go.
This does not mean that resource-constrained environments do not exist.
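For a rough sense of that gap, startup time alone is telling. A quick sketch anyone can reproduce (absolute numbers vary by machine, and pwsh times depend heavily on profile and JIT warm-up):
$ time dash -c 'true'                    # typically a millisecond or two
$ time bash -c 'true'                    # typically a few milliseconds
$ time pwsh -NoProfile -Command 'exit'   # typically hundreds of milliseconds or more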
Anyway, uh-huh:
https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github... -> https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github...
https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github... -> https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github...
and, relevant to your comment, even they opt out of telemetry: https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github...
---
As a frame of reference, to build bash one only needs /bin/sh, not a pre-built copy of bash itself: https://git.savannah.gnu.org/cgit/bash.git/tree/configure?h=...
1. You can do a superset of POSIX, like BASH and I think Zsh. This gives you a graceful upgrade path while maintaining backward compatibility, at the expense of being somewhat "stuck" in places. Oil is another attempt at exploring how best to use this path.
2. You can throw out POSIX totally, like fish and PowerShell. This lets you really improve things, at the expense of breaking backwards compatibility. IMHO, breaking compatibility is painful enough that it's really really hard to justify.
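To make option 1 concrete, here is a hypothetical pair of scripts (file names are illustrative): the POSIX one runs under dash, bash, ksh, etc., while the one using a bash array extension is rejected by a strict POSIX shell.
$ cat portable.sh
#!/bin/sh
for f in *.txt; do printf '%s\n' "$f"; done    # plain POSIX, runs everywhere

$ cat bashism.sh
#!/bin/bash
files=(*.txt)                # arrays are a bash/zsh extension, not POSIX
printf '%s\n' "${files[0]}"

$ bash portable.sh     # fine: bash is a superset of POSIX sh
$ dash bashism.sh      # fails with a syntax error on the array assignment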
It's also worth pointing out that you can separate the roles of "interactive shell" and "shell for scripts". It is, for example, perfectly reasonable to use fish for interactive sessions while keeping /bin/sh around and perhaps even preferring dash as its implementation, which gives you compatibility with software while making things friendlier to users. I mean, I say this as someone who writes a lot of sh scripts and between that and years of practice my fingers expect something roughly sh-like, but I hear a lot of good things from folks who just switched their interactive shell to ex. fish.
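In practice that separation is just two independent choices -- the login shell for interactive use, and the shebang for scripts (paths illustrative; fish must be listed in /etc/shells for chsh to accept it):
$ chsh -s "$(command -v fish)"   # interactive sessions get the friendly shell
$ head -n1 deploy.sh             # scripts keep targeting the lowest common denominator
#!/bin/sh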
All I know is that ZSH works with 100% of tasks and scripts I need and fish does not. Therefore, I get pissed at fish and consider it a bad shell. Who cares if fish is built on a fresh new philosophy and this week's language du jour if it doesn't work?
I'm using the tool that works the way it's supposed to. I don't care whether it works because it's using standards from 50 or 500 years ago, because that is totally and completely disjoint from being a good tool.
Granted, that still is a fair point IMO; backwards compatibility is for users too, not just programs.
But it also increases the mental workload a bit. For one, you now use two similar-but-not-quite-the-same tools, and have to keep them straight to make sure you always use the right syntax.
What really did me in was, most of the snippets, docs, etc on the internet were POSIX-compatible, so I either had to translate to fish (which was less bash-compatible at the time), make a temp script, or drop into bash. All of which were constantly-annoying speed bumps.
One of the things I like about Oils (and why I'm contributing to it), is the bash-compatible part and the future-directions part are the same executable, so toggling the behavior is very fast.
They also agreed with you in the early 1990s. There are some quotes from Richard Stallman, David Korn (author of AT&T ksh), and Tom Duff (author of the rc shell) here lamenting the Bourne shell:
https://www.oilshell.org/blog/2019/01/18.html#slogans-to-exp...
A problem with using a Bourne shell compatible language is that field splitting and file name generation are done on every command word
nobody really knows what the Bourne shell’s grammar is
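The first quote is about a real footgun: every unquoted expansion is re-split and re-globbed on each command word. A minimal sketch, assuming a.txt and b.txt exist in the current directory:
$ msg='*.txt  hello   world'
$ printf '[%s]\n' $msg       # unquoted: split on whitespace, then *.txt is globbed
[a.txt]
[b.txt]
[hello]
[world]
$ printf '[%s]\n' "$msg"     # quoted: one word, passed through verbatim
[*.txt  hello   world]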
---
But there is a "collective action" problem. Shell was the 6th FASTEST growing language on GitHub in 2022: https://octoverse.github.com/2022/top-programming-languages
I imagine that, in 2025, there are MORE new people learning POSIX shell/bash than, say, any other shell here: https://github.com/oils-for-unix/oils/wiki/Alternative-Shell...
Because they want to get work done for the cloud, or embedded systems, or whatever.
Also, LLMs are pretty good at writing shell/bash!
---
Oils is designed to solve the legacy problem. OSH is the most bash-compatible shell in the world [1]:
and then you also have an upgrade to YSH, a legacy-free shell, with real data structures: https://oils.pub/ysh.html
YSH solves many legacy problems, including the exact problems from the 1990's pointed out above :-)
So to the extent that you care about moving off of bash for scripting, you should probably prefer OSH and YSH to Brush
It looks like Brush aims for the OSH part (compatible), but there is no YSH part (dropping legacy)
(I may run Brush through our spec tests to see how compatible it is, but looking at number of tests / lines of code, I think it has quite some distance to go.)
[1] e.g. early this year, Koichi Murase rewrote bash arrays in OSH to use a new sparse data structure, which I mentioned in the latest blog post. Koichi is the author of the biggest shell program in the world (ble.sh), and also a bash contributor.
https://github.com/oils-for-unix/oils/wiki/The-Biggest-Shell...
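For context on why "sparse" matters: bash array indices don't have to be contiguous, so an implementation backed by a dense vector would fall over on perfectly legal scripts. A tiny illustration:
$ a[0]=first
$ a[1000000]=last     # legal, and must not allocate a million slots
$ echo "${#a[@]}"     # count of elements actually set
2
$ echo "${!a[@]}"     # the indices in use
0 1000000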
I strongly disagree with the notion of only learning one shell language "because what if I telnet into an ancient Sun box and Fish isn't available?" In exactly the same way, I don't exclusively write my programs in C in case some remote host might not have Python or Rust or Fish some day. I'll cross that bridge when I come to it, but in the mean time I want to use something that makes me happy and productive.
But please don't ruin the one great thing about shell scripting, which is that it's still possible to write one shell script that runs everywhere. Yes it's old, antiquated and quirky. It's also very convenient not to have to 1) install new tools on every system, 2) adapt a billion old scripts for a new tool, and 3) learn yet-another-new-paradigm.
I'd love to see more shells exploring things beyond POSIX. Text-based stdin/stdout will always have its place, but having ways to express, serialize, and pass along data in more structured ways is quite nice.
What do you mean?
YMMV, I’m mainly speaking from my own experience/history with powershell. For a long time it was the only way I knew of to manage various aspects of low level windows OS settings.
Using Windows is mostly GUI first, and traditional (DOS) command line second. Powershell is mostly used by admins.
Nushell on the other hand takes the (IMO more pragmatic) stance that the underlying app will most likely be writing strings to stdout and it's the shell's job to make it easy to discern the structure in those strings.
Perhaps a powershell wizard can show me that I'm wrong about this, but my feeling is that the powershell equivalent to this nushell pipeline is going to either call some external program (in addition to docker) or be quite messy:
$ docker ps | detect columns | where NAME == "foo"
But merely concatenating processes which emit and consume data of known shape is something that the OS can do without an interactive shell. What you really need a shell for is those situations where a bit of duct tape and string is needed to keep the bits flowing... when "--format '{{json .}}'" doesn't work.
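For contrast, when the producer does cooperate, the "known shape" case needs no shell-side guessing at all, e.g. (assuming jq is installed; field names are the ones docker emits with the json template):
$ docker ps --format '{{json .}}' | jq -r 'select(.Names == "foo") | .Status'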
Re: ConvertFrom-Csv, touché, but I'm going to guess that it's not prepared to handle cases where the delimiter also appears in the output (spaces in this case).
$ docker ps | detect columns | select CREATED STATUS | get 1 # ConvertFrom-Csv equivalent
╭─────────┬───────╮
│ CREATED │ weeks │
│ STATUS │ 9 │
╰─────────┴───────╯
Nushell's answer here is --guess, which counts characters to determine how far from the left the column starts.
$ docker ps | detect columns --guess | select CREATED STATUS | get 1
╭─────────┬─────────────╮
│ CREATED │ 2 weeks ago │
│ STATUS │ Up 9 days │
╰─────────┴─────────────╯
I'm not trying to bash pwsh; it's just that they're trying to be different kinds of thing.
Common worries I hear from people that turned out to be non-issues in practice:
- Not POSIX compatible: Nushell doesn't aim to replace POSIX, there's no problem with dropping back to bash to execute some shell snippets
- You need to re-learn everything: I'm not a huge fan of how the commands are organized, but I still didn't find it that difficult. nushell's prompt/readline comes with an inline help menu with fuzzy search. Hit CTRL+O to edit a command/pipeline in your IDE/editor of choice with LSP backed intellisense, type-checking and in-editor command docs/help. The syntax is very simple and intuitive.
- Just use python: Sure, but python comes with a lot of disadvantages. It's slow and uses dynamic typing. Static typing in nushell catches typos in pipelines & scripts before they execute. It also makes in-shell and IDE LSP tab-completions very accurate. Large files process quickly, though nushell will still consume more memory if you aren't able to process all the data in a streaming fashion. It's like having jq but with autocomplete, and it works on all command output & shell variables. Though if you really like python, check out Oil/OSH/YSH: https://oils.pub/
- All Unix commands output text, structured data is useless in a shell: `detect columns` (https://www.nushell.sh/commands/docs/detect_columns.html) - now it's structured. Or use `from <format>` if the command outputs CSV, JSON, INI, YAML, etc... Or don't, because GNU tools work fine in nushell if you keep everything in text format
And there are other crazy features too.
- Write nushell plugins in your language of choice, plugins can work with structured data
- Plugins can run in the background and maintain state, nushell can automatically start a plugin when it is first used and stop the plugin when it is idle
- e.g. a plugin can open a SQL connection and use it across multiple commands. There's a built-in plugin for opening in-mem/on-disk SQLite databases
- Data can carry metadata, e.g. binary data can carry its mime type, strings often carry metadata about which line and file the string was read from.
- Closures, generators, ranges, errors/exceptions + try-catch
- Ongoing work on DAP support to allow debugging scripts from your IDE
- Create your own hooks to customize how different types of data are displayed. Display structured data in table/tree form, display binary data in hex, etc...
- Collect related commands/variables into modules. Load a module knowing that you can easily unload the whole module later, module contents don't pollute global state. Variable declarations, env vars and loaded modules are scoped to the current code block and disappear after the closing bracket, lowering the odds of a name collision.
- Native support of Polars dataframes to work with even moar data
- Complex parallelism: message-passing/actor-architecture background jobs. Turn-key parallelism: transform every element of a list in parallel - `par-each` (https://www.nushell.sh/commands/docs/par-each.html)
The biggest downside of nushell is that it hasn't hit 1.0 yet so commands occasionally get renamed. Expect that you may occasionally need to tweak a script to get it working again. Definitely a pain point.
Nowadays `job unfreeze` will do the trick.
But a lot of the structured data transformation use cases I encounter, I find myself tackling in DuckDB. It's a little harder for the simplest things, but it pulls ahead quickly. Or at least it does if you need to remember SQL anyway...
But I must agree that dedicated data analysis tools like DuckDB and jq are more powerful, intuitive, and performant. I guess what makes nushell appealing is how the data is already in nushell. It's where I stash any inputs I plan to use, any output commands produce and also any datasets I'm currently working on.
The true value of nushell is its role as a data exchange that preserves typing+structure, and in providing tools so that ingesting structured data is easy and parsing unstructured data is not daunting. I'm less pushing for nushell specifically and more hoping that it encourages more people to think about some larger questions:
- It's time to question the role of "UTF-8 text" as "the basic fundamental unit of data in the POSIX ecosystem"
- Typed/structured data brings significant value and is not harder to work with
- How can we improve data interchange between tools/apps without causing breakage? There's been quite a bit of thinking on whether CLI tools can negotiate with each other to switch to communicating via higher-level data formats. Ideally it should work over common transports like SSH too. Unfortunately, I haven't seen any proposals that don't also introduce new problems. The nushell authors are looking into this as well.
- How can we evolve terminals from the simple, reliable text renderers (that they never were) into simple, reliable renderers of general structured data?
Related work:
- https://arcan-fe.com/ (obligatory mention: https://arcan-fe.com/2021/04/12/introducing-pipeworld/)
- https://github.com/nushell/nana (nushell GUI experiment)
- https://domterm.org/index.html
- https://acko.net/blog/on-termkit/ (defunct)
Other than OSH, it seems to be the only shell that aims for POSIX/bash compatibility, out of dozens of alternative shells: https://github.com/oils-for-unix/oils/wiki/Alternative-Shell...
As far as I know, OSH is the most POSIX- and bash-compatible shell:
Nine Reasons to Use OSH - https://oils.pub/osh.html
If I have time, I will run this through our spec tests: https://oils.pub/release/0.29.0/test/spec.wwz/osh-py/index.h...
---
About this part: There are a number of other POSIX-ish shells implemented in a non-C/C++ implementation language
OSH is implemented in an unusual style -- we wrote an "executable spec" in typed Python, and then the spec is translated to C++.
That speeds it up anywhere from 2x-50x, so it's faster than bash on many workloads
e.g. a "fibonacci" is faster than bash, as a test of the interpreter. And it makes 5% fewer syscalls than bash or dash on Python's configure (although somehow this doesn't translate into wall time, which I want to figure out)
It's also memory safe, e.g. if there is no free() in your code, then there is no double-free, etc.
---
As mentioned on the OSH landing page, YSH is also part of the Oils project, and you can upgrade with
shopt --set ysh:upgrade
If you want JSON and so forth, e.g.
ysh$ json read < x.json
ysh$ = _reply
(Dict) {shell: "ysh", fun: true}
YSH aims to be the "ultimate glue language" - https://oils.pub/ysh.html
How does that work in practice? Is it an arena allocator per command execution? Fixed size preallocation of space for shell variable names and values?
That was completed in 2023: Pictures of a Working Garbage Collector
https://www.oilshell.org/blog/2023/01/garbage-collector.html
https://news.ycombinator.com/item?id=34350260
There are two reasons to have a GC:
(1) The AST (aka "lossless syntax tree") is actually a GRAPH - it is useful to share nodes.
I think graphs are fairly common for ASTs. Once a node can have multiple parents, then ownership becomes less clear, and errors using arenas can easily cause memory safety bugs.
For example, in Rust, I think you start using Rc<T> and so forth, which is automatic memory management at runtime.
(2) YSH is part of Oils, and it has nested dicts and lists like Python and JavaScript.
Once you have nested dicts and lists, you need GC. We actually have the GC at the "meta-level", and that (somewhat surprisingly) saves a ton of code and bugs.
It's like writing a Python/Ruby interpreter in Java or Go, and re-using the platform GC, rather than writing one specific to your language.
In particular, we don't have GC rooting, ownership, or anything like Py_INCREF/DECREF littered all over the codebase. This makes our implementation like an executable spec!
And that makes it very easy to contribute -- you write typed Python, and it's as fast as shells written in C. I think it's the best of both worlds
---
Something I've pointed out over the years, and which many people find illuminating, is
- bash doesn't have nested maps and arrays -- it only has flat ones. Therefore it doesn't need GC
- awk too - it doesn't need GC, because its data structures are limited - https://news.ycombinator.com/item?id=28785732
- make and cmake too.
I put all these "weak glue" languages in the "string-ish" category. In contrast, YSH gains A LOT of power from having real data structures, and that requires GC.
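To make the "flat" point concrete: in bash, array and associative-array values are always strings, so the only way to "nest" is to flatten into a delimited string and re-split it later (a small sketch):
$ declare -A server
$ server[host]=example.com
$ server[ports]="80 443"        # a list can only be stored as one string
$ for p in ${server[ports]}; do echo "port $p"; done
port 80
port 443
# there is no server[ports][0], and no way to store a map inside a map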
(Writing a GC was the hardest part of the project -- I think sh/awk/make/cmake left it out for a reason! Even nushell has no GC, I believe, and fish lacks one too.)
YSH has more of the power of Python/JS/Ruby, which if you look at it historically, did "replace" shell and awk for a huge set of problems. (Guido van Rossum specifically mentioned the "hole" between shell and C as a motivation for Python.)
Garbage Collection Makes YSH Different - https://www.oilshell.org/blog/2024/09/gc.html
---
On a different note, if anyone really wants a shell buildable with the Rust toolchain, it would be worthwhile to TRANSLATE Oils to Rust. This will definitely work, because Oils is translated to C++ (completed in early 2024).
You would have to write the runtime, which is around 4K lines for the garbage collector, and 4K lines for the OS bindings. That's a lot easier than writing a bash-compatible shell.
That is, Oils has about 8K lines of hand-written C++ code. Compare with bash which is 162K+ lines of C written from scratch -- it's 20x less.
I think 8K lines of unsafe code is also comparable to Rust binaries. e.g. prior to ~2018, Rust binaries used dlmalloc by default, which is 20-30K lines of C code.
(What's important is that almost all PRs modify safe code only -- it is very easy to review typed Python)
This would be a fun exercise for anybody interested in writing a GC in Rust (which is a hard challenge, with many nontrivial choices). You can write a GC, and get a shell for free, etc.
---
Also, Brush and nushell are different projects ... Oils has both things -- OSH and YSH -- compatible and new
So you actually get 2 shells for free by doing that :-P The "executable spec" strategy took a long time, but it actually worked!
(...HN formatting fail, imagine shell output showing the nixpkgs bash binary is 1.1M, the brush binary is 6.9M...)
And there is no prospect of further amortizing that size through shared libraries. Without shared libraries, the only chance I see for Rust being used to replace base system tools is with multi-call binaries a la busybox.
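The busybox model mentioned here: one static binary that dispatches on argv[0], so dozens of tools share a single copy of the code (illustrative commands and paths):
$ ln -s /bin/busybox /usr/local/bin/sh   # applet chosen from the name it's invoked as
$ busybox sh -c 'echo hi'                # or named explicitly as the first argument
hi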
Hello world is really large, and it's unamusing how so much of the standard library is crammed into the resulting binary, no matter how trivial...
Do you know the current status of dynamic linking? I guess the lack of ABI stability is the big blocker, right? Probably no use in formalizing the linking bits if the goal posts keep moving. So it seems like the big problem is some committee will never complete the task... Because it will never be perfect... Something like that.
Probably the wrong place to ask, but where is the claim that static compilation is "hurting battery life" coming from? More efficient use of CPU caches because frequently used shared libs are more likely to be cached? Or fewer allocations in RAM, maybe?
So I would run WSL in powershell, and Brush in WSL?
Is it faster, but otherwise identical to bash? And if yes, are there any sorts of benchmarks?
Implementation language: Rust
That strikes me as enough for many use-cases.
But it no longer seems to be in active development.
I love rust, but please, keep free software free. If only because you most likely benefited from past free software.
Ericson2314•2mo ago
We have a decent amount of code in bash that I'd like to get working on Windows too, once Nix on Windows is ready. I'm happy to rewrite it to a better language, but if I can get a non-Cygwin/MSYS2 bash-compatible shell, that's a very nice thing to try out.
chasil•2mo ago
https://frippery.org/busybox/index.html
This is actually the Almquist shell with many bashisms brought in (no arrays though).
Edit: The Ada port below might interest you: https://github.com/AdaCore/gsh
https://archive.fosdem.org/2019/schedule/event/ada_shell/
chubot•2mo ago
https://news.ycombinator.com/item?id=43910883 (similar comment about Rust)
Bash arrays are extremely hairy, with many corner cases, and differ from bash version to version.
And in particular, the shell code in Nix DOES RELY on these corners (e.g. https://www.oilshell.org/blog/2024/06/release-0.22.0.html)
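One example of that kind of corner: expanding an empty array under `set -u` changed behavior in bash 4.4, so a compatible shell has to decide which bash it is imitating. Sketch:
$ set -u
$ empty=()
$ echo "${empty[@]}"
# bash <= 4.3: "unbound variable" error
# bash >= 4.4: expands to zero words, no error
$ echo ${empty[@]+"${empty[@]}"}   # the classic workaround for older bash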
---
I mentioned in another comment that Koichi Murase, who is a bash contributor, and wrote the largest shell program in the world, just overhauled the bash array support in OSH
A few relevant test files -- Koichi added a huge number recently:
https://oils.pub/release/0.29.0/test/spec.wwz/osh-py/array-s...
https://oils.pub/release/0.29.0/test/spec.wwz/osh-py/array-a...
https://oils.pub/release/0.29.0/test/spec.wwz/osh-py/ble-idi...
Our tests are thorough enough that we ROUTINELY find bugs in bash, like integer overflow bugs.
Koichi also knows about the differences between say bash 4.3, 4.4, 5.0, 5.1, etc. Because he wrote a very large program that uses bash arrays all over the place.
---
Feel free to use our tests in any case (other shells, like the Scheme shell used to bootstrap Guix, have done so)
And feel free to post a message to https://github.com/oils-for-unix/oils if you're interested or have questions
chubot•2mo ago
And I did a bunch of research on it (e.g. https://lobste.rs/s/qjzd9y/everyone_quotes_command_line_argu... )
I did notice that git for windows uses a bash built with MSYS. And I noticed that Python's subprocess module implements pipelines with native Win32, not with MSYS.
So that is something we can do in Oils, in theory
I won't say it's high priority, but of course it's an open source project, and users often change the priorities
Two things that would really help are (1) finding a skilled Win32 programmer and (2) getting another grant (we've gotten 3 in the past)
chasil•2mo ago
I believe that busybox is produced by the Windows cross-compiler that I have loaded from EPEL. Can gsh be built the same way?
nikokrock•2mo ago
Though I never tried it, using a cross compiler to compile GSH should not be an issue.
chasil•2mo ago
Alas, no Ada.
"Using this toolchain allows you to build binaries for the following programming languages: C, C++, Objective-C, Objective-C++ and Fortran."
https://fedoraproject.org/wiki/MinGW/Tutorial
I think all of these packages were pulled in by the "yum install":
harrison_clarke•2mo ago
You'd probably want the rest of the programs, too; bash isn't too useful on its own. You can also find those on the same website.