Genuine question. Not familiar with this company or the CLI product.
Tauri means we can reuse a lot of the Rust we already have, easily do the systems stuff we need, and have something light + fast. Elixir has been awesome, and makes building a realtime sync backend much easier.
Not currently open source while it's under heavy early development; we will be opening up the desktop app later on.
(nb: system web views are very inconsistent, so they're considering adding a Chromium renderer, which will bring everything full circle)
This leaves room for stuff like the Functional Source License.
Software freedoms exist as a concept for a reason, not just a bullet point to get people to click a download link that doesn’t even include source anyway.
I call such projects “open source cosplay”. It’s an outfit you put on for conferences, then take off when you’re back at the office working on the valuable, nonfree parts.
The irony of this purist mindset is that it's actually very corporatist, big-tech, and proprietary in its implications. If open source devs are discouraged by the culture from building products and making a living independently, it means that the only people who can devote significant time to open source are employees of established companies (who themselves often sell closed source proprietary products) and people who are wealthy enough to work for free. Is that the world you want?
If you want to record a runbook, then record a runbook. You can do that a million ways. Text file, confluence doc, screen recording, shell script, etc. People already don't do that; they're not gonna suddenly start doing it more because your UI is fancier.
Personally, I don't want to sit around all day writing code (or docs) to try to get the system to be like X state. I want to manually make it have X state, and then run a tool to dump the state, and later re-run the tool to create (or enforce) that state again. I do not want to write code to try to tell the computer how to get to that state. Nor do I want to write "declarative configuration", which is just more code with a different name. I want to do the thing manually, then snapshot it, then replay it. And I want this to work on any system, anywhere, without dependence on monitoring a Bash shell for commands or something. Just dump state and later reapply state.
Like, Terraform has always sucked because there was no way to dump existing resources as new code. So a team at Google made a tool to do exactly that (Terraformer). If Terraform had already had that feature, and if it didn't rely on having pre-existing state to manage resources, that would be like 90% of the way to what I'd want. Just dump resources as code, then let me re-run the code, and if I want I can modify the code to ask me for inputs or change things. (People think of Terraform as only working on cloud resources, but you could, for example, make an Ubuntu Linux provider that just configures Ubuntu for you, if you wanted.)
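To make that concrete, here's a toy sketch of the workflow I mean, in Python. The state model is just a dict standing in for real resources, so everything here is hypothetical, but the shape is: inspect, dump, reapply.

    import json

    def dump_state(live_state: dict, path: str) -> None:
        """Serialize whatever exists right now; no hand-written config."""
        with open(path, "w") as f:
            json.dump(live_state, f, indent=2)

    def apply_state(live_state: dict, path: str) -> dict:
        """Re-create the dumped state: add what's missing, fix what drifted."""
        with open(path) as f:
            desired = json.load(f)
        for name, spec in desired.items():
            if live_state.get(name) != spec:  # only touch what differs
                live_state[name] = spec
        return live_state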
> Just dump state and later reapply state
is necessarily declarative.
> Just dump resources as code,
What is the code for this resource?
    VM foo1
    Memory 16GiB
    Network mynet1
It depends on the current state of the system where the resource is applied. If VM foo1 already exists, with 16GiB of memory, and connected to network mynet1, then the code is a no-op, no code at all. Right? Anything else would be a mistake. For example, if the code would delete any matching VM and re-create it, that would be disastrous to continuity and availability, clearly a non-starter. Or, if VM foo1 exists, with 16GiB of memory, but connected to anothernet3, then the code should just change the network for that VM from anothernet3 to mynet1, and should definitely not destroy and re-create the VM entirely. And so on. It essentially still is declarative.
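Spelled out as code, "apply" has to be a diff against reality, not a blind create. A rough Python sketch of that behavior (the VM model here is hypothetical):

    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        memory_gib: int
        network: str

    def reconcile(fleet: dict[str, VM], desired: VM) -> str:
        current = fleet.get(desired.name)
        if current == desired:
            return "no-op"      # already matches: no code at all
        if current is None:
            fleet[desired.name] = desired
            return "created"
        current.memory_gib = desired.memory_gib  # adjust in place...
        current.network = desired.network        # ...never destroy/re-create
        return "updated"

reconcile(fleet, VM("foo1", 16, "mynet1")) returns "no-op" when foo1 already matches, and quietly fixes just the network when it doesn't.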
Unless the Dockerfiles are kept secret, any container can be replicated from the given Dockerfile. Barring extreme (distro/system/hardware)-level quirks, a Docker container should be able to run anywhere that Linux can.
I imagine with a lot of discipline (no apt update, no “latest” tag, no internet access) you can make a reproducible Dockerfile… but it is far from normal.
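As a toy illustration of that discipline, you could even lint for the usual offenders; this is a heuristic sketch in Python, not an exhaustive check:

    def unreproducible_lines(dockerfile: str) -> list[str]:
        """Flag Dockerfile lines that commonly break reproducibility."""
        bad_patterns = [":latest", "apt-get update", "apt update"]
        return [line for line in dockerfile.splitlines()
                if any(p in line for p in bad_patterns)]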
Dockerfiles are basically this, but with a file documenting the different steps you took to get to that state.
- I am on a team that oversees a bunch of stuff, some of which I am very hands-on with and comfortable with, and some of which I am vaguely aware exists, but rarely touch
- X, a member of the latter category, breaks
- Everyone who actually knows about X is on vacation/dead/in a meeting
- Fortunately, there is a document that explains what to do in this situation
- It is somehow both obsolete and wrong, a true miracle of bad info
So that is the problem this is trying to solve.
Having discussed this with the creator some[1], the intent here (as I understand it) is to build something like a cross between Jupyter Notebooks and Ansible Tower: documentation, scripts, and metrics that all live next to each other in a way that makes it easier to know what's wrong, how to fix it, and whether the fix worked.
[1] Disclosure: I help mod the Atuin Discord.
How does Atuin solve that problem? It seems to me that inaccurate and obsolete information can be in an Atuin document as easily as in a text document, wiki, etc., but possibly I'm not seeing something?
I believe the intent is that you get bidirectional selective sync between your terminal and the docs, so that if what's in the docs is out of date or wrong, then whatever you did to actually fix things can be synced back to the docs to reduce the friction of keeping the docs updated.
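As a toy illustration of that loop (this is not Atuin's actual implementation; the "$ "-prefixed doc format is invented):

    def sync_history_to_doc(history: list[str], doc_lines: list[str]) -> list[str]:
        """Append commands that were actually run but aren't documented yet."""
        documented = {line[2:].strip() for line in doc_lines if line.startswith("$ ")}
        for cmd in history:
            if cmd not in documented:
                doc_lines.append(f"$ {cmd}")  # pulled back from the session
        return doc_lines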
I'm cautiously curious about something like this, although I haven't tried it personally.
Thus “Runbooks That Run.”
Some people just like a particular workflow or tooling flow and build it, really. Maybe it works for enough people to be a viable market, maybe not.
For my personal projects I'm using a PHP deployment process for no reason other than feeling like it, and it handles 60% of the work without me needing to do anything. But any runbooks for it are tasks built into the tool, in the same git repo as the entire server deployment. I'm not gonna put them in some random place, or in a shell script whose commands I'd need to remember separately.
Code, for programmers, is inherently self-documenting if you keep to a simple functional style without needless complexity, with comments on the occasional section that isn't obvious: the "Create a MySQL user, roll the MySQL user's password, update the related services with the new password/user combination, remove the old user that the fired employee has credentials to, on the off chance we failed to block them at the VPN" kind of stuff.
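For instance, that rotation sequence written in this style reads like the runbook itself. A sketch, with hypothetical stubs standing in for the real work:

    def create_mysql_user(user: str) -> None: print(f"create user {user}")
    def roll_mysql_password(user: str) -> None: print(f"roll password for {user}")
    def update_services_with_credentials(user: str) -> None: print(f"repoint services at {user}")
    def remove_mysql_user(user: str) -> None: print(f"drop user {user}")

    def offboard_database_user(old_user: str, new_user: str) -> None:
        create_mysql_user(new_user)
        roll_mysql_password(new_user)
        update_services_with_credentials(new_user)
        # On the off chance the VPN block failed, drop the old credentials too.
        remove_mysql_user(old_user)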
We recently started using https://marimo.io/ as a replacement for Jupyter notebooks, as it has a number of great improvements, and this seems like a movement in a similar direction.
You can have a plaintext file which is also the program which is also the documentation/notebook/website/etc. It's extremely powerful, and is a compelling example of literate programming.
A good take on it here: https://osem.seagl.org/conferences/seagl2019/program/proposa...
That's entirely different to what's being desired by GP.
> > My dream tooling is for every tool to have a terminal interface so that I can create comprehensive megabooks to get all the context that lives in my head. i.e. jira, datadog, github etc, all in one pane.
My perspective on this is essentially having jira/datadog/github/etc be pluggable into the CLI, and where standard bash commands & pipes can be used without major restrictions. (Something akin to Yahoo Pipes)
MCP is highly centered around LLMs analyzing user requests & creating queries to be run on MCP servers. What's being asked for here doesn't revolve around LLMs at all.
We build logic once and make it automatically accessible to our clients through any of these consumption methods, in a standardized way, and I am indeed piping some of these directly to jq and other tools in the CLI for analysis.
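A minimal sketch of that pattern: each tool is a subcommand that writes JSON to stdout, so normal pipes apply (the tool and fields here are invented):

    import json
    import sys

    def list_issues() -> list[dict]:
        # Stand-in for a real API call (jira/github/datadog/...).
        return [{"id": "OPS-1", "status": "open"}, {"id": "OPS-2", "status": "done"}]

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "issues":
            json.dump(list_issues(), sys.stdout, indent=2)

Then `python mytool.py issues | jq '.[].id'` composes like any other shell citizen.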
There is a response elsewhere in the comments[1] which claims that this is trying to fix the problem of bad documentation, but it has the same fundamental problem: if you a) are responsible for fixing something, b) are unfamiliar with it, and c) the "fixing resources" the experts provided you (whether scripts, documentation, or a Runbook/Workflow) are out of date, then you're SOL and are going to have to start investigating _anyway_. A runbook and a script are just different points along the spectrum of "how much of this is automated and how much do I have to copy-paste myself?"[2]; both are vulnerable to accuracy-rot.
[0]: https://www.warp.dev/warp-drive
[1]: https://news.ycombinator.com/item?id=43766842
[2]: https://blog.danslimmon.com/2019/07/15/do-nothing-scripting-...
Because everything is a start-up now.
milkshakes•4h ago
this project appears to be intended for operational documentation / living runbooks. it doesn't really seem like the same use case.
shadowgovt•4h ago
As an on-the-fly debugging tool, they're great: you get a REPL that isn't actively painful to use, a history of commands run (rough, since the state is live and not every cell is re-run every time), and visualization at key points in the program to check, as you go, that your assumptions are sound.
rtpg•4h ago
I do think that, given the fragile nature of shell scripts, people tend to write their operational workflows in a pretty idempotent way, though...
ellieh•3h ago
you can define + declare ordering with dependency specifications on the edges of the graph (i.e. A must run before B, but B can run as often as you'd like within 10 mins of A)
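A sketch of checking that kind of edge constraint (the spec format here is invented, not Atuin's):

    import time

    last_run: dict[str, float] = {}  # step name -> completion timestamp

    def mark_done(step: str) -> None:
        last_run[step] = time.time()

    def can_run(requires: str, within_secs: float) -> bool:
        """True if the prerequisite has run, and recently enough."""
        done_at = last_run.get(requires)
        return done_at is not None and (time.time() - done_at) <= within_secs

    # B may run as often as you like, provided A finished within 10 minutes:
    # can_run(requires="A", within_secs=600)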
nine_k•3h ago
Another thing is that you'll need branches. As in: if a check succeeds, continue down one path; if it fails, take another.
Different branches can be separated visually; one can be collapsed if another is taken. Writing robust runbooks is not that easy. But I love the idea of mixing explanatory text and various types of commands together.
noodletheworld•27m ago
People already track ad-hoc operational work wherever is handy:
- in Excel
- in a Confluence document
- in a text file on your desktop
The use case this addresses is 'ad-hoc activities must be performed without being totally chaotic'.
Obviously a nice one-click/trigger based CI/CD deployment pipeline is lovely, but uh, this is the real world. There are plenty of cases where that's simply either not possible, or not worth the effort to setup.
I think this is great; if I have one suggestion it would just be integrated logging so there's an immutable shared record of what was actually done as well. I would love to be able to see that Bob started the 'recover user profile because db sync error' runbook but didn't finish running it, and exactly when that happened.
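Even a dead-simple append-only record per step event would cover that; a sketch, with an invented format:

    import json
    import time

    def log_step(path: str, runbook: str, step: str, user: str, event: str) -> None:
        record = {"ts": time.time(), "runbook": runbook,
                  "step": step, "user": user, "event": event}
        with open(path, "a") as f:  # append-only on purpose
            f.write(json.dumps(record) + "\n")

    # log_step("audit.log", "recover-user-profile", "step-3", "bob", "started")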
If you think it's a terrible idea, then uh, what's your suggestion?
I'm pretty tired of copy-pasting commands from confluence. I think that's, I dunno, unambiguously terrible, and depressingly common.
One-time scripts executed in a privileged remote container also work, but at the end of the day those scripts tend to be specific and have to be invoked with custom arguments, which, guess what, usually turn up as a sequence of operations in a runbook: query the db for a user id (copy-paste SQL) -> run the script with the id (copy-paste to terminal) -> query the db to check it worked (copy-paste SQL) -> trigger the notification workflow with the user id if it did (log in to X and click on button Y), etc.
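That sequence is exactly the shape of a "do-nothing script": each step is a prompt until someone gets around to automating it. A toy version:

    steps = [
        "Query the DB for the user id (copy-paste the SQL).",
        "Run the fix script with that id.",
        "Query the DB again to confirm it worked.",
        "If it did, log in to X and trigger notification workflow Y.",
    ]

    for number, step in enumerate(steps, 1):
        input(f"Step {number}: {step}  [Enter when done] ")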