https://news.ycombinator.com/item?id=38987109
#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"
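(For context, that one-liner is the self-building Dockerfile trick: it goes on line 1 of a Dockerfile, which Docker ignores as a comment, so marking the file executable and running it builds the image and then runs whatever image ID the build printed. A minimal sketch of the rest of such a Dockerfile; the Go server is an assumption, listening on 8080 to match the -p flag in the one-liner:)

    # line 1 is the shebang one-liner quoted above; Docker treats it as a comment
    FROM golang:1.22-alpine
    WORKDIR /src
    COPY . .
    RUN go build -o /server .
    EXPOSE 8080
    CMD ["/server"]

Then chmod +x Dockerfile && ./Dockerfile builds and runs it in one step.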
But not on other platforms. There it's the same, except Docker runs Linux in a VM.
In all seriousness, Docker as a requirement for end-users to create an executable seems like a 'shift-right' approach to deployment effort: instead of doing the work to make a usable standalone executable, a bunch of requirements are just pushed onto the users. In some cases your users might be technical, but even then Docker only seems to make sense when it's kept inside an environment where the assumption of a container runtime already holds.
I assume extra steps are needed to allow the 'executable' to access filesystem resources, making it sandboxed but not in a way that's helpful for end users?
Other languages like Golang making it relatively easy to build _native_ programs and to cross-compile them makes them a solid choice for CLI tools, and I was genuinely hoping that more tooling like that was coming to other ecosystems. Perhaps it's naive to expect a shift like that for a language that's always been interpreted, but I like it when I can run developer tools as native programs instead of ending up with various versions of a runtime installed (npx doesn't _solve_ this problem, it merely works around it).
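(Concretely, cross-compiling a Go CLI is just two environment variables on top of go build; a minimal sketch, with the binary names made up and CGO_ENABLED=0 assuming no cgo dependencies:)

    GOOS=linux   GOARCH=amd64 CGO_ENABLED=0 go build -o mytool-linux-amd64 .
    GOOS=darwin  GOARCH=arm64 CGO_ENABLED=0 go build -o mytool-darwin-arm64 .
    GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o mytool-windows-amd64.exe .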
You can have a full and extensive API backend in Golang, with a total image size of 5-6 MB.
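(That kind of size usually comes from a multi-stage build that drops a statically linked binary into an empty base image; a rough sketch, with the package path ./cmd/api being an assumption:)

    # build stage: compile a static, stripped binary
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app ./cmd/api

    # final stage: nothing but the binary; swap scratch for alpine/distroless if you need CA certs or tzdata
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]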
What is killing me at the moment is deploying Docker based AI applications.
The CUDA base images come in at several GB to start with, then typically a whole host of python dependencies will be added with things like pytorch adding almost a GB of binaries.
Typically the application code is tiny since it's usually just Python, but then you have the ML model itself. The weights can be many GB too, so you need to decide whether to bake the model into the image or mount it as a volume (see the run sketch below); either way, it needs to make its way onto the deployment target.
I'm currently delivering double-digit-GB Docker images to different parts of my organisation, which raises eyebrows. I'm not sure of a way around it, though; it's less a Docker problem and more an AI/CUDA issue.
Docker fits current workflows but I can't help feeling having custom VM images for this type of thing would be more efficient.
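(One common middle ground for the model question above is to bake only the code and CUDA/PyTorch layers into the image and mount the weights read-only at run time; a sketch, with the paths and image name being assumptions:)

    # weights live on the deployment target and are mounted in, keeping the image itself smaller
    docker run --gpus all \
      -v /srv/models/my-model:/models:ro \
      -p 8000:8000 \
      my-ai-app:latest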
So CUDA gets packaged up in the container twice unless I start building everything from source or messing about with RPATHs!
Heck, you can even cross-compile Go code from any architecture to any other (even for different OSes), and Docker would be useless there unless it has mechanisms to bind qemu-$ARCH to containers via binfmt.
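(For what it's worth, that binding does exist: QEMU user-mode emulators can be registered with binfmt_misc, and buildx will then build or run foreign-architecture images through them. A sketch using the commonly used setup image:)

    # one-time, privileged: register qemu-$ARCH handlers with binfmt_misc
    docker run --privileged --rm tonistiigi/binfmt --install all
    # then buildx can build an arm64 image on an amd64 host (or vice versa)
    docker buildx build --platform linux/arm64 -t myimage --load .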
Are you running on bare servers? Sure, a Go binary and a script is fine.
I understand what the OP is saying but not sure they get this context.
If I were working in that world still I might have that single binary, and a script, but I'm old school and would probably make an RPM package and add a systemd unit file and some log rotate configs too!
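(The systemd unit in such a package is only a handful of lines; a sketch with a made-up service name:)

    # /usr/lib/systemd/system/foo.service (hypothetical)
    [Unit]
    Description=foo backend
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/bin/foo
    User=foo
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target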
So people are building Docker "binaries" that depend on Docker being installed on the host, to run a container inside a container on the host, or, even better, on a non-Linux host, where all of that then runs in a VM... just to run a Golang backend which is already compiled to a binary?
Except it requires people to install Docker.
I.e. download this linux/mac/windows application to your windows/linux/mac computer.
Double-click to run.
Seems like all bits and pieces are already there, just need to put them together.
this works for actual compiled code. no vm, no runtime, no interpreter, no container. native compiled machine code. just download and double-click, no matter which OS you use.
To achieve that you'll need some kind of compatibility layer. Perhaps something like wine? Or WSL? Or a VM?
Then you'll have what we already have with JVM and similar
What do you mean, "requires Windows 11"? What is even "glibc" and why do I need a different version on this Linux machine? How do I tell that the M4 needs an "arm64", why not a leg64 and how is this not amd64?
In other words, it's very simple in theory - but the actual landscape is far, FAR more fragmented than a mere "that's a windows/linux/mac box, here's a windows/linux/mac executable, DONE"
(And that's for an application without a GUI.)
With dependency management systems, docker, package managers.
macOS and Windows are closed source and that is of course a problem; I guess the first demo would be a universally runnable Linux executable on Windows.
Wired: docker2exe.
Inspired: AppImage.
(I'll show myself out.)
alumic•16h ago
Pack it in, guys. No magic today.
jve•16h ago
I haven't tried this stuff, but maybe this is something in that direction.
matsemann•16h ago
To use it, it's basically just scripts loaded into my shell. So if I do "toolname command args" it will spin up the container, mount the current folder and some config folders that some tools expect, forward some ports, then pass the command and args to the container, which runs them.
99% of the time it works smoothly. The annoying part is when some tool depends on some other tool on the host machine. Like, for instance, it wants to do some git stuff; I then have to have git installed and my keys copied in as well.
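(A wrapper in that spirit is only a few lines of shell; a hypothetical version, with the image name, config path, and port all being assumptions:)

    # loaded from ~/.bashrc or similar; "toolname command args" runs inside the container
    toolname() {
      docker run --rm -it \
        -v "$PWD":/work -w /work \
        -v "$HOME/.config/toolname":/root/.config/toolname \
        -p 3000:3000 \
        toolname-image:latest "$@"
    }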
endofreach•1h ago
Tip: you could also forward your ssh agent. I remember it was a bit of a pain in the ass on macos and a windows WSL2 setup, but likely worth it for your setup.
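(On Linux that amounts to bind-mounting the agent socket and pointing SSH_AUTH_SOCK at it; Docker Desktop on macOS needs its own special socket path instead. A sketch, reusing the hypothetical wrapper image from above:)

    # assumes ssh-agent is running on the host and SSH_AUTH_SOCK is set
    docker run --rm -it \
      -v "$SSH_AUTH_SOCK":/ssh-agent \
      -e SSH_AUTH_SOCK=/ssh-agent \
      toolname-image:latest git pull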
lelanthran•14h ago
I don't understand this requirement/specification; presumably this use-case will not be satisfied by a shell script, but I don't see how.
What are you wanting from this use-case that can't be done with a shell script?
lelanthran•10h ago
How's "packing" cli commands into a shell script any different from "packing" CLI commands into a container?
lazide•6h ago
People generally don't put stuff that already works on the CLI in whatever environment you're in into containers. Stuff that doesn't, of course they do.
Having a shell script wrapper that makes that not a pain in the ass, while letting all the environment-management stuff still work correctly in a container, is convenient.
Writing said wrapper each time, however, is a pain in the ass.
Generating one, makes it not such a pain in the ass to use.
So then you get convenient CLI usage of something that needs a container to not be a pain in the ass to install/use.
johncs•14h ago
Before zipapp came out I built superzippy to do it. Needed to distribute some Python tooling to users at a university where everyone was running Linux on lab computers. Worked perfectly for it.
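(For reference, the stdlib zipapp flow is only a couple of commands; a sketch with hypothetical package and entry-point names, assuming the mytool package already lives in myapp/ alongside the vendored deps:)

    # vendor the dependencies into one directory, then zip it all into a runnable .pyz
    python -m pip install -r requirements.txt --target myapp/
    python -m zipapp myapp -m "mytool.cli:main" -p "/usr/bin/env python3" -o mytool.pyz
    ./mytool.pyz   # needs a Python interpreter on the target, but nothing else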
throwanem•10h ago
Probably takes a couple minutes, maybe less if you've got a good fast distro mirror nearby. More if you're trying to explain it to a biologist - love those folks, they do great work, incredible parties, not always at home in the digital domain.
arjvik•12h ago
[1]: https://github.com/NilsIrl/dockerc
dheera•5h ago
I normally hate things shipped as containers because I often want to use them inside a Docker container, and docker-in-docker just seems like a messy waste of resources.
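(One common way to dodge docker-in-docker is to hand the inner container the host's Docker socket, so anything it launches becomes a sibling container rather than a nested one; a sketch, with the image name being an assumption:)

    # no nested daemon: the inner "docker run" talks to the host's daemon via the shared socket
    docker run --rm -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      my-dev-image:latest \
      docker run --rm hello-world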
dheera•4h ago
Can we please go back to the days of sudo dpkg -i foo.deb and then just /usr/bin/foo ?