https://news.ycombinator.com/item?id=38987109
#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"
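(For anyone squinting at that: it's a Dockerfile whose first line is a shebang, so the file itself acts as the "executable". A rough usage sketch, assuming you saved it as run.dockerfile with an ordinary Dockerfile body below the shebang:)

    chmod +x run.dockerfile
    ./run.dockerfile   # env -S runs bash, which builds the file as a Dockerfile,
                       # greps the image sha256 out of the build log, and docker-runs it on :8080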
But not on other platforms; there it's the same Docker, just running Linux inside a VM.
In all seriousness, Docker as a requirement for end users to run an 'executable' seems like a 'shift-right' approach to deployment effort: instead of doing the work to make a usable standalone executable, a bunch of requirements are just pushed onto the users. In some cases your users might be technical, but even then Docker only seems to make sense when it's kept inside an environment where the assumption of a container runtime is already there.
I assume extra steps are needed to allow the 'executable' to access filesystem resources, making it sandboxed but not in a way that's helpful for end users?
Other languages like Go making it relatively easy to build _native_ programs and to cross-compile them makes them a solid choice for CLI tools, and I was genuinely hoping that more tooling like that was coming to other ecosystems. Perhaps it's naive to expect a shift like that for a language that's always been interpreted, but I like when I can run developer tools as native programs instead of ending up with various versions of a runtime installed (npx doesn't _solve_ this problem, merely works around it).
You can have a full and extensive API backend in Go with a total image size of 5-6 MB.
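Something like this gets you there (untested sketch; the image name and project layout are made up):

    # static Go binary, then an empty base image, so the final size is basically the binary
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags='-s -w' -o api .
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY api /api
    ENTRYPOINT ["/api"]
    EOF
    docker build -t tiny-api .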
What is killing me at the moment is deploying Docker based AI applications.
The CUDA base images come in at several GB to start with, then typically a whole host of python dependencies will be added with things like pytorch adding almost a GB of binaries.
Typically the application code is tiny as it's usually just Python, but then you have the ML model itself. These can be many GB too, so you need to decide whether to add the model to the image or mount it as a volume; either way it needs to make its way onto the deployment target.
I'm currently delivering double-digit-GB Docker images to different parts of my organisation, which raises eyebrows. I'm not sure there's a way around it though; it's less a Docker problem and more an AI/CUDA issue.
Docker fits current workflows, but I can't help feeling that custom VM images for this type of thing would be more efficient.
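For what it's worth, the "mount the model" variant from above usually ends up looking something like this (sketch; paths, image and env var names are invented):

    # keep the multi-GB weights out of the image and bind-mount them at runtime
    docker run --rm --gpus all \
      -v /srv/models/my-model:/models:ro \
      -e MODEL_DIR=/models \
      -p 8000:8000 \
      my-ai-app:latest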
So CUDA gets packaged up in the container twice unless I start building everything from source or messing about with RPATHs!
Heck, you can even cross-compile Go code from any architecture to another (even for different OSes), and Docker would be useless there unless it has mechanisms to bind qemu-$ARCH to containers via binfmt.
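Both halves of that are short in practice; roughly (from memory, so treat it as a sketch):

    # Go cross-compilation, no Docker needed
    GOOS=darwin  GOARCH=arm64 go build -o tool-darwin-arm64 .
    GOOS=windows GOARCH=amd64 go build -o tool-windows-amd64.exe .

    # the Docker side of it: register qemu via binfmt, then build foreign-arch images
    docker run --privileged --rm tonistiigi/binfmt --install all
    docker buildx build --platform linux/arm64,linux/amd64 -t myimage .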
Are you running on bare servers? Sure, a Go binary and a script is fine.
I understand what the OP is saying but not sure they get this context.
If I were working in that world still I might have that single binary, and a script, but I'm old school and would probably make an RPM package and add a systemd unit file and some log rotate configs too!
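The systemd part of that is maybe ten lines; a sketch (service and binary names are placeholders):

    # /usr/lib/systemd/system/foo.service, as it might ship in the RPM
    cat > /usr/lib/systemd/system/foo.service <<'EOF'
    [Unit]
    Description=foo daemon
    After=network.target

    [Service]
    ExecStart=/usr/bin/foo
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable --now foo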
So people are building Docker "binaries" that depend on Docker being installed on the host, to run a container inside a container on the host, or even better, on a non-Linux host, where all of that then runs in a VM on the host... just... to run a Go application that is... already compiled to a binary?
Of course you can do it directly on the machine but maybe you don't need containers then.
In the same vein: people put stuff within a box, which is then put within another bigger box, inside a metal container, on top of another floating container. Why? Well, for some that's convenient.
Except it requires people to install Docker.
I.e. download this linux/mac/windows application to your windows/linux/mac computer.
Double-click to run.
Seems like all bits and pieces are already there, just need to put them together.
this works for actual compiled code. no vm, no runtime, no interpreter, no container. native compiled machine code. just download and double-click, no matter which OS you use.
To achieve that you'll need some kind of compatibility layer. Perhaps something like wine? Or WSL? Or a VM?
Then you'll have what we already have with JVM and similar
What do you mean, "requires Windows 11"? What is even "glibc" and why do I need a different version on this Linux machine? How do I tell that the M4 needs an "arm64", why not a leg64 and how is this not amd64?
In other words, it's very simple in theory - but the actual landscape is far, FAR more fragmented than a mere "that's a windows/linux/mac box, here's a windows/linux/mac executable, DONE"
(And that's for an application without a GUI.)
With dependency management systems, docker, package managers.
macOS and Windows are closed source, and that is of course a problem; I guess the first demo would be a universally runnable Linux executable on Windows.
The other way around is easier, and already exists thanks to Wine and the ability of the Linux kernel to register custom executable formats (https://docs.kernel.org/admin-guide/binfmt-misc.html)
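The binfmt_misc registration is literally one write to a proc file; roughly (the Wine path and exact flags vary by distro, so take this as a sketch):

    # hand anything starting with the DOS/PE 'MZ' magic to Wine
    echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register
    chmod +x app.exe && ./app.exe    # now launches through Wine directly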
It's not that hard to wrap your python/java/whatever app in a polyglot executable that will run on your Linux box, on your Mac, and on your Windows box. Here's a much harder target: "I would like to take this to any of such boxes, of reasonably vanilla config, and get it to run there, or at least crawl. 'Start and catch fire' doesn't count, 'exit randomly' doesn't count." The least problematic way to do this is "assume Java", and even that is wildly unsuccessful (versions and configs and JVMs, oh my!). The second least problematic is "webpage" (unless you are trying to interact with any hardware).
The differences in boxes within an OS are often as large as differences across OSes. Docker was supposed to help with this by "we'll ship your box then," and while the idea works great, the assumption "there's already a working Docker, and/or you can just drop a working Docker" is...not great: you just push everything up a level of abstraction, yet end up with the original problem unsolved and unchanged. (There's an actual solution "ship the whole box, hardware and everything," but the downsides are obvious)
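The polyglot-executable trick mentioned a couple of paragraphs up is usually just a tiny shell stub with an archive concatenated after it; a rough sketch (stub.sh and app.pyz are placeholder names, and the details may need adjusting):

    # stub.sh is roughly:
    #   #!/bin/sh
    #   exec python3 "$0" "$@"   # sh stops here; python ignores the leading text
    #                            # because the zip directory sits at the end of the file
    cat stub.sh app.pyz > myapp && chmod +x myapp
    ./myapp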
Wired: docker2exe.
Inspired: AppImage.
(I'll show myself out.)
alumic•1mo ago
Pack it in, guys. No magic today.
jve•1mo ago
I haven't tried this stuff, but maybe this is something in that direction.
matsemann•1mo ago
To use it, it's basically just scripts loaded into my shell. So if I do "toolname command args" it will spin up the container, mount the current folder and some config folders some tools expect, forward some ports, then pass the command and args to the container which runs them.
99% of the time it works smoothly. The annoying part is when some tool depends on some other tool on the host machine, for instance wanting to do some git stuff; then I have to have git installed and my keys copied in as well.
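The wrapper in question is mostly just a shell function of this shape (simplified sketch, names invented):

    # "toolname" runs entirely in a container but feels like a local command
    toolname() {
      docker run --rm -it \
        -v "$PWD":/work -w /work \
        -v "$HOME/.config/toolname":/root/.config/toolname \
        -p 3000:3000 \
        toolname-image "$@"
    }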
endofreach•1mo ago
Tip: you could also forward your SSH agent. I remember it was a bit of a pain in the ass on macOS and on a Windows WSL2 setup, but it's likely worth it for your setup.
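On Linux that forwarding is one mount plus one env var (sketch; Docker Desktop on macOS exposes its own agent socket path instead, which is where the pain came from):

    # hand the host's ssh agent socket to the container
    docker run --rm -it \
      -v "$SSH_AUTH_SOCK":/ssh-agent \
      -e SSH_AUTH_SOCK=/ssh-agent \
      my-tool-image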
lelanthran•1mo ago
I don't understand this requirement/specification; presumably this use-case will not be satisfied by a shell script, but I don't see how.
What are you wanting from this use-case that can't be done with a shell script?
lelanthran•1mo ago
How's "packing" cli commands into a shell script any different from "packing" CLI commands into a container?
lazide•1mo ago
People generally don't put stuff into containers if it already works on the CLI in whatever environment you're in. Stuff that doesn't, of course they do.
Having a convenient shell script wrapper to make that not a pain in the ass, while letting all the environment management stuff still work correctly in a container is convenient.
Writing said wrapper each time, however is a pain in the ass.
Generating one, makes it not such a pain in the ass to use.
So then you get convenient CLI usage of something that needs a container to not be a pain in the ass to install/use.
johncs•1mo ago
Before zipapp came out I built superzippy to do it. Needed to distribute some python tooling to users in a university where everyone was running Linux in lab computers. Worked perfectly for it.
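For reference, the zipapp route is basically a one-liner these days (module and entry point names are placeholders):

    # bundle a package directory into a single self-running .pyz
    python3 -m zipapp mytool/ -m "mytool.cli:main" -p "/usr/bin/env python3" -o mytool.pyz
    ./mytool.pyz --help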
throwanem•1mo ago
Probably takes a couple minutes, maybe less if you've got a good fast distro mirror nearby. More if you're trying to explain it to a biologist - love those folks, they do great work, incredible parties, not always at home in the digital domain.
arjvik•1mo ago
[1]: https://github.com/NilsIrl/dockerc
dheera•1mo ago
I normally hate things shipped as containers because I often want to use them inside a Docker container myself, and docker-in-docker just seems like a messy waste of resources.
dheera•1mo ago
Can we please go back to the days of sudo dpkg -i foo.deb and then just /usr/bin/foo ?
vinceguidry•1mo ago
https://medium.com/@moshedana058/understanding-docker-in-doc...