OP: Learn docker and it stops being an “impenetrable wall.” Face it, you don’t want to use docker (or podman) because you are set in your ways. That’s fine, but it is not an argument for anyone else.
I find that many of these poorly maintained build systems have very little to do with native code. Often it is a node.js server + frontend or in the case of AI, python. Some of this seems related to the current norms in those ecosystems.
I agree that it papers over underlying portability issues, but I personally don't want to be stuck dealing with a generous maintainer's otherwise good, working software that has no macOS support.
Now, having said all that, is there another solution I should be looking at for doing similar things? Maybe this old man has missed something easier to use.
I do hate how Docker chews up my drive with images though...
So the author thinks it's better for users to go through (sometimes tedious) steps to get an application or a set of applications running, just so they "know how to use Linux". That ignores the fact that Docker/containerization's primary use case is on the developer side, and the developer still needs to know Linux to write a working Dockerfile.
I would be very curious to see this done in a robust way. Bash vs PowerShell. All the various package managers on those operating systems. Permissions, since the programs will be going onto the OS itself.
When I tried this (granted, as a very junior developer), I did not succeed.
This is something you should still have even if using Docker.
I don't knock Docker for making that easy and "tied up with a bow and ribbon" for its users (that's great!), but do agree there are times when you really don't need the extra abstraction layer.
And this guy runs a bitcoin payment service? Is this the technical level of the people writing critical payment code in the bitcoin ecosystem? Yikes
However, if you want to use a shell script for setup instead of a Dockerfile, and don’t mind terminating and recreating VMs when you change anything, and your DNS is set up well, then yeah, that can work almost as well. I do that, sometimes.
Unfortunately, the real software world doesn't solve underlying problems; people just want things up and running as soon as possible. So containerization has proved to be pretty useful.
From a hacker perspective, it’s just boring in the same way I feel about AI. It takes the fun out of crafting software.
If you admit this, then why do you go on to write against docker in such an authoritative tone?
I don’t think you understand docker.
VSCode devcontainers are awesome. They are my default way of working on any new project.
Being able to blow away a container and start with a fresh, reproducible setup - at any time - saves so many headaches.
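For anyone who hasn't tried them, a minimal .devcontainer/devcontainer.json can be a rough sketch like this (the base image, feature, and extension names here are just illustrative, not anything specific to my projects):

    {
      "name": "my-project",
      // container image the workspace runs in
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      // optional prebuilt tooling layers
      "features": {
        "ghcr.io/devcontainers/features/node:1": { "version": "20" }
      },
      // ports the editor forwards from the container to the host
      "forwardPorts": [3000],
      // runs once after the container is created
      "postCreateCommand": "npm install",
      "customizations": {
        "vscode": { "extensions": ["dbaeumer.vscode-eslint"] }
      }
    }

Blow the container away, reopen the folder, and you're back to a known-good environment.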
> 1. People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
> 2. People who are deploying a program for a corporation at a massive enterprise scale, don’t care about customizability and need some kind of guarantor of homogeneity.
Unix is only around because of its use at massive enterprise scale. Very few people were using Unix instead of DOS (or Mac OS or Windows or whatever) for their home PCs; it only got popular, and people learned how to use it and later Linux, because of its use in business. Nowadays, Docker is the standard packaging system at massive enterprise scale. As such, you should learn to use it.
I would say this part is correct.
Your first statement is incorrect as phrased, but I understand what you meant. Granted, you would have to wipe out all the cloud providers running flavors of Unix, plus most phones and Macs, to reduce the footprint. That being said, it's unpopular as a desktop OS. Phones and Macs hide it so well that most people are unaware of the underlying OS.
My first Linux machine was on my work desk in 1998, while we were running racks of UltraSPARCs in production.
I use docker extensively for local development in all my projects at home and at work. This guy is wrong about multiple things, eg "Well, if you’re expecting Docker to have a file-system easily accessible, you’re wrong"
I can access my container's OS with: docker exec -it containername bash (assuming it has bash).
If the container OS has autocomplete and other GNU tools and features, you get all that functionality. And if you rebuild the image, or even upgrade the one you have (most containers have access to package management), you end up with a new image you can use however you like, which might include running more than one service in the same container. It's just like using a script on another Unix machine, except without having to set up the physical networking or pay for a host.
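As a rough sketch of what that workflow looks like (container and image names are made up, and this assumes a Debian/Ubuntu-based image):

    # open a shell inside the running container
    docker exec -it mycontainer bash

    # inside the container: add tooling with the image's package manager
    apt-get update && apt-get install -y vim less procps

    # back on the host: snapshot the modified container as a new, reusable image
    docker commit mycontainer mycontainer-tools:latest

For anything long-lived you'd bake those packages into the Dockerfile instead, but for poking around this is all it takes.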
It's very UNIX-y to provide single entry points to services and run them in relative isolation (changes to one container do not affect the others) by default.
The massive enterprise scale part is more complicated.
First of all, we need to clarify that the "people who should know how to use Unix" here are developers and system administrators. Most people don't need to know Unix, and that's fine. You sometimes see people (I get the feeling the OP might be lowkey one of them) lamenting that not everyone runs Linux and does everything through the terminal. This is like saying everyone should be driving a manual transmission, baking their own bread, growing vegetables in their back yard, building their own computer from parts, sewing their own clothes... you get the story. All of these things can be cool and rewarding, but we lack the time and resources to become proficient at everything. GUIs are good for most people.
Now, the story of developers using Unix is much more complex. Back in the 1970s Unix wasn't very enterprise-y at all, but it gained traction in universities and research labs and started spreading to the business world. Even well into the 1980s, the "real" enterprise was IBM mainframes, with Unix still being somewhat of a rebel, but it was clearly the dominant OS for minicomputers, which were later replaced by (microcomputer-sized but far more expensive) servers and workstations. There were other competitors, such as Lisp Machines and BeOS, but nothing ever came close to displacing Unix.
Back in the 1980s, people were not using Unix on their home computers because those computers were _just not powerful enough_ to run Unix. Developers who had the money to spare certainly did prefer an expensive Unix workstation. So larger (for that time) microcomputer software vendors often used Unix workstations to develop software that would later run on cheaper microcomputer OSes. Microsoft famously used its own version of Unix (Xenix) as its main development platform during the 1980s.
This shows the enterprise made a great contribution to popularizing Unix. Back in the 1980s and 1990s there were a few disgruntled users[1] who saw the competition dying before their eyes and had to switch to the dominant Unix monoculture (if by "monoculture" you mean a nation going through a 100-sided, 20-front post-apocalyptic civil war). But nobody complained about having to ditch DOS and use an expensive Unix workstation, except, perhaps, for the fact that their choice of games got a lot slimmer.
This is all great and nice, but in the 1990s most enterprise development moved back to Windows. Or maybe it's more precise to say that the industry grew larger and the new developers were using Windows (with the occasional Windows command prompt), since it was cheap and good enough. Windows was very much entrenched in the enterprise, as was Unix, but their spheres of market dominance were different. There were two major battlegrounds where Windows was gaining traction (medium-sized servers and workstations). Eventually Windows almost entirely lost the servers but decisively won the workstations (only to lose half of them again to Apple later on). The interesting part is that Windows was slowly winning against the enterprise versions of Unix, but eventually lost to the open-source Linux.
Looking at this, I think enterprise dominance is waaaay too simplistic an explanation for why Unix won over DOS/Windows CMD/PowerShell (or Mac OS 9, if we want to be criminally anachronistic). Sure, Unix's enterprise dominance killed Lisp Machines and didn't leave any breathing space for BeOS, but that's not the claim. DOS was never a real competitor to Unix, and when it comes to newer versions of Windows, they were probably the dominant development platform for a while.
I think Unix won over pure Windows-based flows (whether with GUI or supplemented by windows command-line and even PowerShell) because of these things:
1. It was the dominant server OS (except for a short period when Windows servers managed to grab a sizable chunk of the market), so you needed to know Unix if you wrote server-side code, and it was useful to run Unix locally.
2. Unix tools were generally more reliable. Back in the 1990s and 2000s, Windows did have some powerful GUI tools, but GUI tools suffer when it comes to reproducibility, knowledge transfer and productivity. It's a bit counterintuitive, but quite obvious once you think about it: having to locate some feature in a deeply nested menu or settings dialog and turn it on is more complex than just adding a command-line flag or setting an environment variable.
3. Unix tools are more composable. The story of small tools doing one thing well and piping output is well known, but it's not just that. For instance, compare Apache httpd, which had a textual config file format, to IIS on Windows, which had a proprietary configuration database that often got corrupted. This meant that third-party tool integration, version control, automation and configuration review were all simpler with Apache httpd. This is just one example, but it applies to the vast majority of Windows tools back then. Windows tools were islands built on shaky foundations, while Unix tools were reliable mountain fortresses. They were often rough around the edges, but they turned out to be better suited for the job.
4. Unix was always dominant in teaching computer science. Almost all universities taught Unix classes and very few universities taught Windows. The students were often writing their code on Windows and later uploading their code to a Unix server to compile (and dealing with all these pesky line endings that were all wrong). But they did have to familiarize themselves with Unix.
I think all of these factors (and probably a couple of others) brought about the popularization and standardization of Unix tools as the basis for software development in the late 2000s and early 2010s.
[1] See the UNIX-Hater's Handbook: https://web.mit.edu/~simsong/www/ugh.pdf
Docker is not secure. It has no "real" security boundary, and any malicious actor could have you run a docker image that is just as much malware as an executable. Like locks on doors, it just keeps out the honest people. So I say effort spent trying to secure it is wasted.
> backup
If you have data in the docker instance, you have to use volume mounts, and then back up that volume mount. I'd say it's easier to back up than an installed app, since with an installed app you cannot be sure it didn't write its data somewhere else!
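A common pattern for that (volume and file names here are hypothetical) is to mount the volume into a throwaway container and tar it up:

    # archive the contents of a named volume into the current directory
    docker run --rm \
      -v mydata:/data:ro \
      -v "$(pwd)":/backup \
      alpine tar czf /backup/mydata-backup.tar.gz -C /data .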
If you're upset with your tooling in your Docker image, just make it so you can become root in it, make sure it has debug tooling, and so on and so forth. Nothing stops you from running `updatedb` and `locate` in it. It's just an overlayfs for the filesystem, nothing fancy.
I somewhat understand the urge for this. There is some containerization overhead, and at a small prop trading shop we wouldn't do it (apart from the annoyance of plumbing onload etc., I never figured out how to control scheduling properly), but for most things containers are a godsend.
But ephemeral changes (e.g. inside the running container) are the opposite of statelessness in the comment you are responding to?
And if you have required intricate custom changes in mounted host volumes (config, ...) that are not living alongside the compose file in the same repository, you can have "statefulness" that survives killing the containers.
Then you get into politics: you might need change XYZ for your feature, but you don't own that common image and have to rely on someone else to deploy it, so until then it's manual patches.
Not that I think Docker should always be used. It's a simple piece of tech on the surface but explodes in complexity the more complicated you try to get.
All that said, this article feels detached from reality.
I only found that it can use qemu to build or run images for a different CPU architecture than your computer's.
Why would it use qemu? Docker does containerization, not virtualization.
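As far as I understand it, that's exactly the point: containers for the host architecture run natively, and qemu user-mode emulation (via binfmt) only kicks in when the image targets a different architecture. Roughly (image names are examples):

    # one-time, privileged: register qemu binfmt handlers for foreign architectures
    docker run --privileged --rm tonistiigi/binfmt --install all

    # build the same Dockerfile for arm64 from an x86 host
    docker buildx build --platform linux/arm64 -t myapp:arm64 .

    # or run an arm64 image under emulation
    docker run --rm --platform linux/arm64 alpine uname -m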
IMHO, that's the way to go. Instead of hundreds of megabytes or even gigabytes, we're talking kilobytes, sometimes megabytes for each unit of compute.
Actually sandboxed environments per component.
Add/remove permissions of who can execute what at runtime.
But then again, I'm biased towards modern tech and while it often turns out I was right on the money, it's not always the case.
Do your own research and such, but consider this an FYI about wasmCloud.
Lack of real moderation and reddit tier opinions like this are why I no longer visit this site on a daily or even regular basis.
This is (basically) Burger King: you can have it your way. You just have to actually learn some new shit once in a while and not just refuse to ever pick up or learn anything new.
Am I missing something? Isn't this as easy as:
docker exec -it --user <username> <container> /bin/bash
I used this just yesterday to get shell access to a Docker container. From there I have full access to the filesystem.
The author didn't seem to research how to use Docker before writing this.
However, I would think you would want to edit the Dockerfile instead so that you fix it every time you restart the container.
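Something along these lines, i.e. bake the fix into the image rather than patching the live container (the image name, config path, and packages here are placeholders):

    # Dockerfile: start from the image you were fixing by hand
    FROM someapp:1.2.3

    # bake the corrected config into the image so it survives container recreation
    COPY fixed-app.conf /etc/someapp/app.conf

    # assumes a Debian/Ubuntu base; add whatever debug tooling you kept installing manually
    RUN apt-get update && apt-get install -y curl less \
        && rm -rf /var/lib/apt/lists/*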
But I think the whole point of this post is that the author has no idea how docker works and is mad about having to learn docker things, so nobody should use them. Never mind the entirety of cloud infrastructure running in containers and doing amazing things. Never mind being able to duplicate state across tons of different servers at the same time. It shows that the author isn't into making infrastructure at scale, and has no idea how incredible docker has been to software development, CI/CD pipelines, and deployment / release infrastructure as a whole.
Regarding a file system, in most docker containers you should be able to run "docker exec -ti <id> sh" and you have a shell inside the container, where you *have autocomplete*, and can *run linux commands like locate*.
Regarding configuration files, that's an application issue; 99% of applications I run with docker use configuration files, because that's just how you manage software. So either your BTCPay thing doesn't have a configuration file, and it would be the same as if you didn't use Docker, or it has one and you didn't know you could mount it inside the container.
And regarding the "fake" reasons:
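Mounting a config file is a one-liner; the paths and names below are made up:

    # bind-mount a host config file into the container, read-only
    docker run -d \
      -v "$(pwd)/myapp.conf:/etc/myapp/myapp.conf:ro" \
      myapp:latest

or the equivalent volumes: entry in a compose file.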
> It’s no easier to setup a Docker file than a installation shell script, even one that runs on multiple platforms.
Um, no? Because between “knowing the environment my code runs in” and “not knowing the environment my code runs in”, the first option is of course better and easier to reason about.
> Containers can only be “easier to manage” when they strip away all of the user’s ability to manage in the normal unix-way, and that is relatively unmissed.
What do you mean by the unix way? The container is a process, and you can manage the process the unix way.
The focus is on the process's environment, which is better if the end user *doesn't* have to manage it.
> Containerization makes software an opaque box where you are ultimately at the mercy of what graphical settings menus have been programed into the software. It is the nature of containers that bugs can never been fixed by users, only the official development team.
I think you just don't know how to use Docker to edit the files of your application, but it's really as easy as just editing files on linux because *the container is really just using a linux filesystem*
> People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
Did you miss the fact that you need to know how to use linux to write a working Dockerfile? Because it still runs linux!
I've created a (working) docker image and even if the stuff in the dockerfile breaks, I still have the image and can run the damn thing.
Though there are many reasons why you wouldn’t want to go in and delete a file since that won’t persist or be reproducible.
Regarding security, it depends on how you set up your containers. If you just run them all with default settings (as the root user) and give them all the permissions, then yeah, it's quite insecure. But if you spend an extra minute to create a new non-root user for each container and restrict the permissions, then it's quite secure. There are plenty of docker hardening tutorials.
Regarding ease of setup, it took me a while to learn docker. Setting up the first containers took a very long time. But now it's quick and easy. I have templates for the docker compose file and I know how to solve all the common issues.
Regarding ease of management, it's very easy. Most of my containers are set up once and forgotten. Upgrading an app just requires changing the version in the docker-compose file.
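For what it's worth, the "extra minute" of hardening plus the version pinning can all live in the compose file; a rough sketch (service and image names are examples):

    services:
      app:
        image: someapp:1.4.2        # pinned version; upgrades are a one-line change
        user: "1000:1000"           # run as a non-root user instead of the default root
        read_only: true             # container filesystem becomes read-only
        cap_drop: [ALL]             # drop Linux capabilities the app doesn't need
        security_opt:
          - no-new-privileges:true  # block privilege escalation via setuid binaries
        restart: unless-stopped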
Imagine pairing with a mid/Sr and watching them scroll up 40 commands in the terminal and they are complaining that bash won't let them up-arrow 10 lines at a time. In this case, someone writes 5000 words about how they can't get certbot working with their docker setup. They would benefit a lot from working with someone who knows what they are doing.
The key evidence for this claim being wrong is where containerization was first developed. At least as far as I know, the first OS to introduce containers was FreeBSD, with its jails mechanism in 1999. FreeBSD is a Unix-based operating system that is quite decidedly non-enterprise.
Containers are categorically not meant for "Windows developers who don't know Unix". You still need to understand Unix in order to run containers efficiently, perhaps even more so. They may offer a lower barrier of entry to get something to kinda-sorta-work than the classic "wget https://foo.bar/foo.tar.gz && tar xvzf foo.tar.gz && cd foo && ./configure && make && make install", but that doesn't mean the technology is bad.
I think the OP is conflating several issues: container overuse (which does happen sometimes), certain tools being more complex than they need to be (ahem, certbot), lack of experience in configuring and orchestrating containers, and the fact that inspecting and debugging containers requires an additional set of tools or techniques.
I agree with one thing: you shouldn't be using containers for everything. If you install all your tools as containers, performance will suffer and interoperability will become harder. On the other hand, when I'm running a server, even my own home server, containers are a blessing. I used to run servers without containers before, and I - for one - do not miss this experience in the slightest.
“I need to build this software stack for Debian 10 on Arm64 but I am running Arch on x86” -> Docker container with a Debian cross-compilation toolchain and all is good. “But I need a modern compiler”? Install it in the container, problem solved, and you know the system deps match.
“This software is only validated on Ubuntu 24.04”, container.
Everyone has already mentioned having a dev environment that exactly matches prod, save for hardware: containers.
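For the Debian 10 / Arm64 case, the whole toolchain container can be a handful of lines (package names are from memory, so treat this as a sketch):

    FROM debian:10

    # cross toolchain targeting arm64, plus the usual build tools
    RUN apt-get update && apt-get install -y \
            crossbuild-essential-arm64 \
            build-essential cmake git \
        && rm -rf /var/lib/apt/lists/*

    ENV CC=aarch64-linux-gnu-gcc \
        CXX=aarch64-linux-gnu-g++

Then you mount your source tree into it (docker run --rm -v "$PWD":/src -w /src <image> make) and build without touching the host.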
I agree, for a lot of things we don't need containers, and running apps natively is so easy on modern Linux distributions
That's 5 minutes of my life I'm not getting back.
The Dockerfile is a description of a reproducible build, and a docker-compose.yml file documents how running services interact, which ports are exposed, any volumes that are shared, etc.
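Even a tiny compose file captures a lot of that knowledge; here is a generic sketch (not the author's setup, just an illustration):

    services:
      web:
        build: .
        ports:
          - "8080:80"            # host:container port mapping, documented in one place
        volumes:
          - app-data:/var/lib/app
        depends_on:
          - db
      db:
        image: postgres:15
        volumes:
          - db-data:/var/lib/postgresql/data

    volumes:
      app-data:
      db-data: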
It’s all too easy for config knowledge to be siloed in people. I got the impression that the Author prefers tinkering with pet servers.
It's not inherently reproducible but it can potentially be made so.
I’m also confused about the claim that there is no config file… Everyone I know uses docker compose; that's really the only right way to use docker. Using a single docker command is for testing or something; if you're actually using the app long term, use docker compose. Also, most apps I use do have a specific place in the docker compose file where you can set configuration.
I still have email server setups I would never dare try to touch with docker, but I know it is possible.
Like a lot of things, it has its uses, and it's really good at what it does.
https://katacontainers.io/
“If a program can be easily installed on Debian and (nowadays) installed on Arch Linux, that covers basically all Linux users.”