Production of hydroxyapatite from urine by a synthetic osteoyeast platform

https://www.nature.com/articles/s41467-025-59416-8
1•PaulHoule•1m ago•0 comments

Possible End to End to End Encryption: Come Help

https://berthub.eu/articles/posts/possible-end-to-end-to-end-come-help/
1•nickslaughter02•1m ago•0 comments

Realistic Text-to-Speech Arena

https://tts-arena.vercel.app/
1•copypirate•3m ago•0 comments

A scale bridging journey into the nanocosmos of a Ni-base superalloy [video]

https://www.youtube.com/watch?v=wYHch5QIWTQ
1•dgroshev•8m ago•0 comments

Make your configuration local, immutable and dumb

https://blog.natfu.be/managing-settings-in-python-projects/
1•NeutralForest•9m ago•0 comments

Argentina GDP growth fastest since 2022, though lagging forecasts

https://www.reuters.com/world/americas/argentinas-economy-expands-58-q1-year-on-year-2025-06-23/
1•bilsbie•9m ago•0 comments

Show HN: A Thermodynamic Model of System Health (Entropy and Adaptability)

https://github.com/projectvitalis/qih
1•projectvitalis•10m ago•0 comments

Gajim 2.3.2 has been released – GTK XMPP/Jabber Chat Client – Communication

https://gajim.org/posts/2025-07-02-gajim-2.3.1-2.3.2-released/
1•neustradamus•12m ago•0 comments

Show HN: The Recursive Mind – Why Intelligence Leads to Existential Dread

https://ashes09.substack.com/p/the-recursive-mind-why-intelligence
1•ahamal•13m ago•0 comments

UK puts out tender for space robot to de-orbit satellites

https://www.theregister.com/2025/07/06/uk_puts_out_tender_for_deorbit_mission/
1•rntn•14m ago•0 comments

'Vibe Coder' Who Doesn't Know How to Code Keeps Winning Hackathons in San Fran

https://developers.slashdot.org/story/25/07/06/0357235/vibe-coder-who-doesnt-know-how-to-code-keeps-winning-hackathons-in-san-francisco
2•dragonbonheur•14m ago•1 comments

An Afghan in Chicago Finds Success Selling Saffron from Back Home

https://www.wsj.com/business/entrepreneurship/saffron-importing-chicago-business-cbb2ee38
1•bookofjoe•15m ago•1 comments

Show HN: Best AI Tool Finder

https://bestaitoolfinder.com/
1•ShinMatsura•16m ago•0 comments

Overclocking LLM Reasoning: Monitoring and Controlling LLM Thinking Path Lengths

https://royeisen.github.io/OverclockingLLMReasoning-paper/
1•limoce•17m ago•0 comments

Guide to Ukraine's Long Range Attack Drones

http://www.hisutton.com/Ukraine-OWA-UAVs.html
5•Bluestein•18m ago•0 comments

I Left the U.S. for India and Built a $23M Burrito Business [video]

https://www.youtube.com/watch?v=3enHvs7VaN8
3•thisislife2•19m ago•0 comments

The most otherworldly, mysterious forms of lightning on Earth

https://www.nationalgeographic.com/science/article/lightning-sprites-transient-luminous-events-thunderstorms
3•Anon84•20m ago•0 comments

CellularLab – A Modern Android iPerf3 App with TCP/UDP Testing and AI Analysis

1•abhi5h3k•20m ago•0 comments

Show HN: Rulesync – Add support for .ignore and MCP servers

https://github.com/dyoshikawa/rulesync
1•dyoshikawa•23m ago•0 comments

The Bille, a Self-Righting Tetrahedron That Nobody Was Sure Could Exist

https://www.iflscience.com/meet-the-bille-a-self-righting-tetrahedron-that-nobody-was-sure-could-exist-79876
1•TMEHpodcast•23m ago•0 comments

Trae Agent

https://github.com/bytedance/trae-agent
1•carlos-menezes•25m ago•0 comments

Show HN: New cross framework omni-REPL

https://limber.glimdown.com
1•nullvoxpopuli•28m ago•0 comments

Stop killing games and the industry response

https://blog.kronis.dev/blog/stop-killing-games
3•LorenDB•30m ago•0 comments

Six months into congestion pricing, more cars are off the road

https://ny1.com/nyc/all-boroughs/traffic_and_transit/2025/07/05/six-months-into-congestion-pricing--more-cars-are-off-the-road--report-says
2•geox•35m ago•0 comments

Hybrid workers like me can have a little lapdesk, as a treat

https://www.creativebloq.com/tech/chairs-desks/hybrid-workers-like-me-can-have-a-little-lapdesk-as-a-treat
6•Bluestein•35m ago•1 comments

The Scam of Age Verification [18]

https://pornbiz.com/post/17/the_scam_of_age_verification
2•bananamango•36m ago•0 comments

Get the location of the ISS using DNS

https://shkspr.mobi/blog/2025/07/get-the-location-of-the-iss-using-dns/
23•8organicbits•37m ago•2 comments

David Suzuki on Climate Change

https://johncarlosbaez.wordpress.com/2025/07/06/david-suzuki-on-climate-change/
2•chmaynard•40m ago•0 comments

Aulico – Cursor for Traders

https://www.aulico.co
1•vasileapeste•40m ago•0 comments

Openevolve: Open-Source Implementation of AlphaEvolve

https://github.com/codelion/openevolve
1•simonpure•43m ago•0 comments

Basically Everyone Should Be Avoiding Docker

https://lukesmith.xyz/articles/everyone-should-be-avoiding-docker/
81•Fred34•10h ago

Comments

dontTREATonme•9h ago
I like docker because it makes it super easy to try out apps that I don’t necessarily know that I want and I can just delete it.

I’m also confused about the claim that there is no config file… everyone I know uses docker compose; that’s really the only right way to use docker. Using a single docker command is for testing or something. If you’re actually using the app long term, use docker compose. Also, most apps I use do have a specific place in the docker compose file where you can set configuration.
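For what it’s worth, a minimal compose file along these lines (service name, image, and paths are invented for illustration) keeps the app’s config in a plain file on the host:

```yaml
# docker-compose.yml -- service name, image, and paths are illustrative
services:
  myapp:
    image: example/myapp:latest
    ports:
      - "8080:8080"
    volumes:
      # the app reads its config from a file that lives on the host
      - ./config/app.conf:/etc/myapp/app.conf:ro
    restart: unless-stopped
```

Then `docker compose up -d` brings it up and `docker compose down` removes it cleanly.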

aspbee555•8h ago
it really does allow easy setup with compose, multiple containers, different versions, etc. I have been setting up linux servers and desktops for decades but docker made it way easier for a lot of things

I still have email server setups I would never dare try to touch with docker, but I know it is possible

like a lot of things it has its uses and it's really good at what it does

coderatlarge•8h ago
i love the convenience and ease-of-use but worry about the security compared to full-blown vm
unsnap_biceps•7h ago
Kata containers is a nice compromise. Each container is run as a microvm
coderatlarge•53m ago
thanks! so they achieve the convenience of docker with the added security of full-blown kvm? trading some perf and resource-use?

https://katacontainers.io/

kaptainscarlet•8h ago
The title should be the opposite imo. Why everyone should use docker
flkiwi•7h ago
After reading this, I assumed this was some level of parody:

“If a program can be easily installed on Debian and (nowadays) installed on Arch Linux, that covers basically all Linux users.”

gryn•6h ago
in addition, docker compose also supports reading env variables / .env files from outside, which you can use for configuration inside the docker compose file.
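As a sketch (variable and service names invented), compose interpolates `${VAR}` from the environment or from a `.env` file sitting next to the compose file:

```yaml
# a .env file next to docker-compose.yml is read automatically, e.g.:
#   APP_PORT=8080
#   DB_PASSWORD=changeme
services:
  myapp:
    image: example/myapp:latest
    ports:
      - "${APP_PORT}:8080"            # interpolated from .env
    environment:
      - DB_PASSWORD=${DB_PASSWORD}    # passed into the container
```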
adastra22•9h ago
This has rather strong “old man yelling at clouds” vibes.

OP: Learn docker and it stops being an “impenetrable wall.” Face it, you don’t want to use docker (or podman) because you are set in your ways. That’s fine, but it is not an argument for anyone else.

mrbluecoat•8h ago
Agreed. Stopped reading after "or containerization more generally"
edfletcher_t137•8h ago
OP's YT is full-on "old man yelling at clouds" https://www.youtube.com/c/lukesmithxyz
neuronexmachina•7h ago
Almost literally so with his "Boomer Rants in Woods" series.
0xDEAFBEAD•6h ago
Someone's gotta do it.
palmfacehn•8h ago
When I encounter a README/INSTALL that advises Docker, I start to suspect that the package is a mess. I'm sure there are legitimate usages within enterprise-y scenarios, but it has commonly become a way to paper over other issues.
lqstuart•8h ago
This is definitely true. You see it constantly in deep learning applications, where eg NVIDIA’s fancy fp8 stuff needs C++11 ABI, so they need to compile torch from source, which means everything with C++ dependencies on torch needs to be recompiled, and eventually it’s much easier to ship a container image. It can be done with an Ansible playbook, sure, if you don’t have to ever do anything else with your machine.
kaptainscarlet•8h ago
One of the best things about docker that is not mentioned often enough is how it keeps your host machine clean.
palmfacehn•7h ago
Isn't it also true that not every library needs to be deployed system-wide?

I find that many of these poorly maintained build systems have very little to do with native code. Often it is a node.js server + frontend or in the case of AI, python. Some of this seems related to the current norms in those ecosystems.

markerz•8h ago
I think one big issue is fundamental build differences between Linux and macOS. So many things build beautifully for one platform and not another, and it’s really frustrating to just want to use some piece of software that is not tested on another platform because the devs just don’t have a Mac.

I agree that it papers over underlying portability issues, but I personally don’t want to hassle some generous maintainer who built a good piece of working software but has no macOS support.

anonzzzies•8h ago
Yep, people here hate organically built Lisp save-and-die images, but many open source and in-company repositories with docker are very similar: they got it working on their computer inside docker through manually changing, rearranging and writing scripts (many of which reverse things from other scripts, or update packages installed higher up in the same script, etc.) and then just published it because it works. So trying 'make' or whatever outside the container simply errors out, since it won't work anywhere but in the very specific confines of the linux inside the container. The difference is pretty clear: if the readme starts with how to run the project without docker but also offers docker, the setup is clean. If it starts with docker and then, maybe, points to a separate docs dir with random 'notes' on how to run locally, you know it will be a mess you probably never want to try (outside of running docker).
sdenton4•7h ago
"We paid zero attention to the absolute hell of dependencies we were creating for ourselves, and eventually decided to just ship a docker image of the smart intern's laptop "
add-sub-mul-div•8h ago
Guy who has only heard internet catchphrase arguments encountering an unpleasant idea: "Getting a lot of 'old man yelling at clouds' vibes from this..."
kordlessagain•8h ago
As an expert at yelling at clouds, I can say I love Docker and couldn't/wouldn't do development without it. Besides, most of my cloud stuff nowadays is targeting Cloud Run, so why shouldn't I use it to build?

Now I say all that, is there another solution I should be looking for doing similar things? Maybe this old man has missed something easier to use.

I do hate how Docker chews up my drive with images though...

sanex•8h ago
"Everyone else needs to learn and use Linux so I don't have to learn Docker"
dima55•8h ago
Kinda. Yeah. Ignorance really isn't a virtue, and at some point bending over backwards to support people that don't want to learn things is counterproductive.
bawolff•8h ago
Except based on this article, i don't think the author knows linux very well either.
LelouBil•7h ago
Yeah, what I don't understand is that he seems to completely ignore the fact that a docker container is still running linux, just isolated in a new filesystem (plus other technologies I don't know a lot about, like namespaces and such).

So the author thinks it's better for users to do (sometimes tedious) steps to get an application or a set of applications running, just so they "know how to use linux", while ignoring the fact that Docker/containerization's primary use case is on the developer side, and the developer needs to know linux to write a working Dockerfile.

MattGaiser•8h ago
> It’s no easier to setup a Docker file than a installation shell script, even one that runs on multiple platforms.

I would be very curious to see this done in a robust way. Bash vs PowerShell. All the various package managers on those operating systems. Permissions, since the programs will be going onto the OS itself.

When I tried this, granted as a very junior developer, I did not succeed.

rkagerer•7h ago
Did it for decades before Docker was a thing. Even for projects with lots of complex dependencies, some tied into the OS. Basically you just need a well documented, thorough and up to date set of steps for setting up the environment.

This is something you should still have even if using Docker.

I don't knock Docker for making that easy and "tied up with a bow and ribbon" for its users (that's great!), but do agree there are times when you really don't need the extra abstraction layer.

slashdave•8h ago
Should this be titled "Basically everyone who doesn't know how to use Docker should be avoiding Docker"?
MadnessASAP•8h ago
Only people who don't know how to use Linux use Docker, because Docker makes Linux easy. This is bad because I don't know how to use Docker.

The Author

udev4096•7h ago
I mean most people still see it as a blackbox. The magic is in namespaces, cgroups, etc. I would recommend reading this series: https://iximiuz.com/en/posts/container-learning-path/
Freedom2•7h ago
Agreed. I really tire of flat, authoritative statements with no room for context or clarification. Perhaps that's what's needed to succeed in a VC world, but it's the easiest way to get me to dismiss the author's opinion because they leave no room for discussion.
hadlock•8h ago
This should have been titled "basically everyone should be avoiding containers". I thought this was an article about the company Docker
amrocha•8h ago
Very poor article. It doesn't even acknowledge the main reason people use containers, which is to reliably set up, manage, and replicate an environment regardless of the machine it’s running on.

And this guy runs a bitcoin payment service? Is this the technical level of the people writing critical payment code in the bitcoin ecosystem? Yikes

wrs•8h ago
I have been using Unix since 1983, and Linux since version <1.0, and I would respectfully suggest that the author has missed the point of Docker.

However, if you want to use a shell script for setup instead of a Dockerfile, and don’t mind terminating and recreating VMs when you change anything, and your DNS is set up well, then yeah, that can work almost as well. I do that, sometimes.

pipeline_peak•8h ago
The only problem I have with containerization is that it’s not optimal. You’re adding all sorts of unnecessary overhead, often to avoid fixing an underlying problem.

Unfortunately, the real software world doesn’t solve underlying problems; it just wants things up and running as soon as possible. So containerization has proved to be pretty useful.

From a hacker perspective, it’s just boring, in the same way I feel about AI. It takes the fun out of crafting software.

comradesmith•8h ago
> You might say that doing such a little operation becomes easier after being more familiar with containerization—I’m sure that’s true, absolutely.

If you admit this, then why do you go on to write against docker in such an authoritative tone?

I don’t think you understand docker.

SeanAnderson•8h ago
That... was not a very convincing article! It came across as a frustrated op-ed where the author intentionally focused on the negatives rather than steelmanning their own argument. Any potential positives were handwaved as out of scope.

VSCode devcontainers are awesome. They are my default way of working on any new project.

Being able to blow away a container and start with a fresh, reproducible setup - at any time - saves so many headaches.

neuronexmachina•7h ago
It kind of seemed like this might have been the first time the author tried doing anything with Docker containers. And yeah... if you're trying to work with and modify a Docker container the same way you'd work with a VM, you're going to have a bad time.
ranger207•8h ago
> There are basically only two “real” reasons to use Docker or containerization more generally:

> 1. People who do not know how to use Unix-based operating systems or specifically GNU/Linux.

> 2. People who are deploying a program for a corporation at a massive enterprise scale, don’t care about customizability and need some kind of guarantor of homogeneity.

Unix is only around because of its use at massive enterprise scale. Very few people were using Unix instead of DOS (or Mac OS or Windows or whatever) for their home PCs; it only got popular and people learned how to use it and later Linux because of its use in business. Nowadays, Docker is the standard packaging system at massive enterprise scale. As such, you should learn to use it

Supermancho•7h ago
> Very few people were using Unix instead of DOS (or Mac OS or Windows or whatever) for their home PCs; it only got popular and people learned how to use it and later Linux because of its use in business

I would say this part is correct.

Your first statement is incorrect as phrased, but I understand what you meant. Granted, you would have to wipe out all the cloud providers running flavors of unix, plus most phones and Macs, to reduce the footprint. That being said, it's unpopular as a desktop OS. Phones and Macs hide it so well that most people are unaware of the underlying OS.

My first Linux machine was on my work desk in 1998, while we were running racks of UltraSPARCs in production.

I use docker extensively for local development in all my projects at home and at work. This guy is wrong about multiple things, eg "Well, if you’re expecting Docker to have a file-system easily accessible, you’re wrong"

I can access my docker OS from: docker exec -it containername bash (allowing that it has bash).

If the container OS has autocomplete and other GNU tools and features, you get all the functionality. If you want to build that image, or even upgrade the one you have (most containers have access to package management), you end up with a new image you can use however you like, which might include running more than one service in the same container. It's just like using a script on another unix machine, except without having to set up the physical networking or pay a host.
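Concretely (container name invented), getting an ordinary shell inside a running container is one command, assuming the image ships a shell:

```
# open an interactive shell inside the running container
docker exec -it containername bash

# from inside, normal tools work against the container's filesystem
find /etc -name '*.conf'
```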

It's very UNIX-y to provide single entry points to services and run them in relative isolation (changes to one container do not affect the others) by default.

unscaled•6h ago
macOS (since version 10) is Unix. You can say most macOS users are not using the terminal or that back in the 1990s and 1980s, all the popular desktop OSes weren't based on Unix and that would probably be more accurate.

The massive enterprise scale part is more complicated.

First of all, we need to clarify that the "people who should know how to use Unix" here are developers and system administrators. Most people don't need to know Unix, and that's fine. You sometimes see people (I get the feeling the OP might be lowkey one of them) insisting that everyone should be running Linux and doing everything through the terminal. This is like saying everyone should be driving manual transmission, baking their own bread, growing vegetables in their backyard, building their own computer from parts, sewing their own clothes... you get the story. All of these things can be cool and rewarding, but we lack the time and resources to become proficient at everything. GUI is good for most people.

Now the deal with developers using Unix is a much more complex story. Back in the 1970s Unix wasn't very enterprise-y at all, but gained traction in universities and research labs and started spreading to the business world. Even well into the 1980s, the "real" enterprise was IBM mainframes, with Unix still being somewhat of a rebel, but it was clearly the dominant OS for minicomputers, which were later replaced by (microcomputer-sized but far more expensive) servers and workstations. There were other competitors, such as Lisp Machines and BeOS, but nothing ever came close to taking over Unix.

Back in the 1980s, people were not using Unix on their home computers, because their home computers were _just not powerful enough_ to run Unix. Developers that had the money to spare certainly did prefer an expensive Unix workstation. So larger (for that time) microcomputer software vendors often used Unix workstations to develop the software that was later run on cheaper microcomputer OSes. Microsoft famously used its own version of Unix (Xenix) during the 1980s as its main development platform.

This shows the enterprise made a great contribution to popularizing Unix. Back in the 1980s and 1990s there were a few disgruntled users[1] who saw the competition dying before their eyes and had to switch to the dominant Unix monoculture (if by "monoculture" you mean a nation going through a 100-sided, 20-front post-apocalyptic civil war). But nobody complained about having to ditch DOS and use an expensive Unix workstation, except, perhaps, for the fact that their choice of games to play got a lot slimmer.

This is all great and nice, but back in the 1990s most enterprise development moved back to Windows. Or maybe it's more precise to say the industry grew larger and new developers were using Windows (with the occasional command prompt), since it was cheap and good enough. Windows was very much entrenched in the enterprise, as was Unix, but their spheres of market dominance were different. There were two major battlegrounds where Windows was gaining traction (medium-sized servers and workstations). Eventually Windows almost entirely lost the servers but decisively won the workstations (only to lose half of them again to Apple later on). The interesting part is that Windows was slowly winning over the enterprise versions of Unix, but eventually lost to the open-source Linux.

Looking at this, I think the explanation that Unix won over DOS/Windows CMD/PowerShell (or Mac OS 9 if we want to be criminally anachronistic) is waaaay too simplistic. Sure, Unix's enterprise dominance killed Lisp Machines and didn't leave any breathing space for BeOS, but that's not the claim. DOS was never a real competitor to Unix, and when it comes to newer versions of Windows, they were probably the dominant development platform for a while.

I think Unix won over pure Windows-based flows (whether with GUI or supplemented by windows command-line and even PowerShell) because of these things:

1. It was the dominant server OS (except for a short period where Windows servers managed to dominate a sizable chunk of the market), so you needed to know Unix if you wrote server-side code, and it was useful to run Unix locally.

2. Unix tools were generally more reliable. Back in the 1990s and 2000s, Windows did have some powerful GUI tools, but GUI tools suffer when it comes to reproducibility, knowledge transfer and productivity. It's a bit counterintuitive, but it's quite obvious if you think about it: having to locate some feature in a deeply nested menu or settings dialog and turn it on is more complex than just adding a command line flag or setting an environment variable.

3. Unix tools are more composable. The story of small tools doing-one-thing-well and piping output is well known, but it's not just that. For instance, compare Apache httpd which had a textual config file format to IIS on Windows which had proprietary configuration database which often got corrupted. This meant that third-party tool integration, version control, automation and configuration review were all simpler on Apache httpd. This is just one example, but it applies to the vast majority of Windows tools back then. Windows tools were islands built on shaky foundations, while Unix tools were reliable mountain fortresses. They were often rough around the edges, but they turned out to be better suited for the job.

4. Unix was always dominant in teaching computer science. Almost all universities taught Unix classes and very few universities taught Windows. The students were often writing their code on Windows and later uploading their code to a Unix server to compile (and dealing with all these pesky line endings that were all wrong). But they did have to familiarize themselves with Unix.

I think all of these factors (and probably a couple of others) brought in the popularization and standardization of Unix tools as the basis for software development in the late 2000s and early 2010s.

[1] See the UNIX-Hater's Handbook: https://web.mit.edu/~simsong/www/ugh.pdf

leakycap•8h ago
Docker has so much overhead, both in complexity and technically. I hear people recommend it for simplicity all the time and assume they have out-of-date, insecure setups... Docker containers require more setup to secure and back up, in my experience.
chii•7h ago
> require more setup to secure

docker is not secure. It has no "real" security boundary, and any malicious actor could have you run a docker image that is just as much malware as an executable. Like locks on doors, it just keeps out the honest people. So I say effort spent trying to secure it is wasted.

> backup

if you have data in the docker instance, you have to use volume mounts, and then back up that volume mount. I'd say it's easier to back up than an installed app, since you can't be sure an installed app didn't write its data somewhere else!
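One common pattern for that (volume and file names invented) is to tar the named volume through a throwaway container:

```
# archive the named volume "appdata" into the current directory
docker run --rm \
  -v appdata:/data:ro \
  -v "$PWD":/backup \
  busybox tar czf /backup/appdata-backup.tar.gz -C /data .
```

Restoring is the same idea with the tar flags reversed.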

periodjet•8h ago
Very unconvincing. With container images, you can use docker compose or k8s to declare and deploy your entire service architecture. That is… massively useful.
arjie•8h ago
For the most part people decide to create Docker containers like they're deploying everything to heavy prod so they chop the image down super narrow. Just solve for your own use-case. I run my blog and other things on a homeserver with a Cloudflare reverse proxy in front of it and I don't use `docker` strictly but I do use systemd quadlets and podman and it's the same thing.

If you're upset with your tooling in your Docker image, just make it so you can become root in it, make sure it has debug tooling, and so on and so forth. Nothing stops you from running `updatedb` and `locate` in it. It's just an overlayfs for the fs nothing fancy.

I understand somewhat the urge for this. There is some containerization overhead and at a small prop trading shop we wouldn't do it (apart from the annoyance of plumbing onload etc. I never figured out how to control scheduling properly) but for most things containers are a godsend.
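For reference, a quadlet is just a small unit file; a sketch along these lines (image, port, and paths invented) is roughly all podman needs:

```
# ~/.config/containers/systemd/blog.container
[Unit]
Description=Blog container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=%h/blog-site:/usr/share/nginx/html:Z

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it starts like any other service with `systemctl --user start blog.service`.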

udev4096•7h ago
I am not entirely sure but afaik overlayfs is only if you use docker volumes and not bind mounts
s_ting765•8h ago
Old man yells at modern packaging solutions.
kousthub•8h ago
Why would you edit or delete files inside a running container? It’s ephemeral and supposed to be used stateless. You can attach volumes from host system if you need persistence.
dima55•7h ago
Because there are different use cases. About 100% of the people I talk to that use docker, are using it to make a separate set of dependencies available in a possibly-different distribution. For THOSE people, the ephemeral, stateless nature of docker is a huge detriment in usability, and a chroot would be far more appropriate. I see docker users waste countless hours working around its statelessness. All the time. YMMV
moritzwarhier•5h ago
> ephemeral, stateless nature of docker is a huge detriment in usability, and a chroot would be far more appropriate

But ephemeral changes (e.g. inside the running container) are the opposite of statelessness in the comment you are responding to?

And if you have required intricate custom changes in mounted host volumes (config, ...) that are not living alongside the compose file in the same repository, you can have "statefulness" that survives killing the containers.

hakfoo•7h ago
At least one use case I've seen for Docker is to replicate the massively microservice oriented system. If your app is deployed across 200 different containers in prod, you're going to be testing it by spinning up the same basic containers with Docker in dev. That means a lot of incremental changes-- trivial stuff like adding transient logging or bypassing default flows-- inside the container as part of the development process.

Then you get into politics: you might need change XYZ for your feature, but you don't own that common image and have to rely on someone else to deploy it, so until then it's manual patches.

kylegalbraith•7h ago
This reads less like an article about the downsides of Docker and more like an article about how someone doesn’t fully understand Docker.

Not that I think Docker should always be used. It’s a simple piece of tech on the surface but explodes in complexity the more complicated you try to get.

All that said, this article feels detached from reality.

cmdrk•7h ago
Containerization is amazingly great for scientific computing. I don’t ever want to go back to doing the make && make install dance and praying I’ve got my dependency ducks in a row.
mike_d•7h ago
The only real feature of Docker is the ability to keep unmaintained software running as the world around it moves forward. Academics could do the same thing by just distributing read only VMs as well.
maxk42•7h ago
Containerization is great. Docker != containerization. Most people don't even know it runs qemu under the hood.
LelouBil•7h ago
Do you have info on this?

I only found that it can use qemu to build or run images for a different cpu architecture than your computer's.

Why would it use qemu? Docker does containerization, not virtualization.

mootoday•7h ago
Wasm Components, deployed on wasmCloud.

IMHO, that's the way to go. Instead of hundreds of megabytes or even gigabytes, we're talking kilobytes, sometimes megabytes for each unit of compute.

Actually sandboxed environments per component.

Add/remove permissions of who can execute what at runtime.

But then again, I'm biased towards modern tech and while it often turns out I was right on the money, it's not always the case.

Do your own research and such, but FYI about wasmCloud.

udev4096•7h ago
Docker gives me at least some form of isolation. Yes, I know container escapes are possible, but I run gvisor on top of it, which is a strong sandbox. If I were just running things as a systemd service as a user, all an attacker needs is a linux LPE, and those are in abundance.
routelastresort•7h ago
Containerization is so widespread... from almost every single programming book and beginner class to the foundation for most of the Internet running today. As a technology, it's very easy to teach. I suggest keeping an open mind with relevant and ubiquitous technology, and at least coming up with a compelling alternative for any of the wildly popular use cases that have made it commodity tech at this point.

Lack of real moderation and reddit tier opinions like this are why I no longer visit this site on a daily or even regular basis.

kesor•7h ago
Skill issue
rpcope1•7h ago
Among the other wrong/dumb things in the post, equating all containers with "Docker" reveals a lot of ignorance. You can have your nice existing stuff, and still use containers (and reap the benefits) if you just use LXC. You get the advantages of not needing to pollute a host with everything (including potentially incompatible dependencies), incorporating cgroups and namespaces, and ease of migration (it's really easy to bring LXC containers to a new host), while not having to buy wholesale into all of the parts of Docker you don't want.

This is (basically) Burger King: you can have it your way, you just have to actually learn some new shit once in a while and not just refuse to ever pick up or learn anything new.

happytoexplain•7h ago
I love docker. It's indispensable. But it's bitterly hilarious that we are in a place where that's true. I hope in 20 years we have figured out the problems docker solves without another "wrap the whole thing in an abstraction layer" solution. But I had hoped we'd be there by 2010.
kmoser•7h ago
> Well, if you’re expecting Docker to have a file-system easily accessible, you’re wrong—in fact, that’s “the point.” I can’t use typical commands like updatedb/locate/find to find what I need. I have to run a command with a massive prefix specific to that container. I don’t have tab completion when running Docker container commands, so when I inevitably mistype while searching for the file or attempting to delete it, I have to re-edit a multi-line command.

Am I missing something? Isn't this as easy as:

docker exec -it --user <username> <container> /bin/bash

I used this just yesterday to get shell access to a Docker container. From there I have full access to the filesystem.

LelouBil•7h ago
No, you're not missing anything; aside from the small fraction of containers that are built "FROM scratch" and don't have a shell binary, you can do exactly what you wrote.

The author didn't seem to research how to use Docker before writing this.

blackjack_•7h ago
Yes, if they want to edit the running container's config, that is exactly what to do. Also, if you are just using a mounted volume for the configs, you don't even have to go that far: you can just edit the mounted volume on the host machine and it will show up immediately in the container.

However, I would think you would want to edit the Dockerfile instead so that you fix it every time you restart the container.
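The bind-mount approach described above might look like this in a compose file (the service name and paths are hypothetical, not from the post):

```yaml
services:
  btcpay:                 # hypothetical service name
    image: example/btcpay:latest
    volumes:
      # Edits to ./config on the host show up inside the
      # container immediately -- no exec or rebuild needed
      - ./config:/app/config
```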

But I think the whole point of this post is that the author has no idea how docker works and is mad about having to learn docker things, so nobody should use them. Never mind the entirety of cloud infrastructure running in containers and doing amazing things. Never mind being able to duplicate state across tons of different servers at the same time. It shows that the author isn't into making infrastructure at scale, and has no idea how incredible docker has been to software development, CI/CD pipelines, and deployment / release infrastructure as a whole.

LelouBil•7h ago
The author clearly doesn't know how to use Docker and blames their issues on the tool, and even on the concept of containerization itself!

Regarding a file system: in most docker containers you should be able to run "docker exec -ti <id> sh" and get a shell inside the container, where you *have autocomplete* and can *run linux commands like locate*.

Regarding configuration files, that's an application issue. 99% of the applications I run with docker use configuration files, because that's just how you manage software. So either your BTCPay thing doesn't have a configuration file, in which case it would be the same if you didn't use Docker, or it has one and you didn't know you could mount it inside the container.

And regarding the "fake" reasons :

> It’s no easier to setup a Docker file than a installation shell script, even one that runs on multiple platforms.

Um, no? Between "knowing the environment my code runs in" and "not knowing the environment my code runs in", the first option is obviously better and easier to reason about.

> Containers can only be “easier to manage” when they strip away all of the user’s ability to manage in the normal unix-way, and that is relatively unmissed.

What do you mean by "the unix way"? The container is a process; you can manage the process the unix way.

The focus is on the process's environment, which is better if the end user *doesn't* have to manage it.

> Containerization makes software an opaque box where you are ultimately at the mercy of what graphical settings menus have been programed into the software. It is the nature of containers that bugs can never been fixed by users, only the official development team.

I think you just don't know how to use Docker to edit your application's files, but it's really as easy as editing files on linux, because *the container is really just using a linux filesystem*.

> People who do not know how to use Unix-based operating systems or specifically GNU/Linux.

Did you miss the fact that you need to know how to use linux to write a working Dockerfile? Because it still runs linux!

tasuki•7h ago
How about "this has too many dependencies which are tricky to set up and I think they might change under me and I won't be able to run the project anymore"?

I've created a (working) docker image, and even if the stuff in the dockerfile breaks, I still have the image and can run the damn thing.

rsolva•7h ago
Everybody should use Podman instead :P
kelvinjps10•7h ago
Why do all of the lukesmith posts get flagged, even the landchad website that was just tutorials?
stephenbez•7h ago
Luke, if you want to interactively run commands like “find” like you are used to, you can run: “docker exec -it my_container bash”

Though there are many reasons why you wouldn’t want to go in and delete a file since that won’t persist or be reproducible.

zerof1l•6h ago
I get the impression that the person who wrote this article doesn't know much about docker. Running 2 apps and a certbot can be done without containers easily. Try running 20 apps, some of which depend on having the same dependency but on a different version.

Regarding security, it depends on how you set up your containers. If you just run them all with default settings - the root user; and give them all the permissions, then yeah, quite insecure. But if you spend an extra minute and create a new non-root user for each container and restrict the permissions then it's quite secure. There are plenty of docker hardening tutorials.
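The non-root hardening pattern mentioned here is often just a few Dockerfile lines. A minimal sketch (base image, username, and paths are illustrative, not a definitive hardening guide):

```dockerfile
FROM debian:bookworm-slim

# Create an unprivileged user instead of running as root
RUN useradd --create-home --shell /usr/sbin/nologin appuser

# Give that user ownership of only what the app needs
COPY --chown=appuser:appuser ./app /home/appuser/app

# Everything from here on (including CMD) runs unprivileged
USER appuser
CMD ["/home/appuser/app/run"]
```

Combining this with dropped capabilities and a read-only root filesystem at run time tightens things further.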

Regarding ease of setup, it took me a while to learn docker. Setting up my first containers took a very long time. But now it's quick and easy. I have templates for the docker compose file and I know how to solve all the common issues.

Regarding ease of management, it's very easy. Most of my containers are set up once and forgotten. Upgrading an app just requires changing the version in the docker-compose file.
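The upgrade workflow described here usually amounts to a one-line tag bump in the compose file (image name and versions are examples):

```yaml
services:
  app:
    # Changing 2.1.0 -> 2.2.0 and re-running `docker compose up -d`
    # pulls the new image and recreates the container in place
    image: example/app:2.1.0
    restart: unless-stopped
```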

khaki54•6h ago
Hmmm, this reads like the author understands neither Docker nor Linux; many of the issues they run into are just stuff they don't know the right approach to tackling.

Imagine pairing with a mid/Sr and watching them scroll up 40 commands in the terminal and they are complaining that bash won't let them up-arrow 10 lines at a time. In this case, someone writes 5000 words about how they can't get certbot working with their docker setup. They would benefit a lot from working with someone who knows what they are doing.

unscaled•6h ago
> There are basically only two “real” reasons to use Docker or containerization more generally:
> 1. People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
> 2. People who are deploying a program for a corporation at a massive enterprise scale, don’t care about customizability and need some kind of guarantor of homogeneity.

The key evidence against this claim is where containerization was first developed. As far as I know, the first OS to introduce containers was FreeBSD with its jails mechanism in 1999. FreeBSD is a Unix-based operating system that is quite decidedly non-enterprise.

Containers are categorically not meant for "Windows developers who don't know Unix". You still need to understand Unix in order to run containers efficiently, perhaps even more so. They may offer a lower barrier of entry to get something to kinda-sorta-work than the classic "wget https://foo.bar/foo.tar.gz && tar xvzf foo.tar.gz && cd foo && ./configure && make && make install", but that doesn't mean the technology is bad.

I think the OP is conflating several issues: containers being overused (which does happen sometimes), certain tools being more complex than they need to be (-ahem- certbot), lack of experience in configuring and orchestrating containers, and the fact that inspecting and debugging containers requires an additional set of tools and techniques.

I agree with one thing: you shouldn't be using containers for everything. If you install all your tools as containers, performance will suffer and interoperability will become harder. On the other hand, when I'm running a server, even my own home server, containers are a blessing. I used to run servers without containers before, and I - for one - do not miss this experience in the slightest.

jpc0•5h ago
Containers correctly used make things much easier.

“I need to build this software stack for Debian 10 on Arm64 but I am running arch on x86” -> Docker container with a Debian cross compilation toolchain and all is good. “But I need a modern compiler” -> install it in the container, problem solved, and you know the system deps match.

“This software is only validated on Ubuntu 24.04”, container.

Everyone has already mentioned having a dev environment that exactly matches prod, save the hardware: containers.
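The Debian 10 cross-compilation case above could be sketched roughly like this, assuming Debian's standard packaged aarch64 cross toolchain (the exact packages and build invocation will vary by project):

```dockerfile
FROM debian:10

# Cross toolchain targeting 64-bit Arm, plus build basics
RUN apt-get update && apt-get install -y \
    gcc-aarch64-linux-gnu \
    g++-aarch64-linux-gnu \
    make \
    && rm -rf /var/lib/apt/lists/*

# Building inside this container with CC=aarch64-linux-gnu-gcc
# produces binaries targeting Debian 10 on arm64, regardless of
# what distro/arch the host is running.
```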

huksley•5h ago
So it is not about Docker and its licensing, but about containerization and its added complexity.

I agree: for a lot of things we don't need containers, and running apps natively is so easy on modern Linux distributions.

rawkode•3h ago
The irony here is that the author doesn't know how to use containers (there's nothing specifically Docker here), yet seems to portray a level of Unix knowledge ...

That's 5 minutes of my life I'm not getting back.

hn_throw2025•3h ago
Great points in this thread, but I would say another advantage of Docker is that of documentation.

The Dockerfile is a description of a reproducible build, and a docker-compose.yml file documents how running services interact, which ports are exposed, any volumes that are shared, etc.
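A compose file like the following (services, ports, and images invented for illustration) doubles as operational documentation — the exposed ports, shared volumes, and service dependencies are all stated in one place:

```yaml
services:
  web:
    build: .
    ports:
      - "8080:80"          # host:container
    depends_on:
      - db                 # startup ordering is explicit
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:                  # named volume holding db state
```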

It’s all too easy for config knowledge to be siloed in people. I got the impression that the Author prefers tinkering with pet servers.

rascul•58m ago
> The Dockerfile is a description of a reproducible build

It's not inherently reproducible but it can potentially be made so.
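One common way to push a Dockerfile toward reproducibility is pinning: reference the base image by digest rather than a mutable tag, and pin exact package versions (the digest and version below are placeholders, not real values):

```dockerfile
# A tag like debian:12 can point to different images over time;
# a digest always refers to the exact same bits
FROM debian@sha256:<base-image-digest>

# Pin exact package versions so a rebuild installs the same thing
RUN apt-get update && apt-get install -y \
    curl=<pinned-version>
```

Even then, `apt-get update` reaches out to live mirrors, so full reproducibility needs a snapshotted package source as well.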