
Show HN: Unregistry – “docker push” directly to servers without a registry

https://github.com/psviderski/unregistry
726•psviderski•7mo ago
I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts — Docker's own image storage.

So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.

I've built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

Comments

koakuma-chan•7mo ago
This is really cool. Do you support or plan to support docker compose?
psviderski•7mo ago
Thank you! Can you please clarify what kind of support you mean for docker compose?
fardo•7mo ago
I assume that he means "rather than pushing up each individual container for a project, it could take something like a compose file over a list of underlying containers, and push them all up to the endpoint."
koakuma-chan•7mo ago
Yes, pushing all containers one by one would not be very convenient.
baobun•7mo ago
The right yq|xargs invocation on your compose file should get you to a oneshot.
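For example, something along these lines might do it (a sketch, assuming mikefarah's yq v4 and that each service declares an `image`):

    # push every image referenced in the compose file to the server
    yq '.services[].image' docker-compose.yml |
      xargs -n1 -I{} docker pussh {} user@server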
koakuma-chan•7mo ago
I would prefer docker compose pussh or whatever
psviderski•7mo ago
That's an interesting idea. I don't think you can create a subcommand/plugin for compose but creating a 'docker composepussh' command that parses the compose file and runs 'docker pussh' should be possible.

My plan is to integrate Unregistry in Uncloud as the next step to make the build/deploy flow super simple and smooth. Check out Uncloud (link in the original post), it uses Compose as well.

djfivyvusn•7mo ago
You can wrap docker in a bash function that passes through to `command docker` when it's not a compose pussh command.
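A rough sketch of such a wrapper (the `compose pussh` subcommand is invented here; `docker compose config --images` lists the images a compose file references):

    docker() {
      if [ "$1" = "compose" ] && [ "$2" = "pussh" ]; then
        # push every image named in the compose file to the given host
        command docker compose config --images |
          xargs -n1 -I{} command docker pussh {} "$3"
      else
        command docker "$@"
      fi
    }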
jillesvangurp•7mo ago
Right now, I use ssh to trigger a docker compose restart that pulls all the latest images on some of my servers (we have a few dedicated hosting/on premise setups). That then needs to reach out to our registry to pull images. So, it's this weird mix of push and pull that ends up needing a central registry.

What would be nicer instead is some variation of docker compose pussh that pushes the latest versions of local images to the remote host based on the remote docker-compose.yml file. The alternative would be docker pusshing the affected containers one by one and then triggering a docker compose restart. Automating that would be useful and probably not that hard.
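Something like the following could be a starting point (a sketch; the host and remote path are placeholders):

    # push only the missing layers of each compose image, then restart remotely
    docker compose config --images |
      xargs -n1 -I{} docker pussh {} user@server
    ssh user@server 'cd /srv/app && docker compose up -d'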

felbane•7mo ago
I've built a setup that orchestrates updates for any number of remotes without needing a permanently hosted registry. I have a container build VM at HQ that also runs a registry container pointed at the local image store. Updates involve connecting to remote hosts over SSH, establishing a reverse tunnel, and triggering the remote hosts to pull from the "localhost" registry (over the tunnel to my buildserver registry).

The connection back to HQ only lasts as long as necessary to pull the layers, tagging works as expected, etc etc. It's like having an on-demand hosted registry and requires no additional cruft on the remotes. I've been migrating to Podman and this process works flawlessly there too, fwiw.
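The core of that flow is roughly the following (a sketch; the port and image name are assumptions, and localhost:5000 is treated as an insecure registry by default so no TLS setup is needed):

    # expose the HQ registry on the remote host's localhost:5000 for the
    # duration of the pull, then retag
    ssh -R 5000:localhost:5000 user@remote-host \
      'docker pull localhost:5000/my-app:latest &&
       docker tag localhost:5000/my-app:latest my-app:latest'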

jlhawn•7mo ago
A quick and dirty version:

    docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!
rgrau•7mo ago
I use a variant with ssh and some compression:

    docker save $image | bzip2 | ssh "$host" 'bunzip2 | docker load'
selcuka•7mo ago
If you are happy with bzip2-level compression, you could also use `ssh -C` to enable automatic gzip compression.
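That is, letting SSH compress the stream instead of bzip2:

    docker save my-app:latest | ssh -C user@server 'docker load'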
selcuka•7mo ago
That method is actually mentioned in their README:

> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server

alisonatwork•7mo ago
On podman this is built in as the native command podman-image-scp[0], which perhaps could be more efficient with SSH compression.

[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...
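Usage is roughly a one-liner (a sketch per the man page above; the `user@server::` destination form addresses the remote host's default image store):

    podman image scp my-app:latest user@server::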

travisgriggs•7mo ago
So with Podman, this exists already, but for docker, this has to be created by the community.

I am a bystander to these technologies. I’ve built and debugged the rare image, and I use docker desktop on my Mac to isolate db images.

When I see things like these, I’m always curious why docker, which seems so much more bureaucratic/convoluted, prevails over podman. I totally admit this is a naive impression.

password4321•7mo ago
> why docker, which seems so much more bureaucratic/convoluted, prevails over podman

First mover advantage and ongoing VC-funded marketing/DevRel

djfivyvusn•7mo ago
Something that took me 20 years to learn: Never underestimate the value of a slick gui.
psviderski•7mo ago
Ah neat, I didn't know that podman has 'image scp'. Thank you for sharing. Do you think it was more straightforward to implement this in podman because you can easily access its images and metadata as files on the file system, without having to coordinate with any daemon?

Docker and containerd also store their images using a specific file system layout and a boltdb for metadata but I was afraid to access them directly. The owners and coordinators are still Docker/containerd so proper locks should be handled through them. As a result we become limited by the API that docker/containerd daemons provide.

For example, Docker daemon API doesn't provide a way to get or upload a particular image layer. That's why unregistry uses the containerd image store, not the classic Docker image store.

nothrabannosir•7mo ago
What’s the difference between this and skopeo? Is it the ssh support ? I’m not super familiar with skopeo forgive my ignorance

https://github.com/containers/skopeo

yibers•7mo ago
"skopeo" seems to related to managing registeries, very different from this.
NewJazz•7mo ago
Skopeo manages images, copies them and stuff.
jlcummings•7mo ago
Skopeo lets you work with remote registries and local images without a docker/podman/etc daemon.

We use it to ‘clone’ across deployment environments and across providers outside of the build pipeline, as an ad hoc job.

s1mplicissimus•7mo ago
very cool. now let's integrate this so that we can do `docker/podman push localimage:localtag ssh://hostname:port/remoteimage:remotetag` without extra software installed :)
brirec•7mo ago
I was informed that Podman at least has a `podman image scp` function for doing just this...
someothherguyy•7mo ago
https://www.redhat.com/en/blog/podman-transfer-container-ima...
dzonga•7mo ago
this is nice, hopefully DHH and the folks working on Kamal adopt this.

the whole reason I didn't end up using kamal was the 'need a docker registry' thing, when I can easily push a dockerfile / compose to my vps, build an image there, and restart to deploy via a make command

rudasn•7mo ago
Build the image on the deployment server? Why not build somewhere else once and save time during deployments?

I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to registry (eg github) and docker compose pull during deployments.

christiangenco•7mo ago
I think the idea with unregistry is that you're still building somewhere else once but then instead of pushing everything to a registry once you push your unique layers directly to each server you're deploying.
psviderski•7mo ago
I don't see a reason to not adopt this in Kamal. I'm also building Uncloud that took a lot of inspiration from Kamal, please check it out. I will integrate unregistry into uncloud soon to make the build/deploy process a breeze.
bradly•7mo ago
As a long ago fan of chef-solo, this is really cool.

Currently, I need to use a docker registry for my Kamal deployments. Are you familiar with it and if this removes the 3rd party dependency?

psviderski•7mo ago
Yep, I'm familiar with Kamal and it actually inspired me to build Uncloud using similar principles but with more cluster-like capabilities.

I built Unregistry for Uncloud but I believe Kamal could also benefit from using it.

christiangenco•7mo ago
I think it'd be a perfect fit. We'll see what happens: https://github.com/basecamp/kamal/issues/1588
nine_k•7mo ago
Nice. And the `pussh` command definitely deserves the distinction of being one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its sister standard command.
EricRiese•7mo ago
> The extra 's' is for 'sssh'

> What's that extra 's' for?

> That's a typo

causasui•7mo ago
https://www.youtube.com/watch?v=3m6Blqs0IgY
gchamonlive•7mo ago
It's fine, but it wouldn't hurt to have a more formal alias like `docker push-over-ssh`.

EDIT: why I think it's important: in automations that are developed collaboratively, "pussh" could be seen as a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as short-hand/full flags.

psviderski•7mo ago
That's a valid concern. You can very easily give it whatever name you like. Docker looks for `docker-COMMAND` executables in the ~/.docker/cli-plugins directory, making COMMAND a `docker` subcommand.

Rename the file to whatever you like, e.g. to get `docker pushoverssh`:

  mv ~/.docker/cli-plugins/docker-pussh ~/.docker/cli-plugins/docker-pushoverssh
Note that Docker doesn't allow dashes in plugin commands.
whalesalad•7mo ago
can easily see an engineer spotting pussh in a ci/cd workflow or something and thinking "this is a mistake" and changing it.
someothherguyy•7mo ago
and prone to collision!
nine_k•7mo ago
Indeed so! Because it's art, not engineering. The engineering approach would require a recognizably distinct command, eliminating the possibility of such a pun.
rollcat•7mo ago
I used to have an alias em=mg, because mg(1) is a small Emacs, so "em" seemed like a fun name for a command.

Until one day I made that typo.

bobbiechen•7mo ago
I'm a fan of installing sl(1), the terminal steam locomotive. I mistype it every couple months and it always gives me a laugh.

https://github.com/mtoyoda/sl

danillonunes•7mo ago
In the same spirit there's gti https://r-wos.org/hacks/gti
armx40•7mo ago
How about using docker context. I use that a lot and works nicely.
Snawoot•7mo ago
How do docker contexts help with the transfer of images between hosts?
dobremeno•7mo ago
I assume OP meant something like this, building the image on the remote host directly using a docker context (which is different from a build context)

  docker context create my-awesome-remote-context --docker "host=ssh://user@remote-host"

  docker --context my-awesome-remote-context build . -t my-image:latest
This way you end up with `my-image:latest` on the remote host too. It has the advantage of not transferring the entire image but only transferring the build context. It builds the actual image on the remote host.
revicon•7mo ago
This is exactly what I do, make a context pointing to the remote host, use docker compose build / up to launch it on the remote system.
lxe•7mo ago
Ooh this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a sideproject server setup.
nodesocket•7mo ago
A recommendation for Portainer if you haven't used or considered it. I'm running two EC2 instances on AWS using Portainer community edition and the Portainer agent, and it works really well. The stack feature (which is just docker compose) is also super nice. One EC2 instance running the Portainer agent runs Caddy in a container, which acts as the load balancer and reverse proxy.
lxe•7mo ago
I'm actually running portainer for my homelab setup hosting things like octoprint and omada controller etc.
vhodges•7mo ago
There is also https://skateco.github.io/ which (at quick glance) seems similar
byrnedo•7mo ago
Skate author here: please try it out! I haven’t gotten round to diving deep into uncloud yet, but I think maybe the two projects differ in that skate has no control plane; the cli is the control plane.

I built skate out of that exact desire to have a dokku like experience that was multi host and used a standard deployment configuration syntax ( k8s manifests ).

https://skateco.github.io/docs/getting-started/

benwaffle•7mo ago
Looks like uncloud has no control plane, just a CLI: https://github.com/psviderski/uncloud#-features
psviderski•7mo ago
I'm glad the idea of uncloud resonated with you. Feel free to join our Discord if you have questions or need help
actinium226•7mo ago
This is excellent. I've been doing the save/load and it works fine for me, but I like the idea that this only transfers missing layers.

FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.

remram•7mo ago
Does it start an unregistry container on the remote/receiving end or the local/sending end? I think that runs remotely. I wonder if you could go the other way instead?
selcuka•7mo ago
You mean ssh'ing into the remote server, then pulling the image from local? That would require your local host to be accessible from the remote host, or setting up some kind of ssh tunneling.
mdaniel•7mo ago
`ssh -R` and `ssh -L` are amazing, and I just learned that -L and -R both support unix sockets on either end and also unix socket to tcp socket https://manpages.ubuntu.com/manpages/noble/man1/ssh.1.html#:...

I would presume it's something akin to $(ssh -L /var/run/docker.sock:/tmp/d.sock sh -c 'docker -H unix:///tmp/d.sock save | docker load') type deal
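Spelled out, that trick might look like this (a sketch; the local socket path is arbitrary and must not already exist):

    # forward a local unix socket to the remote Docker socket in the background
    ssh -fnNT -L /tmp/remote-docker.sock:/var/run/docker.sock user@server
    # stream an image from the local daemon straight into the remote one
    docker save my-app:latest | docker -H unix:///tmp/remote-docker.sock load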

matt_kantor•7mo ago
This is what docker-pushmi-pullyu[1] does, using `ssh -R` as suggested by a sibling comment.

[1]: https://github.com/mkantor/docker-pushmi-pullyu

remram•7mo ago
That's also what the submitted tool does, I want to do the same thing just in the reverse direction. I just don't want to start extra containers on the prod machine.
selcuka•7mo ago
No, the second one (docker-pushmi-pullyu) runs the registry on the build host.
remram•7mo ago
I meant to reply to you, whoops.

docker-pushmi-pullyu does an extra copy from build host to a registry, so it is just the standard workflow.

I think Spegel does what I want (= serve images from the local cache as a registry), I might be able to build from that. It is meant to be integrated with Kubernetes though, so making a simple transfer tool probably requires some adaptation.

psviderski•7mo ago
The problem with running a registry locally is that Docker doesn't provide an API to get individual image layers to be able to build a registry API on top. You have to hook into the containerd Docker uses under the hood. You can't do this locally in many cases, for example, on macOS the VM running Docker Desktop doesn't expose the containerd socket. I guess the workaround you implemented in docker-pushmi-pullyu is an extra copy to the registry which is a bummer.
matt_kantor•7mo ago
Yeah, a few years ago I remember looking into whether I could expose image layers from the engine as a volume to mount directly into the registry, but at least at the time it seemed complex, and when I write tools like this simplicity is a primary goal.

As a mitigation, docker-pushmi-pullyu caches pushed layers between runs[1]. More often than not I'm only changing the upper layers of previously-pushed images, so this helps a lot. Also, since everything happens locally, the push phase is typically quite fast even with cache misses (especially on an SSD), compared to the pull phase, which usually goes over the internet (or another network).

[1]: https://github.com/mkantor/docker-pushmi-pullyu/pull/19/file...

psviderski•7mo ago
It starts an unregistry container on the remote side. I wonder, what's the use case on your mind for doing it the other way around?
remram•7mo ago
I guess I feel a little dirty running the container on the prod server. My machine has all the dev tools, and it is also where I install and run this pussh tool, so I would rather have the container run there too.
esafak•7mo ago
You can do these image acrobatics with the dagger shell too, but I don't have enough experience with it to give you the incantation: https://docs.dagger.io/features/shell/
throwaway314155•7mo ago
I assume you can do these "image acrobatics" in any shell.
esafak•7mo ago
The dagger shell is built for devops, and can pipe first class dagger objects like services and containers to enable things like

  github.com/dagger/dagger/modules/wolfi@v0.16.2 |
  container |
  with-exec ls /etc/ |
  stdout
What's interesting here is that the first line demonstrates invocation of a remote module (building a Wolfi Linux container), of which there is an ecosystem: https://daggerverse.dev/
yjftsjthsd-h•7mo ago
What is the container for / what does this do that `docker save some:img | ssh wherever docker load` doesn't? More efficient handling of layers or something?
psviderski•7mo ago
Yeah exactly, which is crucial for large images if you change only the last few layers.

The unregistry container provides a standard registry API you can pull images from as well. This could be useful in a cluster environment where you upload an image over ssh to one node and then pull it from there to other nodes.

This is what I’m planning to implement for Uncloud. Unregistry is so lightweight that we can embed it in every machine daemon. This will allow machines in the cluster to pull images from each other.
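The resulting flow could look something like this (hypothetical names; assumes unregistry keeps running on node1:5000 and the other nodes are configured to trust it):

    # upload once over SSH, then fan out by pulling from the first node
    docker pussh my-app:latest user@node1
    ssh user@node2 'docker pull node1:5000/my-app:latest'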

pragmatick•7mo ago
Relatively early on the page it says:

"docker save | ssh | docker load transfers the entire image, even if 90% already exists on the server"

metadat•7mo ago
This should have always been a thing! Brilliant.

Docker registries have their place but are overall over-engineered and an antithesis to the hacker mentality.

password4321•7mo ago
As a VC-funded company Docker had to make money somehow.
dreis_sw•7mo ago
I recommend using GitHub's registry, ghcr.io, with GitHub Actions.

I invested just 20 minutes to set up a .yaml workflow that builds and pushes an image to my private registry on ghcr.io, and 5 minutes to allow my server to pull images from it.

It's a very practical setup.
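The build-and-push side of such a workflow boils down to something like this (a sketch with placeholder names, not the actual workflow file):

    # authenticate, build, and push to ghcr.io
    echo "$GHCR_TOKEN" | docker login ghcr.io -u USERNAME --password-stdin
    docker build -t ghcr.io/USERNAME/my-app:latest .
    docker push ghcr.io/USERNAME/my-app:latest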

ezekg•7mo ago
I think the complexity lies in the dance required to push blobs to the registry. I've built an OCI-compliant pull-only registry before and it wasn't that complicated.
scott113341•7mo ago
Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?

[1]: https://zotregistry.dev

stroebs•7mo ago
Your SSL certificate for zothub.io has expired in case you weren’t aware.
isaacvando•7mo ago
Love it!
layoric•7mo ago
I'm so glad there are tools like this and the swing back to self-hosted solutions, especially ones leveraging SSH tooling. Well done and thanks for sharing, will definitely be giving it a spin.
alisonatwork•7mo ago
This is a cool idea that seems like it would integrate well with systems already using push deploy tooling like Ansible. It also seems like it would work as a good hotfix deployment mechanism at companies where the Docker registry doesn't have 24/7 support.

Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.

0x457•7mo ago
It needs a docker daemon on both ends. This is just a clever way to share layers between two daemons via ssh.
psviderski•7mo ago
You need containerd on the remote end (Docker and Kubernetes use containerd) and anything that speaks the registry API (OCI Distribution spec: https://github.com/opencontainers/distribution-spec) on the client. Unregistry reuses the official Docker registry code for the API layer, so it looks and feels like https://hub.docker.com/_/registry

You can use skopeo, crane, regclient, BuildKit, anything that speaks OCI registry on the client, although you will need to manually run unregistry on the remote host to use them. The 'docker pussh' command just automates the workflow using the local Docker.

Just check it out, it's a bash script: https://github.com/psviderski/unregistry/blob/main/docker-pu...

You can hack your own way pretty easily.
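For example, with skopeo it might look like this (a sketch; assumes unregistry is already listening on server:5000 without TLS):

    # copy an image from the local Docker daemon into a remote unregistry
    skopeo copy --dest-tls-verify=false \
      docker-daemon:my-app:latest \
      docker://server:5000/my-app:latest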

dirkc•7mo ago
I agree! For a bunch of services I manage I build the image locally, save it and then use ansible to upload the archive and restore the image. This usually takes a lot longer than I want it to!
modeless•7mo ago
It's very silly that Docker didn't work this way to start with. Thank you, it looks cool!
TheRoque•7mo ago
You can already achieve the same thing by making your image into an archive, pushing it to your server, and then running it from the archive on your server.

Saving as archive looks like this: `docker save -o my-app.tar my-app:latest`

And loading it looks like this: `docker load -i /path/to/my-app.tar`

Using a tool like ansible, you can easily achieve what "Unregistry" is doing automatically. According to the github repo, save/load has the drawback of transferring the whole image over the network, which could be an issue, that's true. And managing images instead of archive files seems more convenient.

nine_k•7mo ago
If you have an image with 100MB worth of bottom layers, and only change the tiny top layer, the unregistry will only send the top layer, while save / load would send the whole 100MB+.

Hence the value.

isoprophlex•7mo ago
yeah, i deal with horrible, bloated python machine learning shit; >1 GB images are nothing. this is excellent, and i never knew how much i needed this tool until now.
throwaway290•7mo ago
Docker also has export/import commands. They only export the container's current flattened filesystem, without layers.
authorfly•7mo ago
Good advice, and beware the difference between docker export (which will fail if you lack enough storage, since it saves volumes) and docker save. Running the wrong command might knock your only running docker server into an unrecoverable state...
francislavoie•7mo ago
If you read the README, you'll see that replacing the "save | upload | load" workflow is the whole point of this, to drastically reduce the amount of data to upload by only sending new layers instead of everything, and you can use this inside your ansible setup to speed it up.
fellatio•7mo ago
Neat idea. This probably has the disadvantage of coupling deployment to a service. For example, how do you scale up or do red/green deployments? (You'd need the thing that does this to be aware of the push.)

Edit: that thing exists, it is uncloud. Just found out!

That said it's a tradeoff. If you are small, have one Hetzner VM and are happy with simplicity (and don't mind building images locally) it is great.

psviderski•7mo ago
For sure, it's always a tradeoff and it's great to have options so you can choose the best tool for every job.
mountainriver•7mo ago
I’ve wanted unregistry for a long time, thanks so much for the awesome work!
psviderski•7mo ago
Me too, you're welcome! Please create an issue on github if you find any bugs
jokethrowaway•7mo ago
Very nice! I used to run a private registry on the same server to achieve this - then I moved to building the image on the server itself.

Both approaches are inferior to yours because of the load on the server (one way or another).

Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.

The size of my images are tiny, the extra complexity is unwarranted.

Then of course I'm not a 1000-person company with 1GB docker images.

quantadev•7mo ago
I always just use "docker save" to generate a TAR file, then copy the TAR file to the server, and then run "docker load" (on the server) to install the TAR file on the target machine.
francislavoie•7mo ago
See the README, this results in only changed layers being sent instead of _everything_ which can save a lot of time.
quantadev•7mo ago
I'll do that. Thank you.
cultureulterior•7mo ago
This is super slick. I really wish there was something that did the same, but using torrent protocol, so all your servers shared it.
psviderski•7mo ago
Not a torrent protocol but p2p, check out https://github.com/spegel-org/spegel it's super cool.

I took inspiration from spegel but built a more focused solution to make a registry out of a Docker/containerd daemon. A lot of other cool stuff and workflows can be built on top of it.

czhu12•7mo ago
Does this work with Kubernetes image pulls?
psviderski•7mo ago
I guess you're asking about the registry part (not 'pussh' command). It exposes the containerd image store as standard registry API so you can use any tools that work with regular registry to pull/push images to it.

You should be able to run unregistry as a standalone service on one of the nodes. Kubernetes uses containerd for storing images on nodes. So unregistry will expose the node's images as a registry. Then you should be able to run k8s deployments using 'unregistry.NAMESPACE:5000/image-name:tag' image. kubelets on other nodes will be pulling the image from unregistry.
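In kubectl terms that might look roughly like this (hypothetical names; assumes the kubelets can resolve and pull from that address):

    # run a deployment whose image is served by unregistry in namespace 'infra'
    kubectl create deployment my-app \
      --image=unregistry.infra:5000/my-app:latest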

You may want to take a look at https://spegel.dev/ which works similarly but was created specifically for Kubernetes.

MotiBanana•7mo ago
I've been using ttl.sh for a long time, but only for public, temporary code. This is a really cool idea!
psviderski•7mo ago
Wow ttl.sh is a really neat idea, thank you for sharing!
politelemon•7mo ago
Considering the nature of servers, security boundaries and hardening,

> Linux via Homebrew

Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.

djfivyvusn•7mo ago
Brew is such a cute little package manager. Updating its repo every time you install something. Randomly self updating like a virus.
v5v3•7mo ago
That made me laugh lol
yrro•7mo ago
Well put, but it's a shame this comment is the first thing I read, rather than comments about the tool itself!
cyberax•7mo ago
We're using it to distribute internal tools across macOS and Linux developers. It excels at this.

Are there any good alternatives?

lillecarl•7mo ago
100% Nix. It works on every distro, macOS, and WSL2, and won't pollute your system (it'll create /nix and patch your bashrc on installation, and everything from there on goes into /nix).
cyberax•7mo ago
Downside: it's Nix.

I tried it, but I have not been able to easily replicate our Homebrew env. We have a private repo with pre-compiled binaries, and a simple Homebrew formula that downloads the utilities and installs them. Compiling the binaries requires quite a few tools (C++, sigh).

I got stuck at the point where I needed to use a private repo in Nix.

lloeki•7mo ago
> We have a private repo with pre-compiled binaries, and a simple Homebrew formula that downloads the utilities and installs them.

Perfectly doable with Nix. Ignore the purists and do the hackiest way that works. It's too bad that tutorials get lost on concepts (which are useful to know but a real turn down) instead of focusing on some hands-on practical how-to.

This should just about do it, and is really not that different from, or more difficult than, formulas or brew install:

    git init mychannel
    cd mychannel
    
    cat > default.nix <<'NIX'
    {
      pkgs ? import <nixpkgs> { },
    }:
    
    {
      foo = pkgs.callPackage ./pkgs/foo { };
    }
    NIX
    
    mkdir -p pkgs/foo
    cat > pkgs/foo/default.nix <<'NIX'
    { pkgs, stdenv, lib }:
    
    stdenv.mkDerivation {
      pname = "foo";
      version = "1.0";
    
      # if you have something to fetch
      # src = fetchurl {
      #   url = http://example.org/foo-1.2.3.tar.bz2;
      #   # if you don't know the hash, put some lib.fakeSha256 there
      #   sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
      # };

      buildInputs = [
        # add any deps
      ];
    
      # this example just builds in place, so skip unpack
      unpackPhase = "true"; # no src attribute
    
      # optional if you just want to copy from your source above
      # build trivial example script in place
      buildPhase = ''
        cat > foo <<'SHELL'
        #!/bin/bash
        echo 'foo!'
        SHELL
        chmod +x foo
      '';
    
      # just copy whatever
      installPhase = ''
        mkdir -p $out/bin
        cp foo $out/bin
      '';
    }
    NIX

    nix-build -A foo -o out/foo  # you should have your build in './out/foo'
    ./out/foo/bin/foo  # => foo!

    git add .
    git commit -a -m 'init channel'
    git remote add origin git@github.com:OWNER/mychannel
    git push origin main
    
    nix-channel --add https://github.com/OWNER/mychannel/archive/main.tar.gz mychannel
    nix-channel --update
    
    nix-env -iA mychannel.foo
    foo  # => foo!
(I just cobbled that together; if it doesn't work as is, it's damn close. Flakes are left as an exercise for the reader.)

Note: if it's a private repo then in /etc/nix/netrc (or ~/.config/nix/netrc for single user installs):

    machine github.com
        password ghp_YOurToKEn
> Compiling the binaries requires quite a few tools (C++, sigh).

Instantly sounds like a whole reason to use nix and capture those tools as part of the dependency set.

cyberax•7mo ago
Hm. That actually sounds doable (we do have hashes for integrity). I'll try that and see how it goes.

> Instantly sounds like a whole reason to use nix and capture those tools as part of the dependency set.

It's tempting, and I tried that, but ran away crying. We're using Docker images instead for now.

We are also using direnv that transparently execs commands inside Docker containers, this works surprisingly well.

lloeki•7mo ago
Sure, whatever floats your boat!

I'm just sad that Nix is often dismissed as intractable, and I feel that's mostly because tutorials get too hung up on concept rabbit holing.

peyloride•7mo ago
This is awesome, thanks!
larsnystrom•7mo ago
Nice to only have to push the layers that changed. For me it's been enough to just do "docker save my-image | ssh host 'docker load'" but I don't push images very often so for me it's fine to push all layers every time.
iw7tdb2kqo9•7mo ago
I think it will be a good fit for me. Currently our 3GB docker image takes a lot of time to push to Github package registry from Github Action and pull from EC2.
bflesch•7mo ago
this is useful. thanks for sharing
amne•7mo ago
Takes a look at the pipeline that builds an image in gitlab, pushes it to artifactory, triggers a deployment that pulls from artifactory and pushes to AWS ECR, then updates the deployment template in EKS, which pulls from ECR to the node and boots the pod container.

I need this in my life.

maccard•7mo ago
My last projects pipeline spent more time pulling and pushing containers than it did actually building the app. All of that was dwarfed by the health check waiting period, when we knew in less than a second from startup if we were actually healthy or not.
forix•7mo ago
Out of curiosity why do you use both Artifactory and ECR? We're currently considering a switch from Artifactory to ECR for cost savings reasons.
victorbjorklund•7mo ago
Sweet. I've been wanting this for a long time.
alibarber•7mo ago
This is timely for me!

I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job so it is nice when I want to do a personal project to be able to just use similar tools.

I have a MacBook and, for some reason, I really dislike the idea of running docker (or podman, etc.) on it. Now of course I could have GitHub actions building the project and pushing it to a registry, then pull that to the server, but it's another step between code and server that I wanted to avoid.

Fortunately, it's trivial to sync the code to a pod over kubectl, and have podman build it there - but the registry (the step from pod to cluster) was the missing step, and it infuriated me that even with save/load, so much was going to be duplicated, on the same effective VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.

Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.

rcarmo•7mo ago
I think this is great and have long wondered why it wasn’t an out of the box feature in Docker itself.
spwa4•7mo ago
THANK you. Can you do the same for kubernetes somehow?
tontony•7mo ago
A few thoughts/ideas on using this in Kubernetes are discussed in this issue: https://github.com/psviderski/unregistry/issues/4; generally, it should be possible with the same idea, but with some tweaking.

Also have a look at https://spegel.dev/, it's basically a daemonset running in your k8s cluster that implements a (mirror) registry using locally cached images and peer-to-peer communication.

jdsleppy•7mo ago
I've been very happy doing this:

DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.

Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The context (files) are sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different than your approach of building locally, but much simpler.
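The context flavor of the same thing (both the build and the containers end up on the remote daemon):

    docker context create remotehost --docker "host=ssh://user@remotehost"
    docker --context remotehost compose up -d --build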

kosolam•7mo ago
This approach is akin to the prod server pulling an image from a registry. The OP's method is push-based.
jdsleppy•7mo ago
No, in my example the docker-compose.yml would exist alongside your application's source code and you can use the `build` directive https://docs.docker.com/reference/compose-file/services/#bui... to instruct the remote host (Hetzner VPS, or whatever else) to build the image. That image does not go to an external registry, but is used internally on that remote host.

For 3rd party images like `postgres`, etc., then yes it will pull those from DockerHub or the registry you configure.

But in this method you push the source code, not a finished docker image, to the server.

quantadev•7mo ago
Seems like it makes more sense to build on the build machine, and then just copy images out to PROD servers. Having source code on PROD servers is generally considered bad practice.
jdsleppy•7mo ago
The source code does not get to the filesystem on the prod server. It is sent to the Docker daemon when it builds the image. After the build ends, there's only the image on the prod server.

I am now convinced that this is a hidden docker feature that too many people aren't aware of and do not understand.

quantadev•7mo ago
Yeah, I definitely didn't understand that! Thanks for explaining. I've bookmarked this thread, because there's several commands that look more powerful and clean than what I'm currently doing which is to "docker save" to TAR, copy the TAR up to prod and then "docker load".
hoppp•7mo ago
Oh this is great, it's a problem I also have.
ajd555•7mo ago
This is great! I wonder how well it works in case of Disaster Recovery though. Perhaps it is not intended for production environments with strict SLAs and uptime requirements, but if you have 20 servers in a cluster that you're migrating to another region or even cloud provider, the pull approach from a registry seems like the safest and most scalable approach
Aaargh20318•7mo ago
I simply use "docker save <imagename>:<version> | ssh <remoteserver> docker load"
dboreham•7mo ago
I like the idea, but I'd want this functionality "unbundled".

Being able to run a registry server over the local containerd image store is great.

The details of how some other machine's containerd gets images from that registry are, to me, a separate concern. docker pull will work just fine provided it is given a suitable registry url and credentials. There are many ways to provide the necessary network connectivity and credentials sharing, so I don't want that aspect to be baked in.

Very slick though.

psviderski•7mo ago
They're unbundled already. You can run unregistry as a standalone service and use your own way to push/pull from it: https://github.com/psviderski/unregistry?tab=readme-ov-file#...
matt_kantor•7mo ago
Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.

@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?

[1]: https://github.com/mkantor/docker-pushmi-pullyu

[2]: https://hub.docker.com/_/registry

matt_kantor•7mo ago
After taking a closer look it seems the main conceptual difference between unregistry/docker-pussh and docker-pushmi-pullyu is that the former runs the temporary registry on the remote host, while the latter runs it locally. Although in both cases this is not something users should typically have to care about.
westurner•7mo ago
Do docker-pussh or docker-pushmi-pullyu verify container image signatures and attestations?

From "About Docker Content Trust (DCT)" https://docs.docker.com/engine/security/trust/ :

  > Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images. 

  export DOCKER_CONTENT_TRUST=1
cosign > verifying containers > verify attestation: https://docs.sigstore.dev/cosign/verifying/verify/#verify-at...

/? difference between docker content trust dct and cosign: https://www.google.com/search?q=difference+between+docker+co...

matt_kantor•7mo ago
docker-pushmi-pullyu does a vanilla `docker pull`[1] on the remote side, so you should be able to set `DOCKER_CONTENT_TRUST` in the remote environment to get whatever behavior you want (though admittedly I have not tested this).

If there's desire for an option to specify `--disable-content-trust` during push and/or pull I'll happily add it. Please file an issue if this is something you want.

[1]: https://github.com/mkantor/docker-pushmi-pullyu/blob/12d2893...

westurner•7mo ago
Should it be set in both the local and remote envs?

What does it do if there's no signature?

Do images built and signed with podman and cosign work with docker; are the artifact signatures portable across container CLIs docker, nerdctl, and podman?

westurner•7mo ago
From nerdctl/docs/cosign.md "Container Image Sign and Verify with cosign tool" https://github.com/containerd/nerdctl/blob/main/docs/cosign.... ; handily answering my own question aloud:

Sign the container image while pushing, verify the signature on fetch/pull:

  # Sign the image with Keyless mode
  $ nerdctl push --sign=cosign devopps/hello-world
  
  # Sign the image and store the signature in the registry
  $ nerdctl push --sign=cosign --cosign-key cosign.key devopps/hello-world

  # Verify the image with Keyless mode
  $ nerdctl pull --verify=cosign --certificate-identity=name@example.com --certificate-oidc-issuer=https://accounts.example.com devopps/hello-world


  # You can not verify the image if it is not signed
  $ nerdctl pull --verify=cosign --cosign-key cosign.pub devopps/hello-world-bad
matt_kantor•7mo ago
> I'm curious why you implemented your own registry for this

Answering my own question: I think it's because you want to avoid the `docker pull` side of the equation (when possible) by having the registry's backing storage be the same as the engine's on the remote host.

psviderski•7mo ago
Exactly, although my main motivation was to reduce the distinction between docker engine and docker registry. To make it possible for a user to push/pull to the docker daemon as if it was a registry, hence a registry wrapper.

This is a prerequisite for what I want to build for uncloud, a clustering solution I’m developing. I want to make it possible to push an image to a cluster (store it right in the docker on one or multiple machines) and then run it on any machine in the cluster (pull from a machine that has the image if missing locally) eliminating a registry middleman.

matt_kantor•7mo ago
Very cool. As others have said: "push/pull to the docker daemon as if it was a registry" is how docker should always have worked.

This is next level but I can imagine distributing resource usage across the cluster by pulling different layers from different peers concurrently.

revicon•7mo ago
Is this different from using a remote docker context?

My workflow in my homelab is to create a remote docker context like this...

(from my local development machine)

> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

Then I can do...

> docker context use mylinuxserver

> docker compose build

> docker compose up -d

And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.

No fuss, no registry, no extra applications needed.

Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.

matt_kantor•7mo ago
Assuming I understand your workflow, one difference is that unregistry works with already-built images. They aren't built on the remote host, just pushed there. This means you can be confident that the image on your server is exactly the same as the one you tested locally, and also will typically be much faster (assuming well-structured Dockerfiles with small layers, etc).
pbh101•7mo ago
This is probably an anti-feature in most contexts.
akovaski•7mo ago
The ability to push a verified artifact is an anti-feature in most contexts? How so?
pbh101•7mo ago
It is fine if you are just working by yourself on non-prod things and you’re happy with that.

But if you are working with others on things that matter, then you’ll find you want your images to have been published from a central, documented location, where it is verified what tests they passed, the version of the CI pipeline, the environment itself, and what revision they were built on. And the image will be tagged with this information, and your coworkers and you will know exactly where to look to get this info when needed.

This is incompatible with pushing an image from your local dev environment.

matt_kantor•7mo ago
With that sort of setup you'd run `docker pussh` from your build server, not your local machine (really though you'd probably want a non-ephemeral registry, so wouldn't use unregistry at all).

Other than "it's convenient and my use case is low-stakes enough for me to not care", I can't think of any reason why one would want to build images on their production servers.

pbh101•7mo ago
Agreed.
tontony•7mo ago
Totally valid approach if that works for you, the docker context feature is indeed nice.

But if we're talking about hosts that run production-like workloads, using them to perform potentially cpu-/io-intensive build processes might be undesirable. A dedicated build host and context can help mitigate this, but then you again face the challenge of transferring the built images to the production machine; that's where the unregistry approach should help.

richardc323•7mo ago
I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

[1]: https://github.com/richardcrichardc/docker2docker

psviderski•7mo ago
You're the OG! Hats off, mate.

It's a bummer docker still doesn't have an API to explore image layers. I guess they plan to eventually transition to the containerd image store as the default. Once we have the containerd image store both locally and remotely, we will finally be able to do what you've done without the registry wrapper.

cik•7mo ago
You're bang on, but you can do things with dive (https://github.com/wagoodman/dive) and use chunks of the code in other projects... That's what I've been doing. The license is MIT so it's permissive.

But yes, an API would be ideal. I've wasted far too much time on this.

shykes•7mo ago
Docker creator here. I love this. In my opinion the ideal design would have been:

1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.

2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.

psviderski•7mo ago
Hey Solomon, thank you for sharing your thoughts, love your work!

1. Yeah agreed, it's a bit of a mess that we have at least three different file system layouts for images and two image stores in the engine. I believe it's still not too late for Docker to achieve what you described without breaking the current model. Not sure if they care though, they're having hard times

2. Hm, push-to-cluster deployment sounds clever. I'm definitely thinking about a distributed image store, e.g. embedding unregistry in every node so that they can pull and share images between each other. But triggering a deployment on push is something I need to think through. Thanks for the idea!

sushidev•7mo ago
I've prepared a quick one using reverse port forwarding and a local temp registry. In case anyone finds it useful:

  #!/bin/bash
  set -euo pipefail
  
  IMAGE_NAME="my-app"
  IMAGE_TAG="latest"
  
  # A temporary Docker registry that runs on your local machine during deployment.
  LOCAL_REGISTRY="localhost:5000"
  REMOTE_IMAGE_NAME="${LOCAL_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
  REGISTRY_CONTAINER_NAME="temp-deploy-registry"
  
  # SSH connection details.
  # The jump host is an intermediary server. Remove `-J "${JUMP_HOST}"` if not needed.
  JUMP_HOST="user@jump-host.example.com"
  PROD_HOST="user@production-server.internal"
  PROD_PORT="22" # Standard SSH port
  
  # --- Script Logic ---
  
  # Cleanup function to remove the temporary registry container on exit.
  cleanup() {
      echo "Cleaning up temporary Docker registry container..."
      docker stop "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
      docker rm "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
      echo "Cleanup complete."
  }
  
  # Run cleanup on any script exit.
  trap cleanup EXIT
  
  # Start the temporary Docker registry.
  echo "Starting temporary Docker registry..."
  docker run -d -p 5000:5000 --name "${REGISTRY_CONTAINER_NAME}" registry:2
  sleep 3 # Give the registry a moment to start.
  
  # Step 1: Tag and push the image to the local registry.
  echo "Tagging and pushing image to local registry..."
  docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${REMOTE_IMAGE_NAME}"
  docker push "${REMOTE_IMAGE_NAME}"
  
  # Step 2: Connect to the production server and deploy.
  # The `-R` flag creates a reverse SSH tunnel, allowing the remote host
  # to connect back to `localhost:5000` on your machine.
  echo "Executing deployment command on production server..."
  ssh -J "${JUMP_HOST}" "${PROD_HOST}" -p "${PROD_PORT}" -R 5000:localhost:5000 \
    "docker pull ${REMOTE_IMAGE_NAME} && \
     docker tag ${REMOTE_IMAGE_NAME} ${IMAGE_NAME}:${IMAGE_TAG} && \
     systemctl restart ${IMAGE_NAME} && \
     docker system prune --force"
  
  echo "Deployment finished successfully."
matt_kantor•7mo ago
Your script is the same idea as https://github.com/mkantor/docker-pushmi-pullyu (which is in the public domain, so feel free to steal).
sebastos•7mo ago
Amazing. Our company has to push gigantic docker images to IoT-style devices, and we’ve had to maintain an installer script that downloads and stands up a local dummy registry, pushes the image to that, then ssh’s to the remote and pulls from it. When the installer fails, this process is almost always the culprit. Your tool looks like it would be a huge improvement (although the requirement that the host have the unregistry container is a _slight_ demerit). Nevertheless, I can’t wait to check this out.

I have spent an absolutely bewildering 7 years trying to understand why this huge gap in the docker ecosystem tooling exists. Even if I never use your tool, it’s such a relief to find someone else who sees the problem in clear terms. Even in this very thread you have people who cannot imagine “why you don’t just docker save | docker load”.

It’s also cathartic to see Solomon regretting how fucky the arbitrary distinction between registries and local engines is. I wish it had been easier to see that point discussed out in the open some time in the past 8 years.

It always felt to me as though the shape of the entire docker ecosystem was frozen incredibly fast. I was aware of docker becoming popular in 2017ish. By the time I actually started to dive in, in 2018 or so, it felt like its design was already beyond question. If you were confused about holes in the story, you had to sift through cargo cult people incapable of conceiving that docker could work any differently than it already did. This created a pervasive gaslighty experience: Maybe I was just Holding It Wrong? Why is everyone else so unperturbed by these holes, I wondered. But it turns out, no, damnit - I was right!