frontpage.

I Cannot SSH into My Server Anymore (and That's Fine)

https://soap.coffee/~lthms/posts/i-cannot-ssh-into-my-server-anymore.html
64•TheWiggles•4d ago

Comments

lawrencegripper•2h ago
I’ve been on a similar journey with Fedora CoreOS and have loved it.

The predictability and drop in toil is so nice.

https://blog.gripdev.xyz/2024/03/16/in-search-of-a-zero-toil...

stryan•2h ago
Quadlets are a real game changer for this type of small-to-medium-scale declarative hosting. I've been pushing for them at work over ugly `docker compose in systemd units` service management, and I've moved my home lab over to using them for everything.

The latter is a similar setup to OP's, except with openSUSE MicroOS instead of Fedora CoreOS, and I'm not so brave as to destroy and rebuild my VPSes whenever I make a change :) . On the other hand, MicroOS (and, I'm assuming, FCOS) reboots automatically to apply updates, with rollback if needed, so combined with podman auto-update you can basically just spin up a box, drop the files on, and let it take care of itself (at least until a container update requires manual intervention).
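For reference, the auto-update flow mentioned above can be sketched roughly like this (the `AutoUpdate=` quadlet key, the `podman-auto-update.timer` unit, and `podman auto-update --dry-run` are real Podman features; whether they fit your setup depends on your units being systemd-managed):

```shell
# In a quadlet .container file, opt the container into auto-updates:
#
#   [Container]
#   AutoUpdate=registry
#
# Auto-update only applies to containers managed by systemd units
# (quadlets qualify), so plain `podman run` containers are skipped.

# Enable the timer Podman ships; it periodically pulls newer images
# and restarts the associated units, rolling back on failure.
systemctl enable --now podman-auto-update.timer

# One-off dry run to see what would be updated right now.
podman auto-update --dry-run
```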

A few things in the article I think might help the author:

1. Podman 4 and newer (which FCOS should definitely have) uses netavark for networking. A lot of older tutorials and articles were written back when Podman used CNI for its networking and didn't have DNS enabled unless you specifically installed it. I think the default `podman` network is still set up with DNS disabled by default. Either way, you no longer have to use a pod if you don't want to; you can just attach both containers to the same network and it should Just Work.

2. You can run the generator manually with "/usr/lib/systemd/system-generators/podman-system-generator --dryrun" to check Quadlet validity and output. That should be faster than daemon-reloading all the time or scanning the logs.
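Both points above can be sketched concretely (container, network, and file names here are illustrative, not from the article):

```shell
# --- Point 1: shared network instead of a pod --------------------
# User-defined networks get DNS with netavark, so containers can
# resolve each other by name without sharing a pod.
podman network create webnet
podman run -d --name web --network webnet docker.io/library/nginx:alpine
podman run -d --name app --network webnet docker.io/library/alpine sleep inf
podman exec app ping -c1 web   # "web" resolves via the network's DNS

# --- Point 2: validating quadlets without daemon-reload ----------
# Dry-run the quadlet generator to syntax-check .container files and
# print the systemd units it would emit. Rootless variant shown;
# system units use the path quoted in the comment above.
/usr/lib/systemd/user-generators/podman-user-generator --dryrun
```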

And as a bit of self-promotion: for anyone who wants to use Quadlets like this but doesn't want to rebuild their server whenever they make a change, I've created a tool called Materia[0] that can install, remove, template, and update Quadlets and other files from a Git repository.

[0] https://github.com/stryan/materia

plagiarist•16m ago
Do you know if it is possible to run a quadlet as an ephemeral systemd-sysuser? That would solve all my current problems.
amluto•2h ago
> I’ve later learned that restarting a container that is part of a pod will have the (to me, unexpected) side-effect to restart all the other containers of that pod.

Anyone know why this is? Or, for that matter, why Kubernetes seems to work like this too?

I have an application for which the natural solution would be to create a pod and then, as needed, create and destroy containers within the pod. (Why? Because I have some network resources that don’t really virtualize, so they can live in one network namespace. No bridges.)

But despite containerd and Podman and Kubernetes kind-of-sort-of supporting this, they don’t seem to actually want to work this way. Why not?

stryan•1h ago
Yeah, I was a little confused by this line; as far as I can tell, you can restart containers that are part of a Podman pod without restarting the whole pod just fine. I just verified this on one of my MicroOS boxes running Podman v5.7.1.

Podman was changing pretty fast for a while, so it could be an older-version thing, though I'd assume FCOS is on Podman 5 by now.

gucci-on-fleek•1h ago
> Anyone know why this is?

In Podman, a pod is essentially just a single container; each "container" within a pod is just a separate rootfs. So from that perspective it makes sense, since you can't really restart half of a container. (It might be possible to restart individual containers within a pod, but if any container in a pod fails, I think the whole pod will automatically restart.)

> Why? Because I have some network resources that don’t really virtualize, so they can live in one network namespace.

You can run separate containers in the same network namespace with the "--network" option [0]. You can either start one container with its own automatic netns and then join the other containers to it with "--network=container:<name>", or you can manually create a new netns with "podman network create <name>" and then join all the containers to it with "--network=<name>".

[0]: https://docs.podman.io/en/latest/markdown/podman-run.1.html#...
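A quick sketch of the first option, joining one container to another's network namespace (container names and images here are made up for illustration):

```shell
# Start one container that owns the network namespace.
podman run -d --name netholder docker.io/library/nginx:alpine

# Join subsequent containers to netholder's namespace; they share
# interfaces, IP addresses, and localhost.
podman run -d --name worker --network container:netholder \
    docker.io/library/alpine sleep inf

# Anything listening in netholder is reachable on worker's localhost,
# because it is the same localhost.
podman exec worker wget -qO- http://127.0.0.1
```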

amluto•59m ago
> You can run separate containers in the same network namespace with the "--network" option [0].

Oh, right, thanks. I think I did notice that last time I dug into this. But:

> or you can manually create a new netns with "podman network create <name>" and then join all the containers to it with "--network=<name>".

I don’t think this has the desired effect at all. And the docs for podman network connect don’t mention pods at all, which is odd. In general, I have not been very impressed by podman.

Incidentally, apptainer seems to have a more or less first class ability to join an existing netns, and it supports CNI. Maybe I should give it a try.

kace91•1h ago
>Anyone know why this is? Or, for that matter, why Kubernetes seems to work like this too?

Pods are deliberately not meant to be treated as VMs, but as single application/deployment units.

Among other things, if a container goes down you don’t know if it corrupted shared state (leaving sockets open or whatever), so you don’t know if the pod is healthy after a restart. Also, reviving it might not necessarily work if the original startup relied on some boot order. So to guarantee a return to a healthy state, you need to restart the whole thing.

esseph•40m ago
The general idea is that you want a single application per pod, unless you need a sidecar service to live in the same pod as each instance of your app.

You are normally running several instances of your frontend so that it can crash without impacting the user experience, or so it can get deployed to in a rolling manner, etc.

andrewmcwatters•1h ago
I concede that this is the state of the art in secure deployments, but I’m from a different age where people remoted into colocated hardware, or at least managed their VPSs without destroying them every update.

As a result, I think developers are forgetting filesystem cleanliness because if you end up destroying an entire instance, well it’s clean isn’t it?

It also results in people not knowing how to do basic sysadmin work, because everything becomes devops.

The bigger problem I have with this is that the logical conclusion is to use “distroless” operating system images with vmlinuz, an init, and the minimal set of binaries and filesystem structure you need for your specific deployment, and rarely do I see anyone actually doing this.

Instead, people are using a hodgepodge of containers with significant management overhead that actually just sit on top of Ubuntu or something. Maybe Alpine. Or whatever Amazon distribution is used on EC2 now. Or, of course, like in this article, Fedora CoreOS.

One day, I will work with people who have a network issue and don’t know how to look up ports in use. Maybe that’s already the case, and I don’t know it.

irishcoffee•34m ago
> The bigger problem I have with this, is the logical conclusion is to use “distroless” operating system images with vmlinuz, an init, and the minimal set of binaries and filesystem structure you need for your specific deployment, and rarely do I see anyone actually doing this.

In the few jobs I’ve had over 20 years, this is common in the embedded space, usually using Yocto. Really powerful, really obnoxious toolchain.

bitwize•32m ago
What you describe is from the "pets" era of server deployment, and we are now deep into the "cattle" era. Train yourself on destroying and redeploying, and building observability into the stack from the outset, rather than managing a server through ssh. Every shop you go to professionally is going to work like this. Eventually, Linux desktops will work like this also, especially with all the work going into systemd to support movable home directories, immutable OS images with modular updates, and so forth.
crawshaw•1h ago
The idea that an "observability stack" is going to replace shell access on a server does not resonate with me at all. The metrics I monitor with prometheus and grafana are useful, vital even, but they are always fighting the last war. What I need are tools for when the unknown happens.

The tool that manages all my tools is the shell. It is where I attach a debugger, it is where I install iotop and use it for the first time. It is where I cat out mysterious /proc and /sys values to discover exotic things about cgroups I only learned about 5 minutes prior in obscure system documentation. Take it away and you are left with a server that is resilient against things you have seen before but lacks the tools to deal with the future.

gear54rus•1h ago
Agreed, this sounds like some complicated, ass-backwards way to do what k8s already does. If k8s is too big for you, just use k3s or k0s and you will still benefit from the absolutely massive ecosystem.

But instead we go with multiple moving parts, all configured independently: CoreOS, Terraform, and a dependency on Vultr. Lol.

Never in a million years would I think it's a good idea to disable SSH access. Like, why? Keys and a non-standard port already bring login attempts from China down to like 0 a year.

ValdikSS•49m ago
>What I need are tools for when the unknown happens.

There are tools which show what happens per process/thread and inside the kernel. Profiling and tracing.

Check out Yandex's Perforator or Google's Perfetto. Netflix also has one; I forget the name.

reactordev•42m ago
Or… you build a container that runs exactly what you specify. You ship your logs, traces, and metrics home so you can capture those stack traces and error messages, fix the problem, and build another container to deploy.

You’ll never attach a debugger in production. Not going to happen. Shell into what? Your container died when it errored out and was restarted with fresh state. Any “Sherlock Holmes” work would be met with a clean room. We have 10,000 nodes in the cluster - which one are you going to ssh into to find your container, attach a shell to it, and somehow attach a debugger?

toast0•25m ago
> We have 10,000 nodes in the cluster - which one are you going to ssh into to find your container to attach a shell to it to somehow attach a debugger?

You would connect to any of the nodes having the problem.

I've worked both ways; IMHO, it's a lot faster to get to understanding in systems where you can inspect and change the system as it runs than in systems where you have to iterate through adding logs and trying to reproduce somewhere else where you can use interactive tools.

My work environment changed from an Erlang system where you can inspect and change almost everything at runtime to a Rust system in containers where I can't change anything and can hardly inspect the system. It's so much harder.

ValdikSS•41m ago
>It is where I attach a debugger, it is where I install iotop and use it for the first time. It is where I cat out mysterious /proc and /sys values to discover exotic things about cgroups I only learned about 5 minutes prior in obscure system documentation.

It is, SSH is indeed the tool for that, but that's because until recently we did not have better tools and interfaces.

Once you try newer tools, you don't want to go back.

Here's the example of my fairly recent debug session:

    - Network is really slow on the home server, no idea why
    - Try to just reboot it, no changes
    - Run kernel perf, check the flame graph
    - Kernel spends A LOT of time in nf_* (netfilter functions, iptables)
    - Check iptables rules
    - sshguard has banned 13000 IP addresses in its table
    - Each network packet travels through all the rules
    - Fix: clean the rules/skip the table for established connections/add timeouts
You don't need debugging facilities for many issues. You need observability and tracing.

Instead of debugging the issue for tens of minutes at least, I just used an observability tool, which showed me the path in 2 minutes.
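The fix step at the end of that list can be sketched like this (the `sshguard` chain name and the rule position are assumptions about this particular box; check `iptables -S` on yours first):

```shell
# See how big the ban list actually is.
iptables -S sshguard | wc -l

# Short-circuit established/related flows so they stop traversing
# the long chain of per-IP ban rules on every packet.
iptables -I INPUT 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Optionally flush the oversized ban list outright.
iptables -F sshguard
```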

crawshaw•8m ago
How did you use tracing to check the current state of a machine’s iptables rules?
gucci-on-fleek•1h ago
Fedora IoT [0] is a nice intermediate solution. Despite its name, it's really good for servers, since it's essentially just the Fedora Atomic Desktops (Silverblue/Kinoite) without any of the desktop stuff. It gets you atomic updates, a container-centric workflow, and easy rollbacks; but it's otherwise a regular server, so you can install RPMs, ssh into it, create user accounts, and similar. This is what I do for my personal server, and I'm really happy with it.

[0]: https://fedoraproject.org/iot/
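For flavor, the day-to-day workflow on an rpm-ostree system like Fedora IoT looks roughly like this (the package name is just an example):

```shell
# Layer a regular RPM on top of the immutable base image.
rpm-ostree install htop

# Updates stage a whole new OS deployment; it applies atomically
# on the next reboot.
rpm-ostree upgrade
systemctl reboot

# If the new deployment misbehaves, boot back into the previous one.
rpm-ostree rollback
```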

dorfsmay•43m ago
Perfect timing for me, I've just been spending my side-project time in the last few weeks on building the smallest possible VMs with different glibc distros exactly for this, running podman containers, and comparing results.
starttoaster•35m ago
So it's AWS Fargate with a different name? That's cool for cloud-hosted stuff. But if you're on-prem, or manage your own VPSes, then you need SSH access.
yigalirani•32m ago
real programmers can ssh to their servers
libHacker•20m ago
It's true. There's no reason to disable ssh. If you need it, it's there. If not, just don't use it.

The struggle of resizing windows on macOS Tahoe

https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/
730•happosai•4h ago•330 comments

CLI agents like Claude Code make self-hosting on a home server easier and fun

https://fulghum.io/self-hosting
229•websku•3h ago•148 comments

This game is a single 13 KiB file that runs on Windows, Linux and in the Browser

https://iczelia.net/posts/snake-polyglot/
68•snoofydude•2h ago•23 comments

iCloud Photos Downloader

https://github.com/icloud-photos-downloader/icloud_photos_downloader
289•reconnecting•5h ago•148 comments

FUSE is All You Need – Giving agents access to anything via filesystems

https://jakobemmerling.de/posts/fuse-is-all-you-need/
58•jakobem•3h ago•19 comments

Sampling at negative temperature

https://cavendishlabs.org/blog/negative-temperature/
107•ag8•5h ago•38 comments

I'm making a game engine based on dynamic signed distance fields (SDFs) [video]

https://www.youtube.com/watch?v=il-TXbn5iMA
161•imagiro•3d ago•21 comments

I'd tell you a UDP joke…

https://www.codepuns.com/post/805294580859879424/i-would-tell-you-a-udp-joke-but-you-might-not-get
75•redmattred•2h ago•24 comments

Don't fall into the anti-AI hype

https://antirez.com/news/158
551•todsacerdoti•14h ago•729 comments

Elo – A data expression language which compiles to JavaScript, Ruby, and SQL

https://elo-lang.org/
41•ravenical•4d ago•5 comments

The Next Two Years of Software Engineering

https://addyosmani.com/blog/next-two-years/
45•napolux•3h ago•17 comments

Gentoo Linux 2025 Review

https://www.gentoo.org/news/2026/01/05/new-year.html
291•akhuettel•13h ago•147 comments

Insights into Claude Opus 4.5 from Pokémon

https://www.lesswrong.com/posts/u6Lacc7wx4yYkBQ3r/insights-into-claude-opus-4-5-from-pokemon
24•surprisetalk•5d ago•5 comments

A set of Idiomatic prod-grade katas for experienced devs transitioning to Go

https://github.com/MedUnes/go-kata
101•medunes•4d ago•13 comments

Perfectly Replicating Coca Cola [video]

https://www.youtube.com/watch?v=TDkH3EbWTYc
128•HansVanEijsden•3d ago•68 comments

A 2026 look at three bio-ML opinions I had in 2024

https://www.owlposting.com/p/a-2026-look-at-three-bio-ml-opinions
17•abhishaike•3h ago•1 comments

Ask HN: What are you working on? (January 2026)

139•david927•8h ago•463 comments

Rare Iron Age war trumpet and boar standard found

https://www.bbc.com/news/articles/cr7jvj8d39eo
7•breve•4d ago•0 comments

BYD's cheapest electric cars to have Lidar self-driving tech

https://thedriven.io/2026/01/11/byds-cheapest-electric-cars-to-have-lidar-self-driving-tech/
109•senti_sentient•4h ago•121 comments

Poison Fountain

https://rnsaffn.com/poison3/
161•atomic128•7h ago•104 comments

Show HN: What if AI agents had Zodiac personalities?

https://github.com/baturyilmaz/what-if-ai-agents-had-zodiac-personalities
7•arbayi•1h ago•1 comments

Anthropic: Developing a Claude Code competitor using Claude Code is banned

https://twitter.com/SIGKITTEN/status/2009697031422652461
226•behnamoh•5h ago•137 comments

Quake 1 Single-Player Map Design Theories (2001)

https://www.quaddicted.com/webarchive//teamshambler.planetquake.gamespy.com/theories1.html
40•Lammy•19h ago•2 comments

"Food JPEGs" in Super Smash Bros. & Kirby Air Riders

https://sethmlarson.dev/food-jpegs-in-super-smash-bros-and-kirby-air-riders
254•SethMLarson•5d ago•64 comments

"Scholars Will Call It Nonsense": The Structure of von Däniken's Argument (1987)

https://www.penn.museum/sites/expedition/scholars-will-call-it-nonsense/
50•Kaibeezy•5h ago•6 comments

I dumped Windows 11 for Linux, and you should too

https://www.notebookcheck.net/I-dumped-Windows-11-for-Linux-and-you-should-too.1190961.0.html
723•smurda•13h ago•685 comments

Meta announces nuclear energy projects

https://about.fb.com/news/2026/01/meta-nuclear-energy-projects-power-american-ai-leadership/
241•ChrisArchitect•6h ago•247 comments

C++ std::move doesn't move anything: A deep dive into Value Categories

https://0xghost.dev/blog/std-move-deep-dive/
226•signa11•2d ago•183 comments

iMessage-kit is an iMessage SDK for macOS

https://github.com/photon-hq/imessage-kit
21•rsync•3h ago•5 comments