
The struggle of resizing windows on macOS Tahoe

https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/
689•happosai•3h ago•318 comments

2026 is the year of self-hosting

https://fulghum.io/self-hosting
216•websku•3h ago•133 comments

This game is a single 13 KiB file that runs on Windows, Linux and in the Browser

https://iczelia.net/posts/snake-polyglot/
64•snoofydude•2h ago•23 comments

iCloud Photos Downloader

https://github.com/icloud-photos-downloader/icloud_photos_downloader
280•reconnecting•5h ago•144 comments

I Cannot SSH into My Server Anymore (and That's Fine)

https://soap.coffee/~lthms/posts/i-cannot-ssh-into-my-server-anymore.html
60•TheWiggles•4d ago•23 comments

FUSE is All You Need – Giving agents access to anything via filesystems

https://jakobemmerling.de/posts/fuse-is-all-you-need/
55•jakobem•3h ago•18 comments

Sampling at negative temperature

https://cavendishlabs.org/blog/negative-temperature/
105•ag8•4h ago•38 comments

I'm making a game engine based on dynamic signed distance fields (SDFs) [video]

https://www.youtube.com/watch?v=il-TXbn5iMA
160•imagiro•3d ago•21 comments

I'd tell you a UDP joke…

https://www.codepuns.com/post/805294580859879424/i-would-tell-you-a-udp-joke-but-you-might-not-get
70•redmattred•2h ago•23 comments

Don't fall into the anti-AI hype

https://antirez.com/news/158
538•todsacerdoti•14h ago•719 comments

Elo – A data expression language which compiles to JavaScript, Ruby, and SQL

https://elo-lang.org/
39•ravenical•4d ago•4 comments

Gentoo Linux 2025 Review

https://www.gentoo.org/news/2026/01/05/new-year.html
290•akhuettel•13h ago•143 comments

The Next Two Years of Software Engineering

https://addyosmani.com/blog/next-two-years/
40•napolux•2h ago•16 comments

A set of Idiomatic prod-grade katas for experienced devs transitioning to Go

https://github.com/MedUnes/go-kata
99•medunes•4d ago•13 comments

Insights into Claude Opus 4.5 from Pokémon

https://www.lesswrong.com/posts/u6Lacc7wx4yYkBQ3r/insights-into-claude-opus-4-5-from-pokemon
22•surprisetalk•5d ago•4 comments

A 2026 look at three bio-ML opinions I had in 2024

https://www.owlposting.com/p/a-2026-look-at-three-bio-ml-opinions
17•abhishaike•3h ago•1 comments

Perfectly Replicating Coca Cola [video]

https://www.youtube.com/watch?v=TDkH3EbWTYc
126•HansVanEijsden•3d ago•66 comments

Ask HN: What are you working on? (January 2026)

136•david927•8h ago•457 comments

Show HN: What if AI agents had Zodiac personalities?

https://github.com/baturyilmaz/what-if-ai-agents-had-zodiac-personalities
6•arbayi•57m ago•1 comments

BYD's cheapest electric cars to have Lidar self-driving tech

https://thedriven.io/2026/01/11/byds-cheapest-electric-cars-to-have-lidar-self-driving-tech/
103•senti_sentient•3h ago•110 comments

Anthropic: Developing a Claude Code competitor using Claude Code is banned

https://twitter.com/SIGKITTEN/status/2009697031422652461
221•behnamoh•5h ago•136 comments

Quake 1 Single-Player Map Design Theories (2001)

https://www.quaddicted.com/webarchive//teamshambler.planetquake.gamespy.com/theories1.html
37•Lammy•19h ago•1 comments

Rare Iron Age war trumpet and boar standard found

https://www.bbc.com/news/articles/cr7jvj8d39eo
6•breve•4d ago•0 comments

"Food JPEGs" in Super Smash Bros. & Kirby Air Riders

https://sethmlarson.dev/food-jpegs-in-super-smash-bros-and-kirby-air-riders
254•SethMLarson•5d ago•64 comments

Poison Fountain

https://rnsaffn.com/poison3/
157•atomic128•7h ago•103 comments

"Scholars Will Call It Nonsense": The Structure of von Däniken's Argument (1987)

https://www.penn.museum/sites/expedition/scholars-will-call-it-nonsense/
50•Kaibeezy•5h ago•6 comments

I dumped Windows 11 for Linux, and you should too

https://www.notebookcheck.net/I-dumped-Windows-11-for-Linux-and-you-should-too.1190961.0.html
717•smurda•13h ago•682 comments

iMessage-kit is an iMessage SDK for macOS

https://github.com/photon-hq/imessage-kit
19•rsync•2h ago•5 comments

C++ std::move doesn't move anything: A deep dive into Value Categories

https://0xghost.dev/blog/std-move-deep-dive/
225•signa11•2d ago•181 comments

Show HN: Engineering Schizophrenia: Trusting Yourself Through Byzantine Faults

29•rescrv•2h ago•6 comments

2026 is the year of self-hosting

https://fulghum.io/self-hosting
209•websku•3h ago

Comments

simonw•2h ago
This post lists inexpensive home servers, Tailscale, and Claude Code as the big unlocks.

I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.

The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.

Tailscale dramatically reduces this risk, because I can so easily configure it so my own devices can talk to my home server from anywhere in the world without the risk of exposing any ports on it directly to the internet.

Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.

philips•2h ago
I agree! Before Tailscale I was completely skeptical of self hosting.

Now I have Tailscale on an old Kindle downloading epubs from a server running Copyparty. It's great!

ryandrake•2h ago
Maybe I'm dumb, but I still don't quite understand the value-add of Tailscale over what Wireguard or some other VPN already provides. HN has tried to explain it to me but it just seems like sugar on top of a plain old VPN. Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.
Skunkleton•2h ago
Yes, that is really all it is.
mfcl•2h ago
It's plug and play.
Forgeties79•1h ago
And some people may not value that but a lot of people do. It’s part of why Plex has become so popular and fewer people know about Jellyfin. One is turnkey, the other isn’t.

I could send a one page bullet point list of instructions to people with very modest computer literacy and they would be up and running in under an hour on all of their devices with Plex in and outside of their network. From that point forward it’s basically like having your own Netflix.

Jtsummers•2h ago
I think you answered the question. Sugar. It's easier than managing your own Wireguard connections. Adding a device just means logging into the Tailscale client, no need to distribute information to or from other devices. Get a new phone while traveling because yours was stolen? You can set up Tailscale and be back on your private network in a couple minutes.

Why did people use Dropbox instead of setting up their own FTP servers? Because it was easier.

atmosx•2h ago
You don’t have to run the control plane and you don’t have to manage DNS & SSL keys for the DNS entries. Additionally the RBAC is pretty easy.

All of these are manageable through other tools, but it's a more complicated stack to keep up with.

Frotag•2h ago
I always assumed it was because a lot of ISPs use CGNAT and using tailscale servers for hole punching is (slightly) easier than renting and configuring a VPS.
Cyph0n•2h ago
It’s a bit more than sugar.

1. 1-command (or step) to have a new device join your network. Wireguard configs and interfaces managed on your behalf.

2. ACLs that allow you to have fine grained control over connectivity. For example, server A should never be able to talk to server B.

3. NAT is handled completely transparently.

4. SSO and other niceties.

For me, (1) and (2) in particular make it a huge value add over managing Wireguard setup, configs, and firewall rules manually.
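Point (2) can be sketched with a hypothetical tailnet policy file (HuJSON; the tags and rules here are made up for illustration). Tailscale denies anything not explicitly accepted, so "server A can never talk to server B" just means never listing it as a source:

```jsonc
// Hypothetical tailnet policy sketch. Anything not matched by an
// "accept" rule is denied, so tag:server-a has no path to tag:server-b.
{
  "tagOwners": {
    "tag:laptop":   ["autogroup:admin"],
    "tag:server-a": ["autogroup:admin"],
    "tag:server-b": ["autogroup:admin"],
  },
  "acls": [
    // Laptops may SSH to both servers; servers get no rules at all.
    {"action": "accept", "src": ["tag:laptop"], "dst": ["tag:server-a:22", "tag:server-b:22"]},
  ],
}
```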

drnick1•2h ago
> Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.

Speaking of that, I have always preferred a plain Unbound instance and a Samba server over fancier alternatives. I guess I like my setups extremely barebone.

ryandrake•1h ago
Yeah, my philosophy for self-hosting is "use the smallest amount of software you can in order to do what you really need." So for me, sugar X on top of fundamental functionality Y is always rejected in favor of just configuring Y.
simonw•2h ago
If you're confident that you know how to securely configure and use Wireguard across multiple devices then great, you probably don't need Tailscale for a home lab.

Tailscale gives me an app I can install on my iPhone and my Mac and a service I can install on pretty much any Linux device imaginable. I sign into each of those apps once and I'm done.

The first time I set it up that took less than five minutes from idea to now-my-devices-are-securely-networked.

zeroxfe•1h ago
> Plex is just sugar on top of file sharing.

right, like browsers are just sugar on top of curl

edoceo•1h ago
curl is just sugar on sockets ;)
SchemaLoad•1h ago
Tailscale is Wireguard but it automatically sets everything up for you, handles DDNS, can punch through NAT and CGNAT, etc. It's also running a Wireguard server on every device so rather than having a hub server in the LAN, it directly connects to every device. Particularly helpful if it's not just one LAN you are trying to connect to, but you have lots of devices in different areas.
dangoodmanUT•2h ago
definitely, but to be fair, beyond that it's just Linux. Most people would need Claude Code to get whatever they want to use Linux for running reliably (systemd services, etc.)
dangoodmanUT•2h ago
i'm still waiting for ECC minipcs, then i'll go all in on local DBs too
drnick1•2h ago
I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server, including SMTP, IMAP(S), HTTP(S), and various game servers, and I don't see a problem with that. I can't rule out a vulnerability somewhere, but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.

CSSer•2h ago
The answer is people who don't truly understand the way it works being in charge of others who also don't in different ways. In the best case, there's an under resourced and over leveraged security team issuing overzealous edicts with the desperate hope of avoiding some disaster. When the sample size is one, it's easy to look at it and come to your conclusion.

In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.

sauercrowd•2h ago
People are not full time maintainers of their infra though, that's very different to companies.

In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.

buildfocus•1h ago
Wireguard is _really_ simple in that sense though. If you're not doing anything complicated it's very easy to set up & maintain, and basically just works.

You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.
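For reference, the file such a router exports (or encodes in the QR code) is typically a small WireGuard client config along these lines; the keys, addresses, and hostname below are placeholders, not real values:

```ini
[Interface]
PrivateKey = <client private key>    # generated per device, e.g. with `wg genkey`
Address    = 10.8.0.2/24             # this device's VPN address
DNS        = 10.8.0.1                # optional: use the home DNS resolver

[Peer]
PublicKey           = <router public key>
Endpoint            = home.example.com:51820   # hypothetical DDNS name
AllowedIPs          = 0.0.0.0/0, ::/0          # full tunnel: route everything via home
PersistentKeepalive = 25                       # keep NAT mappings alive
```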

Topgamer7•2h ago
I don't have a static IP, so Tailscale is convenient. And it's less likely to fail when I really need it, as opposed to trying to deal with dynamic DNS.
heavyset_go•2h ago
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

This is what I do. You can do Tailscale like access using things like Pangolin[0].

You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.

> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.

This is what I don't do. Anything that needs real internet access like mail, raw web access, etc gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].

[0] https://github.com/fosrl/pangolin

[1] https://news.ycombinator.com/item?id=46136026

edoceo•1h ago
Is a container not enough isolation? I do SSH to the host (alt-port) and then services in containers (mail, http)
heavyset_go•55m ago
Depends on your risk tolerance.

I personally wouldn't trust a machine if a container was exploited on it; you don't know if there were any successful container escapes, kernel exploits, etc. Even if they escaped with only user permissions, they can fill your box with booby traps if they have container-granted capabilities.

I'd just prefer to nuke the VPS entirely and start over than worry if the server and the rest of my services are okay.

esseph•1h ago
With ports you have dozens or hundreds of applications and systems to attack.

With tailscale / zerotier / etc the connection is initiated from inside to facilitate NAT hole punching and work over CGNAT.

WireGuard removes a lot of that attack surface, but it wouldn't work behind CGNAT without a relay box.

SchemaLoad•1h ago
If you expose ports, literally everything you are hosting and every plugin is an attack surface. Most of this stuff is built by single hobbyist devs on the weekend. You are also exposed to any security mistakes you make in your configuration. My first attempt at self-hosting, I had Redis compromised because I didn't realise I had exposed it to the internet with no password.

Behind a VPN your only attack surface is the VPN which is generally very well secured.

sva_•1h ago
You exposed your redis publicly? Why?

Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through WireGuard. NEVER expose this publicly, even if you don't have admin:admin credentials.

SchemaLoad•49m ago
I actually didn't know I had. At the time I didn't properly know how docker networking worked and I exposed redis to the host so my other containers could access it. And then since this was on a VPS with a dedicated IP, this made it exposed to the whole internet.

I now know better, but there are still a million other pitfalls to fall in to if you are not a full time system admin. So I prefer to just put it all behind a VPN and know that it's safe.

drnick1•14m ago
> but there are still a million other pitfalls to fall in to if you are not a full time system admin.

Pro tip: After you configure a new service, review the output of ss -tulpn. This will tell you what ports are open. You should know exactly what each line represents, especially those that bind on 0.0.0.0 or [::] or other public addresses.

The pitfall that you mentioned (Docker automatically punching a hole in the firewall for the services that it manages when an interface isn't specified) is discoverable this way.
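A hedged docker-compose sketch of that exact pitfall and its fix (the service and image are illustrative):

```yaml
services:
  redis:
    image: redis:7-alpine
    # Pitfall: a plain "6379:6379" mapping publishes on 0.0.0.0 and Docker
    # inserts its own iptables rules, bypassing typical host firewalls.
    # Prefixing the loopback address keeps the port host-only:
    ports:
      - "127.0.0.1:6379:6379"
    # Better still: omit `ports` entirely and let sibling containers reach
    # this service by name over the compose-internal network.
```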

buran77•59m ago
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.

> I am not sure why people are so afraid of exposing ports

It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

> It's the way the Internet is meant to work.

Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.

lmm•26m ago
> It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.

zamadatix•56m ago
It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.

There was a popular post about exactly this less than a month ago: https://news.ycombinator.com/item?id=46305585

I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.

SoftTalker•49m ago
I just run an SSH server and forward local ports through that as needed. Simple (at least to me).
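A hedged sketch of that approach as an `~/.ssh/config` entry (the alias, hostname, port, and forwarded service are hypothetical):

```
Host homeserver                        # hypothetical alias
    HostName home.example.com          # hypothetical address
    Port 2222                          # non-default SSH port, if you use one
    LocalForward 8096 localhost:8096   # e.g. expose a Jellyfin UI locally
```

With that in place, `ssh homeserver` brings the forwarded port up on demand.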
drnick1•32m ago
> There was a popular post less than a month ago about this recently https://news.ycombinator.com/item?id=46305585

This incident precisely shows that containerization worked as intended and protected the host.

Etheryte•51m ago
Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there. No matter what I'm hosting, it's a lot more convenient to not have to worry about that even for a second.
drnick1•39m ago
> Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group , and these days, AI company out there

Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2ban.
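The Fail2ban suggestion amounts to a small jail override; a minimal sketch (the thresholds are illustrative, not a recommendation):

```ini
# /etc/fail2ban/jail.local -- minimal sketch for SSH brute-force attempts
[sshd]
enabled  = true
maxretry = 5      # ban after 5 failed logins...
findtime = 10m    # ...within a 10-minute window
bantime  = 1h     # drop the source IP for an hour
```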

Frotag•41m ago
Speaking of Wireguard, my current topology has all peers talking to a single peer that forwards traffic between peers (for hole punching / peers with dynamic ips).

But some peers are sometimes on the same LAN (eg phone is sometimes on same LAN as pc). Is there a way to avoid forwarding traffic through the server peer in this case?

wooptoo•7m ago
Two separate WG profiles on the phone; one acting as a Proxy (which forwards everything), and one acting just as a regular VPN without forwarding.
comrade1234•2h ago
I just have a vpn server on my fiber modem/router (edgerouter-4) and use vpn clients on my devices. I actually have two vpn networks - one that can see the rest of my home network (and server) and the other that is completely isolated and can't see anything else and only does routing. No need to use a third-party and I have more flexibility
PaulKeeble•1h ago
It's especially important in the CGNAT world that has been created, given the enormous slog that the IPv6 rollout has ultimately become.
shadowgovt•1h ago
Besides the company that operates it, what is the big difference between Tailscale and Cloudflare tunnels? I've seen Tailscale mentioned frequently but I'm not quite sure what it gets for me. If it's more like a VPN, is it possible to use on an arbitrary device like a library kiosk?
ssl-3•46m ago
I don't use Cloudflare tunnels for anything.

But Tailscale is just a VPN (and by VPN, I mean something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.

Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.

But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.

SchemaLoad•1h ago
Yeah same story for me. I did not trust my sensitive data on random self hosting apps with no real security team. But now I can put the entire server on the local network only and split tunnel VPN from my devices and it just works.

LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.

Melatonic•27m ago
Why not cloudflare tunnels ?
cmiles8•2h ago
Anyone serious about tech should have a homelab. It's a small capital investment that lasts for years, and with Proxmox or similar, having your own personal "private cloud" on demand is simple.
e2e4•2h ago
My stack: Claude Code working via CLIs; Coolify on Hetzner.
Humorist2290•2h ago
Fun. I don't agree that Claude Code is the real unlock, but mostly because I'm comfortable doing this myself. That said, the spirit of the article is spot on. Running _good_ web services has never been more accessible. If you have a modest budget and an interest, that's enough -- the skill gap is closing. That's good news, I think.

But Tailscale is the real unlock in my opinion. Having a slot machine cosplaying as sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family if they can get past an app install and login.

I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.

heavyset_go•2h ago
I believe Vaultwarden keeps data encrypted at rest with your master key, so some of the problems inherent to hosting such data can be mitigated.
Humorist2290•2h ago
I can believe this, and it's a good point. I believe Bitwarden does the same. I'm not against Vaultwarden in particular but against colocation of highly sensitive (especially orthogonally sensitive) data in general. It's part of a self-hoster's journey I think: backups, isolation, security, redundancy, energy optimization, etc. are all topics which can easily occupy your free time. When your partner asks whether your photos are more secure in Immich than Google, it can lead to an interesting discussion of nuances.

That said, I'm not sure if Bitwarden is the answer either. There is certainly some value in obscurity, but I think they have a better infosec budget than I do.

Gualdrapo•2h ago
One day, when I have some extra bucks, I'll try to get a home server running, but the idea of having something eating grid electricity 24/7 doesn't seem to play along well with this 3rd-world budget. Are there some foolproof and not-so-costly off-grid/solar setups to look at (like a Raspberry-based thingy or similar)?
noname120•1h ago
Mac Mini (M1 and later) under Asahi Linux just uses 5 W for a normal workload. If you push it to 100% of CPU it reaches 20 W. That’s very little.
SchemaLoad•54m ago
Only thing is you can't run Proxmox, which makes self-hosting much better, and you'll be limited to ARM builds, which on a server is at least a lot easier to deal with than trying to run desktop apps. Modern micro desktops are also fairly power efficient; perhaps not quite as low as the Mac, but much lower than a regular gaming desktop idling.

Avoid stacking in too many hard drives since each one uses almost as much power as the desktop does at idle.

atahanacar•39m ago
I doubt anyone who is too tight on cash that they have to think about the electricity cost of a home server can afford a Mac.
imiric•53m ago
Your fridge and other home appliances likely use much more power than whatever a small server would. The mini PC in the article is very power efficient. You likely won't notice it in your power bill, regardless of your budget. You could go with a solar-powered setup if you prefer, but IMO for this type of use case it would be overengineering.
efilife•2h ago
how many times will I get clickbaited by some cool title only to see AI praise in the article and nothing more? It's tiring and happens way too often

related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/

Also the "Why it matters" in the article. I thought it was a jab at AI-generated articles, but it starts to look like the article was AI-written as well.

keybored•2h ago
Everything is now not-niche but on the cusp of hitting the mainstream. Like Formal Methods.[1] But they were nice enough to put it in the title. Then tptacek replied that he “called it a little bit” because of: Did Semgrep Just Get A Lot More Interesting?[2] (Why? What could the reason be?)

[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...

[2]: https://fly.io/blog/semgrep-but-for-real-now/

jacobthesnakob•36m ago
Maybe because I don’t do SWE for my job, but I have fun writing docker-compose files, troubleshooting them, and adding containers to my server. Then I understand how/why stuff works if it breaks, why would I want to hand that over to an AI?

Waiting for the follow-on article “Claude Code reformatted my NAS and I lost my entire media collection.”

chasing0entropy•27m ago
ROFL. There have been at least two posts about Claude deleting a repository without confirmation, and one where it wiped an entire partition.
sprainedankles•2h ago
Impeccable timing, I finally got around to putting some old hardware to use and getting a home assistant instance (and jellyfin, and immich, and nextcloud, ...) set up over winter break. Claude (and tailscale) saved hours of my time and enabled me to build enough momentum to get things configured. It's now feasible for me to spend 15-20 minutes knocking down homeserver tasks that I otherwise would've ignored. Quite fun!
hinkley•2h ago
What I’d really like is to run the admin interface for an app on a self hosted system behind firewalls, and push read replicas out into the cloud. But I haven’t seen a database where the master pushes data to the replicas instead of the replicas contacting the master. Which creates some pretty substantial tunneling problems that I don’t really want on my home network.

Is there a replica implementation that works in the direction I want?

chasing0entropy•32m ago
Use NAT hole punching if you're advanced, or you could fall back to IP/port filtering
bakies•23m ago
Tailscale will take care of the networking if you install it in both locations.
reachableceo•2h ago
Cloudron makes this even easier. Well worth 1.00 a day! It handles the entire stack (backups, monitoring, DNS, SSL, updates).
sciences44•2h ago
Interesting subject, thank you! I have a cluster of 2 Orange Pis (16 GB RAM each) plus a Raspberry Pi. I think it's high time to get them back on my desk. I never got very far with the setup due to a lack of time; it took so long to write the Ansible scripts/playbooks. But with Claude Code, it's worth a try now. So thanks for the article; it makes me want to dust it off!
atmosx•2h ago
Just make sure you have a local and remote backup server.

From time to time, test the restore process.
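A minimal restore-test sketch, with tar standing in for whatever backup tool you actually use (restic, borg, etc.); the paths are throwaway scratch directories:

```shell
# Back up a directory, restore it somewhere else, and verify the two match.
set -eu
src=$(mktemp -d)
dst=$(mktemp -d)
echo "important data" > "$src/notes.txt"

tar -czf "$src.tgz" -C "$src" .   # take the backup
tar -xzf "$src.tgz" -C "$dst"     # restore it to a different location
diff -r "$src" "$dst"             # empty diff + exit 0 = restore verified
```

The same shape works with a real backup tool: restore to a scratch location on a schedule and diff (or at least spot-check) against the live data.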

cafebeen•2h ago
This is great and echoes my experience, although I would add the caveat that it mostly applies to solo work. Once you need to collaborate or operate on a team, many of the limits of self-hosting return.
holyknight•2h ago
not with these hardware prices...
SchemaLoad•52m ago
Second hand micro desktops are still cheap, at least for now.
jackschultz•2h ago
I literally did this yesterday and had the same thought. Older computer (8 gigs of RAM) with a crappy Windows install I never used, and I thought: huh, I wonder how well these models can take me through installing Linux, with the goal of Docker deploys of relatively basic things like cron tasks, a personal Postgres, and MinIO that I can use for my own shared data.

It took a couple hours with some snags I ran across, but the model had me go through the setup for Debian: how to work through the setup GUI, what to check to make it server-only. Then it took me through commands to run so it wouldn't stop when I closed the laptop, helped with Tailscale, and got the SSH keys all set up. Heck, it even suggested doing daily dumps of the database, saving them to MinIO, and then removing them after that. It also knows about the limitations of 8 gigs of RAM and how to make sure Docker settings for the different self-hosted services I want to build don't cause issues.

Give me a month, strong intentions, and the ability to google and read posts and find the answers on my own, and I still don't think I would have gotten to this point with the amount of trust I have in the setup.

I very much agree with this topic about self hosting coming alive because these models can walk you through everything. Self building and self hosting can really come alive. And in the future when open models are that much better and hardware costs come down (maybe, just guessing of course) we'll be able to also host our own agents on these machines we have setup already. All being able to do it ourselves.

notesinthefield•2h ago
I find myself a bit overwhelmed with hardware options during recent explorations. Seemingly everything can handle what I want: a local copy of my Bandcamp archive to stream via Jellyfin. Good times we're in, but even having good sysadmin skills, I wish someone would just tell me exactly what to buy.
devonhk•1h ago
> I wish someone would just tell me exactly what to buy.

I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or from local used computer stores. Lenovo ThinkCentres (e.g., m700 tiny) are also a great option if you prefer something with a smaller form factor.

I’d recommend disregarding advice from non-technical folks recommending brand new, expensive hardware, because it’s usually overkill.

SchemaLoad•57m ago
I spent so long trying to make Raspberry Pis work, but they just kind of suck and everything is harder on them. I only just discovered that there's an infinite supply of these micro desktops second hand from offices/government. I was able to pick up a 9th-gen Intel with 16GB RAM for less than the cost of a Pi 5, and it's massively more powerful.
jacobthesnakob•44m ago
Pis are incredible little basic home servers, but they can't handle transcoding. Great option for places with very expensive electricity too.
SchemaLoad•42m ago
I just found their proprietary hardware and being ARM too limiting. I wanted to set up full disk encryption to set up nextcloud on, and found that on the pi this is an incredibly complex process. While on an x86 PC it's just a checkbox on install.

And then you can only use distros which have a raspberry pi specific build. Generic ARM ones won't work.

jacobthesnakob•28m ago
Yeah, the complaints are fair. I stick to RPi OS for maximum compatibility. People have been crying for a Google Drive client for Linux for over a decade, but you still have to set it up in rclone.

I build out my server in Docker and I’ve been surprised that every image I’ve ever wanted to download has an ARM image.

devonhk•23m ago
Yeah, they’re amazing value. I paid $125 CAD for a 4th gen i7 with 16GB of RAM about 5 years ago. It’s been running almost 24/7 ever since with no issues.
notesinthefield•32m ago
I forgot all about these after I stopped doing desktop support, thanks!
bicepjai•2h ago
I feel the same way. I now have around 7 projects hosted on a home server with Coolify + Cloudflare. I always worry about security, and I have seen many posts related to self-hosting trending on HN recently.
SchemaLoad•54m ago
For security just don't expose the server to the internet. Either set up wireguard or tailscale. You can set it up in a split tunnel config so your phone only uses the VPN for LAN requests.
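In plain WireGuard terms, that split-tunnel setup differs from a full tunnel only in `AllowedIPs`; a hypothetical client `[Peer]` section (key, hostname, and subnet are placeholders):

```ini
[Peer]
PublicKey = <server public key>
Endpoint  = home.example.com:51820   # hypothetical DDNS name
# Split tunnel: only the home LAN is routed through the VPN; everything
# else leaves via the phone's normal connection.
AllowedIPs = 192.168.1.0/24
```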
easterncalculus•2h ago
Nice. This is a great start. The next steps are backups and regular security updates. The former is probably pretty easy with Claude and a provider like Backblaze, for updates I wonder if "check for security issues with my software and update anything in need" will work well (and most importantly, how consistently). Alternatively, getting the AI to threat model and perform any docker hardening measures.

Then someday we self-host the AI itself, and it all comes together.

zrail•1h ago
My security update system is straightforward but it took quite a lot of thought to get here.

My self hosted things all run as docker containers inside Alpine VMs running on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server along with a runner in a separate VM. I have a single command that will deploy everything along with a Forgejo action that invokes that command on a push to main.

I then have Renovate running periodically set to auto-merge patch-level updates and tag updates.

Thus, Renovate keeps me up to date and git keeps everyone honest.
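(The auto-merge piece is only a few lines of Renovate config. Roughly this, though my exact rules differ:)

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "digest"],
      "automerge": true
    }
  ]
}
```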

StrLght•2h ago
> Your home server's new sysadmin: Claude Code

(In)famous last words?

comrade1234•2h ago
Prices are going to have an effect here. I have a 76TB backup array of 8 drives. A few months ago one of my 10TB drives failed and I replaced it with a 12TB WD Gold for 269CHF. I was thinking of building a new backup array (for fun), so I priced the same drive, and now it's 409CHF.

It's not tariffs (I'm in Switzerland). It's 100% the buildout of data centers for AI.

benzguo•2h ago
Great post! Totally agree – agents like Claude Code make self-hosting a lot more realistic and low maintenance for the average dev.

We've gone a step further, and made this even easier with https://zo.computer

You get a server, and a lot of useful built-in functionality (like the ability to text with your server)

danpalmer•2h ago
There's something ironic about using Claude Code – a closed source service, that you can't self-host the hardware for, and that you can't get access to the data for – to self-host so that you can reduce your dependencies on things.
SchemaLoad•1h ago
Before you had to rely on blog posts and reddit for information, something you also couldn't self host. And if you are just asking it questions and taking actions yourself, you are learning how it works to do it yourself next time.
danpalmer•44m ago
Or you could read man pages, ask people for help, read books... all of which are more closely aligned with self-hosting than outsourcing the whole process.

I agree you could use LLMs to learn how it works, but given that they explain and do the actions, I suspect the vast majority aren't learning anything. I've helped students who are learning to code, and very often they just copy/paste back and forth and ignore the actual content.

SchemaLoad•39m ago
Sure, you could. But this isn't my job, it isn't my career. I just want Nextcloud running on a machine at home. I know linux and docker well enough to validate the ideas coming out of Gemini, and it helps me find stuff much faster than if I had to read man pages or read books.

And I find the stuff that the average self hoster needs is so surface level that LLMs flawlessly provide solutions.

danpalmer•17m ago
My pushback isn't really on the possibility, it's on the irony. Self-hosting is for many an ideological act about reducing dependencies on big tech, removing surveillance, etc. LLMs are essentially the antithesis of this.

If you're self hosting for other reasons then that's fine. I self host media for various reasons, but I also give all my email/calendar/docs/photos over to a big tech company because I'm not motivated by that aspect.

chaz6•2h ago
I would really like some kind of agnostic backup protocol, so I can simply configure my backup endpoint using an environment variable (e.g. `-e BACKUP_ENDPOINT=https://backup.example.com/backup -e BACKUP_IDENTIFIER=xxxxx`), then the application can push a backup on a regular schedule. If I need to restore a backup, I log onto the backup app, select a backup file and generate a one time code which I can enter into the application to retrieve the data. To set up a new application for backups, you would enter a friendly name into the backup application and it would generate a key for use in the application.
dangus•2h ago
I use Pika Backup which runs on the BorgBackup protocol for backing up my system’s home directory. I’m not really sure if this is exactly what you’re talking about, though. It just sends backups to network shares.
Waterluvian•2h ago
Maybe apps could offer backup to stdout and then you pipe it. That way each app doesn’t have to reason about how to interact with your target, doesn’t need to be trusted with credentials, and we don’t need a new standard.
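A minimal sketch of what I mean, with tar standing in for a hypothetical app's backup-to-stdout command (paths are illustrative):

```shell
# An app that can "backup to stdout" composes with anything.
# tar plays the role of the app here; the pipe target could just as
# well be `| ssh nas 'cat > backup.tar.gz'` or `| restic backup --stdin`,
# and the app never has to hold the backup credentials.
DATA_DIR=$(mktemp -d)
echo "hello" > "$DATA_DIR/notes.txt"

# Backup: stream the data dir to wherever you point the pipe.
tar -C "$DATA_DIR" -czf - . > /tmp/app-backup.tar.gz

# Restore: the same pipe, reversed.
RESTORE_DIR=$(mktemp -d)
tar -C "$RESTORE_DIR" -xzf - < /tmp/app-backup.tar.gz
```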
PaulKeeble•1h ago
At the moment I docker compose down everything, run the backup of their files, and then docker compose up -d again afterwards. This sort of downtime in the middle of the night isn't an issue for home services, but it's also not an ideal system; then again, nothing will be mid-write at backup time anyway, because it's the middle of the night! But if I don't do it, the one time I need those files I can guarantee they'll be corrupted, so at the moment I don't feel like there are a lot of other options.
elemdos•1h ago
I’ve also found AI to be super helpful for self-hosting but in a different way. I set up a Pocketbase instance with a Lovable-like app on top (repo here: https://github.com/tinykit-studio/tinykit) so I can just pull out my phone, vibecode something, and then instantly host it on the one server with a bunch of other apps. I’ve built a bunch of stuff for myself (journal, CRM, guitar tuner) but my favorite thing has been a period tracker for a close friend who didn’t want that data tracked + sold.
1shooner•1h ago
Others here mention Coolify for a homeserver. If you're looking for turnkey docker-compose based apps rather than just framework/runtime environments, I will recommend the runtipi project. I have found it to be simple and flexible. It offers an 'app store' like interface, and supports hosting your own app store. It manages certs and reverse proxy via traefik as well.

https://runtipi.io/

indigodaddy•1h ago
Cosmos Cloud is great too. I use it on a free tier OCI Ampere 24G VM

https://cosmos-cloud.io/

austin-cheney•1h ago
I have found that storage is up in price more than 60% from last year.

I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio

tezza•1h ago
Wait… tailscale connection to your own network, and unsupervised sysadmin from an oracle that hallucinates and bases its decisions on blog post aggregates?

p0wnland. this will have script kiddies rubbing their hands

asciii•1h ago
Hope OP has nice neighbors, because sharing that password is basically handing over the keys to the kingdom
amelius•1h ago
> The reason is simple: CLI agents like Claude Code make self-hosting on a cheapo home server dramatically easier and actually fun.

But I want to host an LLM.

shamiln•1h ago
Tailscale was never the unlock for me, but I guess I was never the typical use case here.

I have a 1U (or more), sitting in a rack in a local datacenter. I have an IP block to myself.

Those servers are publicly reachable, with only a few ports open for mail, HTTP traffic, and SSH (for Git).

I guess my use case also differs in that I don't host things just for me to consume; select others can consume services I host.

My definition of self-hosting here isn't that only I can access my services; that'd just be me having a server at home with some non-critical things on it.

zrail•44m ago
Curious how long you've been sitting on the IP block. I've been nosing around getting an ASN to mess around with the lower level internet bones but a /24 is just way too expensive these days. Even justifying an ASN is hard, since the minimum cost is $275/year through ARIN.
bakies•25m ago
Is that the minimum for an ASN? A /24 is a lot of public IP space! I'd expect to just get a static IP from an ISP if I were to colo like this
zrail•12m ago
The minimum publicly routable IPv4 subnet is /24 and IPv6 is /48. IPv6 is effectively free, there are places that will lease a /48 for $8/year, whereas as far as I can tell it's multiple thousands of USD per year to acquire or lease a /24 of IPv4.
wswin•1h ago
Home NAS servers already ship with a user-friendly GUI. Personally I haven't used them, but I'd certainly prefer that, or recommend it to tech-illiterate people, over letting an LLM manage the server.
zebnyc•1h ago
Basic question: if I wanted a simple self-hosting solution for a bot with a database, what is the simplest solution/provider I can go with? This bot is just for me and doesn't need to be accessible to the general public.

Thanks

chasing0entropy•26m ago
Ask chatGPT bro
cryptica•1h ago
I started self-hosting after noticing that my AWS bill increased from like $300 per month to $600 per month within a couple of years. When looking at my bill, 3/4 of the cost was 'AWS Other'; mostly bandwidth. I couldn't understand why I was paying so much for bandwidth given that all my database instances ran on the same host as the app servers and I didn't have any regular communication between instances.

I suspect it may have been related to the Network File System (NFS)? Like whenever I read a file on the host machine, it goes across the data-center network and charges me? Is this correct?

Anyway, I just decided to take control of those costs. Took me 2 weeks of part-time work to migrate all my stuff to a self-hosted machine. I put everything behind Cloudflare with a load balancer. Was a bit tricky to configure as I'm hosting multiple domains from the same machine. It's a small form factor PC tower with 20 CPU cores; easily runs all my stuff though. In 2 months, I already recouped the full cost of the machine through savings in my AWS bill. Now I pay like $10 a month to Cloudflare and even that's basically an optional cost. I strongly recommend.

Anyway it's impressive how AWS costs had been creeping slowly and imperceptibly over time. With my own machine, I now have way more compute than I need. I did a calculation and figured out that to get the same CPU capacity (no throttling, no bandwidth limitations) on AWS, I would have to pay like $1400 per month... But amortized over 4 years my machine's cost is like $20 per month plus $5 per month to get a static IP address. I didn't need to change my internet plan other than that. So AWS EC2 represented a 56x cost factor. It's mind-boggling.

I think it's one of these costs that I kind of brushed under the carpet as "It's an investment." But eventually, this cost became a topic of conversation with my wife and she started making jokes about our contribution to Jeff Bezos' wife's diamond ring. Then it came to our attention that his megayacht is so large that it comes with a second yacht beside it. Then I understood where he got it all from. Though to be fair to him, he is a truly great businessman; he didn't get it from institutional money or complex hidden political scheme; he got it fair and square through a very clever business plan.

Over the 5 years or so that I've been using AWS, the costs had been flat. Meanwhile the cost of the underlying hardware had dropped to like 1/56th... and I didn't even notice. Is anything more profitable than apathy and neglect?

jdsully•22m ago
The most likely culprit was talking to other nodes via their public IPs instead of their local ones. That gets billed as internet traffic (the most expensive). The second culprit is your database or other nodes being in different AZs, which incurs a cross-zone bandwidth charge.

Bandwidth inside the same zone is free.

dwd•1h ago
Been self-hosting for the last 20 years, and I'd say LLMs are good for generating suggestions when debugging an issue I hadn't seen before, or for one I had seen before but was looking for a quicker fix. I've used them to generate bash scripts and firewall regexes.

On self-hosting: be aware that it is a warzone out there. Your IP address will be probed constantly for vulnerabilities, and even the probes themselves need to be dealt with, as most automated probes don't throttle and can impact your server. That's probably my biggest issue, along with email deliverability.

MrDarcy•1h ago
The best solution I’ve found for probes is to put all my eggs into one basket listening on 443.

Haproxy with SNI routing was simple and worked well for many years for me.

Istio installed on a single node Talos VM currently works very well for me.

Both have sophisticated circuit breaking and ddos protection.

For users I put admin interfaces behind wireguard and block TCP by source ip at the 443 listener.

I expose one or two things to the public behind an oauth2-proxy for authnz.

Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.
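(The SNI-routing piece is only a handful of lines of HAProxy config. An illustrative sketch; hostnames and backend ports are made up:)

```
frontend tls_in
    mode tcp
    bind :443
    # Wait for the TLS ClientHello so the SNI field is available.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend git_tls if { req_ssl_sni -i git.example.com }
    default_backend web_tls

backend web_tls
    mode tcp
    server web 127.0.0.1:8443

backend git_tls
    mode tcp
    server git 127.0.0.1:3001
```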

aaronax•1h ago
And use a wildcard cert so that all your services don't get probed via cert transparency logs.
syndacks•1h ago
Can the same thing be said for using docker compose etc. on a VPS to host a web app? I.e., can you get the ergonomics/ease of Fly or Render?

Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points:

1. Fear of misconfiguring Linux
2. Fear of Docker / Compose complexity
3. Fear of “what if it breaks at 2am?”

CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).

So the tradeoff has changed from:

“Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”

recvonline•1h ago
I started the same project end of last year and it’s true - having an LLM guide you through the setup and writing docs is a real game changer!

I just wish this post weren't written by an LLM! I miss the days when you could feel the nerdy joy through words across the internet.

chasd00•1h ago
What I do at home is Ubuntu on a cheap small computer I found on eBay. ufw blocks everything except 80, 443, and 22. Set up ssh to not use passwords and ensure nginx+letsencrypt doesn't run as root. Then forward 80 and 443 from my home router to the server so it's reachable from the internet. That's about it; now I have an internet-accessible reverse proxy to surface anything running on that server. The computers on the same LAN (just my laptop, basically) have hosts-file entries for the server. My registrar handles DNS for the external side (the router's public IP). SSH'ing to the server requires a LAN IP, but that's no big deal; I'm at home whenever I'm working on it anyway.
dizhn•57m ago
Put wireguard on that thing and don't expose anything else on your public IP. Better yet, don't have a public IP. Just forward the wireguard port on your router. That's it. No firewall, no nothing. Not even accidental exposure.
nick2k3•56m ago
All fine and great with Tailscale until your company places an iOS restriction on external VPNs and your work phone is also your primary phone :(
jacobthesnakob•50m ago
My work WiFi blocked traffic to port 51820, the default WireGuard port. I was wondering why my VPN started failing to handshake one day. I changed my port to 51821 that night and was back in business. I checked our technology policy and there's no "thou shalt not use a VPN" clause, so I have no clue why someone one day decided to drop WireGuard traffic on the network.
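(For anyone in the same boat, it's one line on each side of the tunnel; 51821 here is arbitrary and the hostname is a placeholder:)

```ini
# Server side (wg0.conf)
[Interface]
ListenPort = 51821

# Client side
[Peer]
Endpoint = home.example.com:51821
```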
ivanjermakov•47m ago
Usually you can ask for a separate phone for work. I can't stand when personal devices are poisoned with Intune and other company crap.
CuriouslyC•56m ago
Tailscale is pretty sweet. Cloudflare WARP is also pretty sweet, a little clunkier but you get argo routing for free and I trust Cloudflare for security.
JodieBenitez•55m ago
So it's self hosting but with a paid and closed saas dependency ? I'll pass.
RicoElectrico•49m ago
I just use Proxmox on Optiplex 3060 micro. On it, a Wireguard tunnel for remote admin. The ease of creating and tearing down dedicated containers makes it easy to experiment.
fhennig•43m ago
I think it's great that people are getting into self-hosting, but I don't think it's _the_ solution to get us off of big tech.

Having others run a service for you is a good thing! I'd love to pay a subscription for a service run as a cooperative, where I'm not just paying a subscription fee; instead I'm a member and I get a say in what gets done as well.

This model works so well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly, rents are kept low, the building is kept intact, no unnecessary expensive stuff added. And most importantly, no worries of the building ever getting sold and things going south. That's what I would like for my cloud storage, e-mail etc.

nojs•35m ago
This post is spot on, the combo of tailscale + Claude Code is a game changer. This is particularly true for companies as well.

CC lets you hack together internal tools quickly, and Tailscale means you can safely deploy them without worrying about hardening the app and server against the outside world. And Tailscale ACLs let you fully control who can access which services.

It also means you can literally host the tools on a server in your office, if you really want to.

Putting CC on the server makes this setup even better. It's extremely good at system admin.

fassssst•29m ago
Umm, what happened to zero trust? Network security is not sufficient.
thrownawaysz•22m ago
I went down the self host route some years ago but once critical problems hit I realized that beyond a simple NAS it can be a very demanding hobby.

I was in another country when there was a power outage at home. My internet went down; the server restarted but couldn't reconnect anymore, because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, it just didn't seem worth the hassle anymore. Add a UPS, okay. But why not add a dual-WAN failover router for extra security in case the internet goes down again? Etc. It's a bottomless pit (like most hobbies, tbh)

cyberax•16m ago
A long time ago, it was popular for ISPs to offer a small amount of space for personal websites. We might see a resurgence of this, but with cheap VPSes. Eventually.
didntknowyou•19m ago
idk, exposing your home network to the world and trusting AI to produce secure code is not a risk I want to take
Dbtabachnik•13m ago
How is readcheck any different than using raindrop.io?