

Why Self-Host?

https://romanzipp.com/blog/why-a-homelab-why-self-host
157•romanzipp•2h ago

Comments

neko_lover•1h ago
Interesting to find out there are self-hostable location tracking solutions as a replacement for Google location services and the like!
simonw•1h ago
The existence of Tailscale has made me a lot less scared of self-hosting than I used to be, since it provides a method of securing access that's both robust and easy to set up.

... but I still worry about backups. Having encrypted off-site backups is essential for this to work, and they need to be frequently tested as well.

There are good tools for that too (I've had good experiences with restic to Backblaze B2), but assembling them is still a fair amount of overhead, and making sure they keep working needs discipline that I may want to reserve for other problems!
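A minimal sketch of such a setup, assuming an S3-compatible bucket; the endpoint, bucket, and paths are placeholders:

    # point restic at an S3-compatible bucket; keep the key out of shell history
    export RESTIC_REPOSITORY=s3:s3.us-west-000.backblazeb2.com/my-backups
    export RESTIC_PASSWORD_FILE=~/.restic-password

    restic init              # once, to create the encrypted repository
    restic backup /srv/data  # data is encrypted client-side before upload
    restic check             # verify repository integrity
    restic forget --keep-daily 7 --keep-weekly 4 --prune  # rotate old snapshots

The "keep testing" part is the restic check, plus ideally an occasional restic restore into a scratch directory.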

cls59•1h ago
The control plane of Tailscale can even be self-hosted via the Headscale project:

https://github.com/juanfont/headscale

As for backups, I like both https://github.com/restic/restic and https://github.com/kopia/kopia/. Encryption is done client-side, so the only thing the offsite host receives is encrypted blobs.

fundatus•1h ago
For anyone looking for a convenient way to set restic up: Backrest[1] provides a docker container and a web interface to configure, monitor and restore your restic backups.

[1] https://github.com/garethgeorge/backrest

romanzipp•1h ago
That's right. I also haven't solved the backup problem perfectly, but I'd love to dive deeper into it in the future. Being well-tested is probably the most important aspect here.
Sanzig•1h ago
I'm currently using Restic + Backblaze, but I'm building a new NAS with OpenZFS. My plan is to use ZFS send to back up whole datasets automatically. I was thinking of giving zfsbackup-go [1] a try, since it allows using ZFS send with any S3 object storage provider. No idea how well it'll work, but I'll give it a shot.

[1] https://github.com/someone1/zfsbackup-go
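For reference, the plain ZFS-to-ZFS version of that idea is just a snapshot plus an incremental send; the pool, dataset, and host names below are made up:

    # take today's snapshot, then send only the delta since yesterday's
    zfs snapshot tank/data@2025-10-09
    zfs send -i tank/data@2025-10-08 tank/data@2025-10-09 | \
        ssh backup-host zfs receive -F backuppool/data

Tools like zfsbackup-go essentially replace the receiving side with uploads to S3-style object storage.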

smiley1437•1h ago
I value my time as well; that's why I have two Synology devices, one at my home, one at my sibling's home.

Both are on Tailscale, and we use Hyper Backup between them.

It was very easy to set up and provides offsite backups for both of us.

Synology very recently (a day ago) decided to allow 3rd party drives again with DSM 7.3.

move-on-by•1h ago
I do as much self-hosting as I can, but at the end of the day it requires buy-in from all users to be effective; it can create a lot of friction otherwise. I've accepted it's just not going to happen.

The absolute most important item (IMO) is photos, which I frankly do not trust Apple's syncing logic not to screw up at some point. I've taken the approach that my self-hosting _is_ the backup. If they lock me out or just wipe everything, no problem: I have it all backed up. If the house burns down, everything is still operational.

jasode•59m ago
>... but I still worry about backups.

For me, it's not just off-site backups, it's also the operational risks if I'm not around which I wrote about previously: https://news.ycombinator.com/item?id=39526863

In addition to changing my mind about self-hosting email, my most recent adventure was self-hosting Bitwarden/Vaultwarden for password management. I got everything to work (SSL certificates, re-startable container scripts to survive server reboots, etc.) ... but I didn't like the resultant complexity. There was also random unreliability, because a new iOS client would break Vaultwarden and you'd have to go to GitHub and download the latest bugfix. There's no way for my friend to manage that setup. She didn't want to pay for a 1Password subscription, so we switched to KeePass.

I'm still personally ok with self-hosting some low-stakes software like a media server where outages don't really matter. But I'm now more risk-averse with self-hosting critical email and passwords.

EDIT to reply: >Bitwarden client works fine if server goes down, you just can't edit data

I wasn't talking about the scenario of a self-hosted Vaultwarden being temporarily down. (Although I also didn't like that the smartphone clients only work for 30 days offline [1], which was another factor in the decision not to stay on it.)

Instead, the issue is that Bitwarden will make changes to both their iOS client and their own "official" Bitwarden servers that are incompatible with Vaultwarden. This happens because they have no reason to test against an "unofficial" implementation such as Vaultwarden. That's when you go to the Vaultwarden GitHub "Issues" tab and look for a new git commit with whatever new Rust code makes it work with the latest iOS client again. It doesn't happen very frequently, but it happened often enough that it's only usable with a techie (like me) to babysit it. I can't inflict that type of randomly broken setup on the rest of my family. Vaultwarden is not set-and-forget. (I'm also not complaining about Bitwarden or Vaultwarden; those projects are fine. I'm just being realistic about how the self-hosted setup can't work without my IT support.)

[1] Offline access in the Bitwarden client only works for 30 days: https://bitwarden.com/blog/configuring-bitwarden-clients-for...

npodbielski•29m ago
The Bitwarden client works fine if the server goes down; you just can't edit data. I have been self-hosting Bitwarden for several years and have no complaints.
npodbielski•31m ago
You can look at https://kopia.io/. It looks quite OK, with one downside: it manages only one backup target, so you can't, e.g., back up to a local HDD and to the cloud. You need two instances.
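A rough sketch of the two-instance workaround using kopia's --config-file flag; the repository locations are placeholders and cloud credentials are omitted:

    # instance 1: local disk repository
    kopia --config-file ~/.config/kopia/local.config repository create filesystem --path /mnt/backup
    # instance 2: cloud repository (B2 as an example)
    kopia --config-file ~/.config/kopia/cloud.config repository create b2 --bucket my-backups
    # snapshot the same directory into each
    kopia --config-file ~/.config/kopia/local.config snapshot create /srv/data
    kopia --config-file ~/.config/kopia/cloud.config snapshot create /srv/data
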
xnx•1h ago
Defining "self-host" so narrowly as meaning that the software has to run on a server in your home closet ensures that it will always remain niche and insignificant. We should encourage anything that's not SaaS: open source non-subscription phone apps, plain old installable software that runs on Windows, cloud apps that can easily be run (and moved) between different hosts, etc.

Anything that prevents lock-in and gives control to the user is what we want.

al_borland•1h ago
We can't have the word lose all meaning either. A cloud app that uses standard protocols and can be moved is still running on a server you don't own or control, operated by someone who could decide to change policies about data collection and privacy at any time. You can leave, but will you be able to migrate before the data is harvested? How would you ever know for sure?
FinnKuhn•20m ago
The general definition (although it can be pretty loose) is that you need to control the computer/server your software is running on. Whether that is a VPS or a server in your basement really doesn't matter all that much in the end when deciding whether something is self-hosted or not.
srcreigh•1h ago
It does include Windows installable software. People often start out by running stuff that way (maybe in Docker).
kijin•1h ago
At the very least, it should include colocating your server with somebody else who has better power and connectivity. As long as you have root, it's your server.
shadowgovt•1h ago
This era has been a long time coming.

We've known for decades now that the philosophy underpinning Free Software ("it's my computer and I should be able to use it as I wish") breaks down when it's no longer my computer.

Attempts were made to come up with a similar philosophy for cloud infrastructure, but those attempts are largely struggling; they run into logical contradictions or deep complexity that the Four Essential Freedoms don't have. Issues like:

1. Since we don't own the machines, we don't actually know what is needed to maintain system health; we are just guessing. Every newly collected piece of information about our information is an opportunity for an argument.

2. Even if we can make arguments about owning our data, the arguments about owning metadata on that data, or data on the machines processing our data, are much murkier... Yet that data can often be reversed back to make guesses about our data because manipulation of our data creates that metadata.

3. With no physical control of the machines processing the data, we are de facto in a trust relationship with (usually) strangers, a relationship that generally doesn't exist when we own the hardware. Who cares what the contract says when every engineer at the hosting company has either physical access to the machine or a social relationship with someone who does, a relationship we lack? When your entire email account is out in the open, or your PII has been compromised because of bad security practices or an employee deciding to do whatever they want on their last day, are you really confident that contract will make you whole?

If there can be, practically, no similar philosophical grounding to the Four Freedoms, the conclusion is that cloud hosting is incompatible with those goals and we have to re-own the hardware to maintain the freedoms, if the freedoms matter.

sksksk•1h ago
With self-hosting email, if the digital sovereignty aspect is more important to you than the privacy aspect...

What I do is use Gmail with a custom domain, self-host an email server, and use mbsync [1] to continuously download my emails from Gmail. Then I connect to that email server for reading my emails, but still use Gmail for sending.

It also means that Google can't lock me out of my emails, I still retain all my emails, and if I want to move providers, I simply change the DNS records of my domain. But I don't have any issues around mail delivery.
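A minimal ~/.mbsyncrc for that kind of one-way pull might look like this; the account names and paths are made up (newer isync releases spell SSLType as TLSType):

    IMAPAccount gmail
    Host imap.gmail.com
    User you@yourdomain.com
    # Gmail needs an app password; pass(1) is just one way to supply it
    PassCmd "pass show gmail-app-password"
    SSLType IMAPS

    IMAPStore gmail-remote
    Account gmail

    MaildirStore local
    Path ~/Mail/
    Inbox ~/Mail/INBOX

    Channel gmail
    Far :gmail-remote:
    Near :local:
    Patterns *
    Create Near
    SyncState *

Run mbsync gmail from cron and the local Maildir stays current even if Google ever pulls the plug.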

jraph•1h ago
Why not also do the sending? Deliverability concerns?
sksksk•1h ago
Yep, exactly; it removes a whole class of potential problems.

Doing the sending myself wouldn't improve my digital sovereignty, which is my primary motivation.

singron•1h ago
Not OP, but yes. For personal use, you don't have enough traffic to establish a reputation, so you get blocked constantly regardless of DKIM/DMARC/SPF/rDNS. Receiving mail is reliable, though, so you can do that yourself and outsource just the sending to something like Amazon SES or an SMTP relay.
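The DNS side of that split (receive yourself, relay outbound) could look roughly like this; the names, the relay include, and the addresses are illustrative:

    ; inbound mail comes straight to your box...
    example.com.          MX    10 mail.example.com.
    mail.example.com.     A     203.0.113.10
    ; ...while SPF/DMARC authorize the outbound relay
    example.com.          TXT   "v=spf1 include:amazonses.com ~all"
    _dmarc.example.com.   TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
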
npodbielski•1h ago
I did all of those DNS shenanigans with SPF, DMARC and the other ones about 6 years ago.

I think I had problems with my emails maybe twice, once with the Exchange server of some small recruitment company. I think it was misconfigured.

Ah, there was also some problem with Gmail at the beginning: they banned my domain because I was sending test emails to my own account there. I had to register my domain on their BS Postmaster Tools website and configure my DNS with some key.

Overall, I had many more problems with automatic backups, services going down for no reason, dynamic IPs, etc. The email server just works.

carlosjobim•48m ago
The custom domain is all you need for complete e-mail sovereignty. As long as you have it, you can select between hundreds (thousands?) of providers, and take your business elsewhere at any time.
Havoc•1h ago
Also just life stability. If I figure out a FOSS thing once, I can functionally use it for life as personal infra.

With SaaS, they could change the price tomorrow, or change the terms, or do any number of things that could be an issue. It's a severely asymmetrical dynamic.

Don't think I'll ever do email though.

xoa•1h ago
This list of "why self host" focuses almost entirely on privacy/sovereignty which, as the author admits, has come to be a pretty standard reason given. But I think there are plenty of purely practical ones as well, depending on your specific situation. There's a spectrum here from self-hosting to leaving it all to 3rd parties, and you can mix and match to get the most value out of it. But I'd add:

- Use case/cloud business model mismatch: ultimately much of the value of cloud services comes from flexibility and amortization across massive audiences. Sometimes that's exactly what one might be after. But sometimes that can leave a big enough mismatch between how it gets charged for vs what you want to do that you will flat out save money, a lot of money, very fast with your own metal you can adjust to yourself.

- Speed: Somewhat related to the above, but on the performance side instead of cost. 10G at this point is nothing on a LAN, and it's been regularly easy to pick up used 100G Chelsio NICs for <$200 (I've got a bunch of them). Switches have been slowly coming down in price as well; Mikrotik's basic 4-port 100G switch is $200/port brand new. If you're OK with 25 or 40 you can do even less. Any of those is much, much faster (and of course lower latency) than the WAN links a lot of us have access to, and even at a lot of common data centers that'd be quite the cost add. And NVMe arrays have made it trivial to saturate that, even before getting into the computing side. Certainly not everyone has that kind of data and wants/needs to be able to access it fast offline, but it's not useless.

- Customization: catch all for beyond all-of-the-above, but just you really can tune directly to what you're interested in terms of cpu/memory/gpu/storage/whatever mix. You can find all sorts of interesting used stuff for cheap and toss it in if you want to play with it. Make it all fit you.

- Professional development: also not common, but on HN in particular probably a number of folks would derive some real benefit from kicking the tires on the various lower level moving parts that go into the infrastructure they work with at a higher level normally. Once in awhile you might even find it leads to entire new career paths, but I think even if one typically works with abstractions having a much better sense of what's behind them is occasionally quite valuable.

Not to diminish the value of privacy/sovereignty either, but there are hard dollar/euro/yen considerations as well. I also think self-hosting tends to build on itself, in that there can be a higher initial investment in infrastructure, but previously hard/expensive adaptations get easier and easier. Spinning up a whole isolated vm/jail/vlan/dynamic allocation becomes trivial.

Of course, it is an upfront investment, you are making some bets on tech, and it's also just plain more physical stuff, which takes up physical space of yours. I think a fair number of people might get value out of the super shallow end of the pool (starting with having your own domain), and there's nothing wrong with deliberately leaning on remote infra in general for most. But it's worth reevaluating from time to time, because the amount of high-value and/or open source stuff available now is just wonderful. And if we have a big crash, there might be a lot of great deals to pick up!

PhilipRoman•1h ago
Performance is definitely a big factor. I used to think CI was inherently slow and nothing could be done about it until I started to self-host local runners.
npodbielski•10m ago
Seems like you like networking.
podgietaru•1h ago
I worked on getting [Omnivore](https://github.com/omnivore-app/omnivore) from cloud to self hosting.

I never appreciated the value of Self-hosting until then. I was so sick of finding new services to do essentially the same thing. I just wanted some stability.

Now I can continue using the thing I was already using, and have developed my own custom RSS reader on top of Omnivore.

I don't need to care about things breaking my flow. I can update the parsing logic if websites break, or I want to bypass some paywalls. It really changed my view on Self-hosting.

throw-10-8•1h ago
He mentions Nextcloud. Has anyone been self-hosting this for a small org with 100-200 users?
pauleee•1h ago
Kinda. I use managed Nextcloud by Hetzner (Storage Share) for ~20 people with their smallest instance (1 TB, 4.50 EUR/month) and connected it with a Collabora instance hosted on the smallest Hetzner VPS (this could use more cores).

If you want to self-host completely, look at https://github.com/nextcloud/all-in-one . I have this running on my NAS for other stuff, and it just works out of the box.

Edit: and it scales. Much bigger orgs use it with 10k users or more. And it doesn't need a 100 EUR/month setup, from what I've experienced.

throw-10-8•45m ago
Yeah, I tested it out with the Hetzner app on their smallest dedicated server and it ran fine.

Is Storage Share the managed service?

dizhn•53m ago
Yes it's fine. Do you have any particular questions?
throw-10-8•47m ago
What does your usage look like? My use would be about 30 heavy daily users, another hundred sporadic. Mostly doc editing and video calls.

What kind of hosting infra are you using? Hetzner seems popular.

Any major recent security concerns? It seems to have a large attack surface.

dizhn•24m ago
We use it primarily as a file sharing thing. We do not use it for video calls (and I wouldn't recommend it for that purpose). Last time I tried, integrating with an office suite server was also a pain in the ass. I do use its calendar and DAV address book, because they work fairly well.

The only security thing we've done is disable a few paths in the web configuration and only allow SSO logins (Authentik). You can also put it behind Authentik's embedded proxy for more security. I didn't, because of the use case with generic calendar/address book software.

Hetzner is good. Great even, in terms of what you get for the money. They do provide mostly professional service. You will not get one iota of extra service other than what they promise. VERY German in that regard and very unapologetic about it. And don't talk about them in public with your real identity attached. They ban people for arbitrary reasons and have their uber fans (children with a 4 dollar vps) convince other fellow users that if you got banned you must have been a Russian hacker trying to infiltrate the Hague.

aborsy•45m ago
Look at AIO.

There are institutions with several thousand employees that use Nextcloud, including mine.

I run an installation for our family, and it’s been problem free.

throw-10-8•44m ago
Great. Do you use their video conferencing (Talk) at that scale?
thayne•1h ago
I wish articles like this would include recommendations on how to choose hardware to run your self-hosted services on.
schmookeeg•1h ago
"More RAM than you think you'll need" -- particularly if you virtualize. :)
npodbielski•43m ago
Why? I was running like 15 containers on hardware with 32 GB of RAM. You could probably safely use disk swap as additional memory for less frequently used applications, though I did not check.
troupo•55m ago
And things like "this is a rack you can use, it will not cost you a kidney, and it will not blow your eardrums out with noise"
therealfiona•51m ago
Whatever you have lying around is a great starting point.

It all comes down to what you want to spend vs what you want to host and how you want to host it.

You could build a Raspberry Pi Docker Swarm cluster and get very far. Heck, a single Pi 5 with 4 GB of memory will get you on your way. Or you could use an old computer and get just as far. Or you could use a full-blown rack mount server with real IPMI. Or you could use a VPS and accomplish the same thing in the cloud.

thayne•42m ago
And what if I don't have anything lying around?
chasd00•25m ago
A mid-range gaming build without the GPU is capable of running a full SaaS stack for a small company, let alone an individual.

https://pcmasterrace.org/builds

FinnKuhn•23m ago
Depends on what you want to do with it. To start, any old free PC that you can find online is going to work for experimenting.
troupo•31m ago
> You could build a raspberry pi docker swarm cluster and get very far. Heck, a single Pi 5 with 4gb of memory will get you on your way.

No, you couldn't, and no, you wouldn't.

To build a swarm you need a lot of fiddling and tooling. Where are you keeping them? How are they all connected? What's the clustering software? How is this any better than an old PC with a few SSDs?

Raspberry Pi with any amount of RAM is an exercise in frustration: it's abysmally slow for any kind of work or experimentation.

Really, the only useful advice is to use an old PC or use a VPS or a dedicated server somewhere.

npodbielski•47m ago
I would not use a NUC like this guy. I had one; it was slow and had limited capacity.

Then I had my old PC, and it was very good, but I wanted more NVMe disks and the motherboard supported only 7.

Now I am migrating to a Threadripper, which is a bit overkill, but I will have the ability to run one or two GPUs along with, for example, 23 NVMe disks.

NoiseBert69•27m ago
I'm your opposite :-)

Intel N100 with 32 GB RAM and a single big SSD here (but with daily backups).

It eats roughly 10 watts and does the job.

npodbielski•5m ago
If this does the job for you, sure. For me they were very pricey at the time compared to the old Intel Core i3 PC that I already had lying around. And power cost doesn't really matter in my case.
import•6m ago
I have two NUCs (Ryzen 7 and Intel i5); they're rock solid.
codegeek•1h ago
"start self-hosting more of your personal services."

I would make the case that you should also self-host more as a small software/SaaS business; it is not quite the boogeyman that a lot of cloud vendors want you to think it is.

Here is why. Most software projects/businesses don't require the scale and complexity for which you truly need the cloud vendors and their expertise. For example, you don't need Vercel (or even Netlify) to deploy Next.js or whatever static website. You can set up Nginx or Caddy (my favorite) on a simple VPS with Ubuntu etc. and boom. For the majority of projects, that will do.

90%+ of projects can be self-hosted with the following:

- A well-hardened VPS with good security controls. There are plenty of good articles online on how to do the most important things (disable root login, make SSH key-based only, etc.).

- Set up a reverse proxy like Caddy (my favorite) or Nginx; see the sketch at the end of this comment. Boom: static files and static websites can now be served. No need for a CDN etc. unless you are talking about millions of requests per day.

- Set up your backend/API with something simple like supervisor or even native systemd.

- The same reverse proxy can also forward requests to the backend and other services as needed. Not that hard.

- Self-host a MySQL/Postgres database and set up the right security controls.

- Most importantly: set up backups for everything using a script/cron and test them periodically.

- If you really want to feel safe against DoS/DDoS etc., add Cloudflare in front of everything.

So you end up with:

Cloudflare/DNS => Reverse Proxy (Caddy/Nginx) => Your App

- You want to deploy? A git pull should do it for most projects (PHP etc.). If you have to rebuild a binary, that's another step, but doable.

You don't need Docker or containers. They can help, but they're not needed for small or even mid-sized projects.

Yes, you can claim that a lot of these things are hard, and I would say they are not that hard. The majority of projects don't need web scale or whatever.
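A minimal sketch of the reverse proxy and systemd pieces described above; the domain, paths, and service name are placeholders, not a prescription:

    # /etc/caddy/Caddyfile: serve static files, proxy /api to the app
    # (Caddy provisions and renews TLS certificates automatically)
    example.com {
        root * /var/www/site
        file_server
        reverse_proxy /api/* 127.0.0.1:8080
    }

    # /etc/systemd/system/myapp.service: keep the backend running across reboots
    [Unit]
    Description=Example app
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=always
    User=www-data

    [Install]
    WantedBy=multi-user.target

Enable the service with systemctl enable --now myapp, and reload Caddy with systemctl reload caddy after config changes.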

isodev•1h ago
And there is an extra perk: Unlike cloud services, system skills and knowledge are portable. Once you learn how systemd or ufw or ssh works, you can apply it to any other system.

I'd even go as far as to say that the time/cost required to, say, learn the quirks of Docker, containers, and layered builds is higher than what is needed to learn how to administer a website on a Debian server.

neoromantique•1h ago
>I’d even go as far as to say that the time/cost required to say learn the quirks of Docker and containers and layering builds is higher than what is needed to learn how to administer a website on a Debian server.

But that is irrelevant, as Docker by design brings more to the table than a simple Debian server can. One could argue that LXD is sufficient for this, but that is even more hassle than Docker.

codegeek•52m ago
Well said. For me, "how to administer a website on a Debian server" is a must if you work in web dev, because hosting a web app should not require you to depend on anyone else.
bluGill•58m ago
A small business should only self-host if they are a hosting company. Everyone else should pay their local small self-hosting business to host for them.

This is not a job for the big guys. You want someone local who will take care of you. They also come when a computer fails, ensuring updates are applied; by "come" I mean physically sending a human to you. This will cost some money, but you should be running your business, not trying to learn computers.

codegeek•52m ago
I meant a small software/SaaS business. I would agree with you about a non-software business. Edited my comment.
BinaryIgor•50m ago
Exactly; though I would rather say that you don't need a CDN unless you have tens of thousands of requests per second and your user base is global. A single powerful machine can easily handle thousands, even tens of thousands, of requests per second.
codegeek•49m ago
Agreed. I was being generous to the CDN lovers :). People don't know how powerful static file servers like Nginx and Caddy are. You don't need no CDN.
dlisboa•47m ago
The issue is network saturation. Most VPSs have limited bandwidth (1 Gbps), even if their CPUs could serve tens of thousands of req/s.
BinaryIgor•41m ago
You can always host your stuff on a few machines and then create a few DNS A records to load balance at the DNS level :)
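For illustration, DNS-level load balancing is just several A records for the same name; the IPs below are from the documentation range:

    ; resolvers rotate the order of these answers (round-robin)
    app.example.com.  300  IN  A  203.0.113.10
    app.example.com.  300  IN  A  203.0.113.11
    app.example.com.  300  IN  A  203.0.113.12
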
Sohcahtoa82•15m ago
Even 1 Gbps is plenty to handle 1,000 connections unless you're serving up video.

That's 1 Mbps per user. If your web page can't render (ignoring image loading) within a couple of seconds even on a connection that slow, you're doing something wrong. Maybe stop using 20 different trackers and shoving several megabytes of JavaScript at the user.

kijin•50m ago
> No need for CDN etc unless you are talking about millions of requests per day.

Both Caddy and Nginx can handle hundreds of millions of static requests per day on any off-the-shelf computer without breaking a sweat. You will run into network capacity issues long before you are bottlenecked by the web server software.

czhu12•37m ago
This is why I built https://canine.sh -- to make installing all that stuff a single step. I was the cofounder of a small SaaS that was blowing >$500k/year on its cloud stack.

Within the first few weeks, you'll realize you also need Sentry; otherwise, errors in production just become digging through logs. That's a +$40/mo cloud service.

Then you'll want something like Datadog, because someone is reporting somewhere that a page is taking 10 seconds to load, but you can't replicate it. +$300/mo cloud service.

Then, if you ever want to aggregate data into a dashboard to present to customers: Looker / Tableau / Omni, +$20k/year.

Data warehouse + replication? +$150k/year.

This goes on and on and on. The holy grail is to be able to run ALL of these external services on your own infrastructure, on a common platform, with some level of maintainability:

Cloud Sentry -> self-hosted Sentry
Datadog -> self-hosted Prometheus/Grafana
Looker -> self-hosted Metabase
Snowflake -> self-hosted ClickHouse
ETL -> self-hosted Airbyte

Most companies realize this eventually, and that's why they eventually move to Kubernetes. I think it's also why indie hackers often can't quite understand why the "complexity" of Kubernetes is necessary, and why just having everything run on a single VPS isn't enough for everything.

threetonesun•5m ago
This assumes you're building a SaaS with customers, though. When I started my career, it was common for companies to build their own apps for themselves, not for all companies to be split between SaaS builders and SaaS users.
mikepurvis•21m ago
The main thing that gives me anxiety about this is the security surface area associated with "managing" a whole OS: kernel, userland, all of it. Like, did I get the firewall configured correctly, am I staying on top of the latest CVEs, etc.

For that reason alone I'd be tempted to do a GHA workflow -> build a container image and push it to a private registry -> a trivial k8s config that deploys that container with the proper ports exposed.

Run that on someone else's managed k8s setup (or Talos if I'm self-hosting) and it's basically exactly as easy as having done it on my own VM, but this way I'm only responsible for my application and its interface.
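The "trivial k8s config" can be as small as a single Deployment; the image and names here are hypothetical:

    # deployment.yaml: one replica of the app container, port 8080 exposed
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:latest
            ports:
            - containerPort: 8080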

rwendt1337•1h ago
> Radicale (Python, basic web ui, only single user, does not work with apple devices from my experience)

It does work with Apple devices, in my experience.

turtlebits•1h ago
Self hosting is great and I'm thankful for all the many ways to run apps on your own infra.

The problem is backups and upgrades. I self-host a lot of services, but none I would depend on for critical data or for others to rely on. If I don't have an easy path to restore/upgrade an app, I'm not going to depend on it.

For most of the apps out there, backup/restore steps are minimal or non-existent (compared to the one-liner to get up and running).

FWIW, Tailscale and Pangolin are godsends to easily and safely self-host from your home.

abdullahkhalids•1h ago
20 years ago, grandpa could go to limewire.com, download setup.exe, and click next->next->next to install a fully functional file hosting server+client. It was so easy that a third of the world's computers had LimeWire installed in 2007 [1]. ONE FUCKING THIRD!

Today, to install even the simplest self-hosted software, one effectively has to be a professional software engineer: use SSH, use Docker, use Tailscale, understand TLS and generate certificates, perform maintenance updates, check backups, and a million other things that are automatable.

No idea why self-hosted software isn't `apt-get install` and forget, just like LimeWire. But that's the reason no one self-hosts.

[1] https://en.wikipedia.org/wiki/LimeWire

neoromantique•1h ago
>No idea why self-hosted software isn't `apt-get install` and forget. Just like Limewire. But that's the reason no one self-hosts.

Security.

As an avid self-hoster with a rack next to my desk, I shudder as I read your comment, unfortunately.

arich•56m ago
The core point is valid. As someone who self-hosts, it's become so complicated to get the most basic functionality set up that someone with little to no knowledge would really struggle, whereas years ago it was much simpler. Functionally we can now do much more, but practically, we've regressed.
dlisboa•50m ago
Putting something on the Internet by yourself has always been outside the reach of a non-tech person. Years ago, regular people weren't deploying globally available complex software from their desktops either.
jimmaswell•45m ago
What's so complicated? I'm currently on DigitalOcean, but I've self-hosted before. My site is largely a basic LAMP setup with Let's Encrypt and a cron job to install security updates. Self-hosting that on one of my machines would only be a matter of buying a static IP and port forwarding.
neoromantique•43m ago
The point, to an extent, is to have friction.

If you don't care enough to figure it out, then you don't care enough to make it secure, and that leads to a very, very bad time in a modern, largely internet-centric world.

abdullahkhalids•39m ago
It's in fact the opposite. If the user has to manually write/fix endless configuration files, they are likely to make a mistake and have gaps in their security. And they will not know, because their settings are distinct from everyone else's.

If they `apt-get install` on a standard Debian computer, and the application's defaults are already configured for high security, and those exact settings have been tested by everyone else running the same software, then you have a much higher chance of being secure. And if a gap is found, an update is pushed by the authors and downloaded by everyone in their automatic nightly update.

goodpoint•15m ago
100% wrong
AlfeG•1h ago
Not quite click-click-click, but still awesome: copyparty.
Pooge•1h ago
> No idea why self-hosted software isn't `apt-get install` and forget.

Some of it is. But as soon as you want your services to be accessible from the Internet, you need a domain name and HTTPS. To run LimeWire or a BitTorrent client, you don't need a domain name yourself, because you use a central server (in the case of BitTorrent, a tracker) to help you discover peers.

abdullahkhalids•48m ago
All the popular domain name services and certificate issuers have APIs. All grandpa has to do is go online and buy a domain, which is a very reasonable step that grandpa can do; grandpa, after all, buys stuff online. But after that, the self-hosted app should be able to leverage the APIs to configure all the settings.
MrZander•1h ago
I don't think many people would consider LimeWire to be "self-hosting". That is just installing a program.
dizhn•55m ago
Self-hosting involves 3 steps in my life:

1) Find the docker compose file. 2) Change the ports line to bind to a specific address, i.e. 10.0.10.1:9000 instead of the default 0.0.0.0:9000 (see the sketch below). 3) Connect via WireGuard.

(This answers the "security" point a sister comment brought up, too.)
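A sketch of step 2 in a docker-compose.yml; the image and port are placeholders:

    services:
      app:
        image: ghcr.io/example/app:latest
        ports:
          # bind to the WireGuard-reachable address only,
          # instead of the default 0.0.0.0 (all interfaces)
          - "10.0.10.1:9000:9000"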

dvdgsng•43m ago
Not to mention the 100 steps you have to do to get there, of course...
goodpoint•16m ago
and goodbye security...
skhameneh•55m ago
XAMPP seems to still be alive and maintained.

https://www.apachefriends.org/

I haven't used it in over a decade, but I'm glad to see it's still kicking.

lifestyleguru•52m ago
Because of American copyright predators and jurisdictions friendly to their lobbying, e.g. Germany. If you're planning to get involved in this kind of software, better think beforehand about practices to ensure your anonymity.
gregsadetsky•35m ago
Fully agreed that any rough edges/onboarding can be solved (with a lot of work, care, etc.).

I just have one main question: what would you like to self-host? Limewire was about file sharing, so the "value proposition" was clearly-ish defined. The "what does Limewire do" was clear.

Are you interested in hosting your own web site? Or email? Or google cal/drive/photos-equivalent? Some of it, all of it?

I'm genuinely curious, and would also love to know: is this a case where 80% of people want X (self-hosted file storage? web serving?) and then there's a very long tail of other services? Does everyone want a different thing? Or are needs power-law distributed? Cheers

floundy•35m ago
>Today, to install even the simplest self-hosted software, one has to be effectively a professional software engineer.

I'm a regular engineer, non-software; my coding knowledge is very basic, and I could never be employed even as a junior dev unless I wanted to spend evenings grinding and learning.

Still, I was able to set up a NAS and a personal server using Docker. I think a basic, broad intro to programming class like Harvard's CS50 is all that would be required to learn enough to figure out self-hosting.

thire•1h ago
I have been so happy moving out of Google Photos and storing everything on my NAS + cloud backup. I don't have to worry about Google re-encoding my videos and not letting me get my originals back.
tamimio•59m ago
Self-hosting is great, but I am more interested in decentralized technology, whether as services or even radio. I think in the near future the world will experience major disruptions (technical, financial, or even political) in which centralized solutions are rendered useless, and the average person would rather have a local connection instead (local in both topology and physical medium, e.g. your own WiFi station serving the neighborhood). Of course self-hosting will be part of it, but there should be protocols that support it in either software or hardware. It would be great, for example, if you could host your xyz chat server instance and, within the client side (phones, for example), switch to local mode in the app and connect to the local server. I know some applications have already implemented this, but it's not yet adopted and still too niche for the average person, let alone for services besides chat. Some have already caught the potential and started building ideas around it; bitchat is an example, but relying on Bluetooth won't really do it in my opinion. Instead, users would have their own 5G BTS managed and operated locally, with an option to connect to nearby 5G networks, or similar tech like WiMAX.
npodbielski•16m ago
You think civilisation will go down but you will still be able to chat with people via smartphone?
fridder•57m ago
I do wonder if there is a market for a preinstalled self-hosting computer, or a setup where the service would be automated backups (e2e encrypted, of course) and perhaps high availability.
jopsen•46m ago
Security updates.

And fixing things when they eventually break.

Honestly, there is a reason I still use a dreamhost shared plan. It's dirt cheap, been running forever, and I've never had to do the boring stuff.

And if they break my app, I can ask them to fix it.

If you deploy your app on a PaaS you still have to update everything inside the container.

Old-school PHP hosting on a shared server does have some upsides, namely affordable support. (Sure, if I'm an extreme edge case, support will not do much for me.)

The same kind of thing for "self-hosting" would be cool.

EvanAnderson•45m ago
Synology and likely other NAS vendors are basically doing this. A buddy of mine isn't any kind of Linux sysadmin but he's running his whole home media management setup as Docker containers on a Synology NAS. I assume they have off-site backup services available, too.
tylerjl•57m ago
Another sort-of-recent development in the space has made self-hosting dramatically more accessible: even though hardware costs were reasonable before, they're now _very_ reasonable and also resource-efficient.

Repurposing an old tower would offer you enough compute to self-host services back in the day, but now an Intel NUC has plenty of resources in a very small footprint, and branching out into the Raspberry Pi-adjacent family of hardware offers even smaller power draw on aarch64 SBCs.

One experiment in my own lab has been to deploy GlusterFS across a fleet of ODroid HC4 devices to operate a distributed storage network. The devices sip small amounts of power and are easy to expand; last week a disk died completely, but I swapped the hardware out without ever losing access to the data, thanks to the separate networked peers staying online while the downed host got new hardware.

Relying on container deployments rather than fat VMs also helps to compress resource requirements when operating lots of self-hosted services. I've got about ~20 Nomad-operated services spread across various small, cheap aarch64 hosts that can go down without me worrying about it, because Nomad will just pick a new one.

jimangel2001•56m ago
My self-hosted stack includes:

1. Immich
2. Jellyfin
3. Ghost
4. Wallabag
5. FreshRSS
6. Vaultwarden
7. Nextcloud
8. Overleaf/ShareLaTeX
9. a Matrix server
10. a PDS for atproto
jopsen•50m ago
How do you upgrade to new versions?

How do you ship security patches?

How do you back up? And do you regularly test your backups?

I feel like upgrade instructions for some software can be extremely light, or require you to upgrade through each version, or worse.

import•8m ago
Not the OP.

I assume everything is running in Docker.

For containers: upgrading to new versions can be done headlessly with Watchtower, or manually.

For the host: you can run package updates regularly or enable unattended upgrades.

Backups can easily be done with cron + rclone. It is not magic.

I personally run everything inside Docker. Fewer things to worry about.
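For instance, a nightly crontab entry along these lines (the remote name and paths are placeholders):

    # crontab -e: sync app data to remote storage at 03:00 every night
    0 3 * * * rclone sync /srv/appdata remote:backups/appdata --log-file /var/log/rclone-backup.log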

R_Spaghetti•49m ago
You write: "I'm fortunate enough to work at a company (enum.co) where digital sovereignty is not just a phrase."

info.addr.tools shows [1]:

    MX   1 smtp.google.com.
    TXT  "mailcoach-verification=a873d3f3-0f4f-4a04-a085-d53f70708e84"
    TXT  "v=spf1 include:_spf.google.com ~all"
    TXT  "google-site-verification=TTrl7IWxuGQBEqbNAz17GKZzS-utrW7SCZbgdo5tkk0"

This is not just a phrase, it is a DNS entry: using the most evil of providers while speaking of digital sovereignty.

[1] https://info.addr.tools/enum.co

gregsadetsky•32m ago
To be fair to enum, the services they sell are around k8s, an S3 equivalent, and devops. If they sold/promised self-hosted/sovereign email services and were then "caught" using Gmail, that might be a different story.

Your point stands: they're not fully, completely independent. And maybe the language in the OP's article could have been different... but the OP also specifically says "Oh no, I said the forbidden phrase: Self-hosted mail server. I was always told to never under any circumstances do that. But it's really not that deep."

They're aware of the issue, everyone is aware of the issue. It's an issue :-) But I get your point too.

chronci739•30m ago
> This is not just a phrase, it is a DNS entry. Using the most evil in phrases of digital sovereignty.

damn, this guy don’t fuck around. respect

BinaryIgor•44m ago
"Many many years ago I was running an Android phone with Google services like Google Maps. One day I was looking for a feature in my Google account and saw that GMaps recorded my location history for years with detailed geocoordinates about every trip and every visit.

I was fascinated but also scared about that since I've never actually enabled it myself. I do like the fact that I could look up my location for every point in time but I want to be in control about that and know that only I have access to that data."

This made me think about whether there are any services (or ideas thereof) that would provide this kind of functionality but store it encrypted, similar to what Proton does for email. In theory, you can use this pattern (data stored encrypted on the server, decrypted only by the client) to rebuild many useful services while retaining full sovereignty over your data.

floundy•39m ago
What you’re describing is essentially how Apple Maps works.

https://www.apple.com/legal/privacy/data/en/apple-maps/

ikawe•11m ago
One tricky thing about maps, as they relate to privacy, is that the earth is large.

Compare that to encrypted email: if I’m sending you an encrypted message, the total data involved is minimal. To a first approximation, it’s just the message contents.

But if I want “Google Maps but private,” I first need access to an entire globe’s worth of data, on the order of terabytes. That’s a lot of storage for your (usually mobile) client, and a lot of bandwidth for whoever delivers it. And that data needs to be refreshed over time.

Typical mapping applications (like Google Maps) solve this with a combination of network services that answer your questions remotely (“Tell me what you’re searching for, and I’ll tell you where it is.”) and by tiling data so your client can request exactly what it needs, and no more, which is also a tell.

The privacy focused options I see are:

1. Pre-download all the map data, like OrganicMaps [1], to perform your calculations on the device. From a privacy perspective, you reveal only a coarse-grained notion of the area you’re interested in. As a "bonus", you get an offline maps app. You need to know a priori what areas you’ll need. For directions, that's usually fine, because I’m usually looking at local places, but sometimes I want to explore a random spot around the globe. Real-time transit and traffic-adaptive routing remain unaddressed.

2. Self-host your own mapping stack, as with Headway [2] (I work on Headway). For the reasons above, it's harder than hosting your own wiki, but I think it's doable. It doesn't currently support storing personal data (where you've been, favorite places, etc.), but adding that in a privacy-conscious way isn't unfathomable.

[1] https://organicmaps.app (though there are others)

[2] https://github.com/headwaymaps/headway (see a hosted demo at https://maps.earth)

igor47•43m ago
Thank you for writing this! I've been playing around with writing something similar, and I kept getting lost going way too far up the concept chain. Like, ultimately, I self-host because... capitalism?

In my ideal world, one tech-savvy person would run services for a group of their friends and family. This makes the concept more mainstream and accessible, while also creating social cohesion for that group. I think we've monetized too many of our relationships and often have no real reason to be in community. This is a big change from most of human history, where you depended on community for survival. Building lower-stakes bonds now (I run your email, you help me fix my car) helps avoid the problem later, when you really need help (old, sick) but have never practiced getting anything you need except by paying for it.

TuxSH•38m ago
For self-use the author has a point, but for public-facing sites not so much, because:

- infra work is thankless (see below)

- outages will last long, because you're unlikely to have failovers (for disk failures, etc.), plus the time it takes to react to them (no point in being paged for hobby work)

- more importantly, malicious LLM scrapers will put your infra under stress, and

- if you host large executables, you'll likely want to do things like banning Microsoft's IP ranges because of irresponsible GH Actions users [1] [2] [3]

In the end, it is just a lot less stressful to pay someone else to deal with infra; for example, hosting static sites on GH Pages or CF Pages, and using CF caching solutions.

[1] https://www.theregister.com/2023/06/28/microsofts_github_gmp...

[2] https://news.ycombinator.com/item?id=36380325

[3] https://github.com/actions/runner-images/issues/7901

rob_c•35m ago
Only worth paying for if you actually need it, though.

And if it's a hobby, no you don't; that should be part of it. The fun is getting knocked out from orbit, figuring out how and why, and how to avoid it. Stand back up again and you've learned from that mistake :p

trenchpilgrim•24m ago
We used to host production websites this way just fine as recently as 10-15 years ago. These days you can do it with as few as two machines and a good router or two. The main risks are power outages due to non-redundant power outside of a colo (solvable with a battery backup) and non-redundant public internet links (potentially solvable with a cellular failover plus a lot of caching at the CDN, depending on the application).

You generally still use a CDN and WAF to filter incoming traffic when you self host (even without abusive scrapers you should probably do this for client latency). You can also serve large files from a cloud storage bucket for external users where it makes sense.

rob_c•37m ago
Cost, experience, and, for the paranoid (rightly or not), control.

The biggest downside is the initial cost in time, effort, and cash compared to typing in a credit card.

Other downsides include the lack of power redundancy and decent networking, which are more common in data centers.

The other side of this is: why buy 8x A100s for that project, only to stick them on eBay to recoup the cost, when you can rent them?

fourseventy•31m ago
I'm currently self-hosting my notes/journal/knowledge base with Trilium, photos with Immich, and files with File Browser; very happy with that setup so far. I just like the feeling of knowing I own my important data and that it won't go away because some third-party company goes out of business or sunsets an app.
prism56•30m ago
I self-host only things that aren't critical; I'm not hosting passwords or photos. I'd rather pay for the redundancy offered by big datacentres. I do, however, choose platforms that are privacy-first, ente.io/Proton for example.

I do self-host FreshRSS, Audiobooks, Readeck, Linkding, YoutubetoRSS... useful services that individually hosted platforms want £5 or so per month to use. The redundancy is significantly less important to me with these services, compared to losing £30+ extra a month.

trenchpilgrim•29m ago
I self host photos, but my backups are cloud hosted. A cold rarely accessed backup is way cheaper and more fungible across providers than an entire photos app.
prism56•25m ago
Yeah, that's fair enough; a valid approach. I went away from this due to getting family onto my ente plan. I didn't want to be responsible for, or trusted with, their images. This way the images are pretty well protected in ente's infrastructure, and we can share within the same platform.
trenchpilgrim•20m ago
True. I only host my own photos; I don't want to possess anyone else's selfies or family photos, for sure.
jdoe1337halo•29m ago
Self-hosting is awesome. I have been doing it for about a year, since I quit my full-time SWE job and pursued SaaS. I am using Coolify on a $20/month Hetzner server to host a wide variety of applications: Postgres, Minio (the version before the community neuter) for S3, a Nuxt application, Next.js applications, Umami analytics, Open WebUI, and static sites. It was definitely a learning process, but now that I have everything set up, it really is just plug and play to get a new site/service up and running. I am not even using 1/4 of my server resources (because I don't have many users xd). It is great.

https://coolify.io/docs/

Steltek•23m ago
What I think is missed in self-hosting discussions is WHAT you're self-hosting. In priority order, you should self-host:

1. Your data. It is the most irreplaceable digital asset. No one should see their photos, their email, their whatever, go poof because of external forces. Ensure everything on your devices is backed up to a NAS. Set a reminder for quarterly offline backups. Backups are an achievable goal for everyone, not just the tech elite.

2. Your identity. By which I mean a domain name. Keep the domain pseudonymous. Use a trustworthy, respectable registrar. Maybe give some thought to geopolitics these days. Pay for email hosting and point your domain at it.

3. Lastly, your apps. This is much harder work and only reasonably achievable by tech-savvy people.

jcon321•14m ago
We self-host everything at our company, as we're a data center: all the tools required for a modern development stack, plus modern environments.

It's great for learning and control; it's not so great for anxiety.

FinnKuhn•14m ago
While this is only one data point, looking at the stats for r/selfhosted, self-hosting seems to have been exploding in popularity since last year. The subreddit now averages 2.2 million daily unique visitors, with 175 million total views over the last 12 months, up 132 million visitors in comparison to the 12 months before.
renegat0x0•10m ago
I self-host my search engine / RSS reader. I track every page I visit from nearly all my devices.

Since my basic search engine is self-hosted, nobody actually sees what I visit and what I watch.

This is my conclusion from seeing that the social media algorithms are totally lost as to what I would like to watch next.

I am also in control of the UI and its changes, which is both a good and a bad thing.

Ingon•6m ago
I also started self-hosting more and more. But instead of making services available on the internet/intranet (e.g. via a VPS reverse proxy or Tailscale), I'm binding them to localhost and using connet [1] (cloud or self-hosted [2]) to cast them locally onto my PC/phone when I need them. These include my NAS and the Syncthing instance running on my NAS, and I'm looking to add more.

[1] https://connet.dev

[2] https://github.com/connet-dev/connet

alexchantavy•5m ago
In recent years I've noticed RSS has gotten way less popular, even in hacker circles (or maybe that's just my perception).

I remember browsers used to have a native RSS button in the main interface, and then you could curate your feed. Seems better than any news feed gamified to steal my attention. Sigh.

old-man-yells-at-cloud.gif

sam_lowry_•5m ago
Here's the step-by-step guide to self-hosting git repositories.

Change directory to the local git repository that you want to share with friends and colleagues and do a bare clone:

    git clone --bare . /tmp/repo.git

You just created a copy of the .git folder without all the checked-out files.

Upload /tmp/repo.git to your Linux server over SSH. Don't have one? Just order a tiny cloud server from Hetzner. You can place your git repository anywhere, but the best way is to put it in a separate folder, e.g. /var/git. The command would look like:

    scp -r /tmp/repo.git me@server:/var/git/

To share the repository with others, create a group, e.g.:

    groupadd --users me git

You will be able to add more users to the group with groupmod.

Your git repository is now writable only by your own user. To make it writable by the git group, change the group on all files in the repository and enable the group write bit on them:

    chgrp -R git /var/git/repo.git
    chmod -R g+w /var/git/repo.git

This fixes shared access for existing files. For new files, we have to make sure the group write bit is always on, by changing UMASK from 022 to 002 in /etc/login.defs.

There is one more trick. From now on, all new files and folders in /var/git will be created with the user's primary group. We could change users to have git as their primary group.

But we can also force all new files and folders to be created with the parent folder's group instead of the user's primary group. For that, set the setgid bit on all folders in /var/git:

    find /var/git -type d -exec chmod g+s \{\} +

You are done.
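With that in place, anyone in the git group can use the repository over plain SSH; roughly like this (the branch name is just an example):

    # clone over SSH; the server needs nothing but sshd and git
    git clone me@server:/var/git/repo.git
    cd repo
    # ...hack, commit, then push back
    git push origin main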

Want to host your git repository online? Install Caddy and point it to /var/git with something like:

    example.com {
        root * /var/git
        file_server
    }

Your git repository will be instantly accessible via https://example.com/repo.git. (For plain-HTTP clones to work, run git update-server-info in the repository, or enable its sample post-update hook, which does exactly that.)