something.example.com {
    root * /var/www/something.example.com
    file_server
}
That's the whole thing. Here's the setup of a WordPress site with all the above, plus PHP, plus compression:

php.example.com {
    root * /var/www/wordpress
    encode
    php_fastcgi unix//run/php/php-version-fpm.sock
    file_server
}
You can tune and tweak all the million other options too, of course, but you don't have to for most common use cases. It Just Works more than any similarly complex server I've ever been responsible for.

I find their docs really hard to deal with, though; trying to figure out something that would be super simple on Nginx can be really difficult on Caddy if it's outside the scope of 'normal stuff'.
The other thing I really don't like is that if you install via a package manager to get automated updates, you don't get any of the plugins. If you want plugins, you have to build Caddy yourself or use their build service, and then you don't get automatic updates.
Do you know if Caddy can self-update, or if there is some other easy method? Manually rebuilding it to get the Cloudflare plugin is a pain.
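For reference, the build-it-yourself flow mentioned above is usually done with the xcaddy tool; a sketch, assuming Go is installed and using the Cloudflare DNS module's import path:

# produce a ./caddy binary with the plugin compiled in
xcaddy build --with github.com/caddy-dns/cloudflare

Caddy also has `caddy upgrade` and `caddy add-package` subcommands that fetch a rebuilt binary (with your current plugins) from the official build service, which is about as close to self-updating as a plugin build gets; a package-manager install still won't track it, though.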
Caddy really is easier than nginx. For starters, I now have templates that cover the main services and their test services, plus the special service that runs for an educational institution. Logging is better. Certificate handling is perfect (for my case, at least). And it has better metrics.
Now I have to figure out plugins, though, because Caddy doesn't have built-in rate limiting, and some stupid bug in Power BI makes a single user hit certain images 300,000 times per day. That's a bit of a downside.
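For what it's worth, rate limiting exists as a third-party module (mholt/caddy-ratelimit). A rough Caddyfile sketch with that module compiled in; the zone name and numbers here are made up:

{
    # non-standard directives need an explicit position in the middleware order
    order rate_limit before basicauth
}

example.com {
    rate_limit {
        zone images {
            key    {remote_host}
            events 100
            window 1m
        }
    }
}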
This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?
Maybe not hard, but Caddy seems like even less to think about.
(That said, I'm not too thrilled by this implementation. How are renewals and revocations handled, and how can the processes be debugged? I hope the docs get updated soon.)
It's not exactly an Ansible/Kubernetes-ready solution, but if you use those tools you already know a tool that solves your problem anyway.
Maybe if you step off the happy path it gets hairy, but I found the default certbot flow to be easy.
So this change is most welcome.
But if you're someone who needs tight control over the host configuration (managed via Ansible, etc) because you need to comply with security standards, or have the whole setup reproducible for disaster recovery, etc, then solutions like acme.sh or LEGO are far smaller, just as easy to configure, and in general will not surprise you.
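As a point of comparison, a lego issuance is a single self-contained command; a sketch assuming the Cloudflare DNS provider, with the token, email, and domain as placeholders:

# credentials come from the environment; state lands in ./.lego
CLOUDFLARE_DNS_API_TOKEN=<token> \
lego --email you@example.com --dns cloudflare --domains example.com run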
In a typical distro, you would normally expect one or more virtual packages representing a profile (minimal, standard, full, etc.) that depend on a package providing an nginx binary with every reasonable static-only module enabled, plus a number of separately packaged dynamic modules.
DNS-01 is probably the most impactful for users of nginx instances that aren't public facing (i.e., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt it's also one of the cleanest, because it's just updating some records and doesn't need to be directly tethered to what you're hosting.
If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.
I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do so correctly.
You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.
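Concretely, all the CA ever looks at is a TXT record like this (the token value is illustrative), and you can check it yourself with dig:

_acme-challenge.foo.com.  120  IN  TXT  "<key-authorization-digest>"

dig TXT _acme-challenge.foo.com +short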
DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.
But they're the only sane options.
One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance: issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply nested domain sprawl that is almost never used but needs to stay up for some reason, it can be quite handy (sketch below).
(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)
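Caddy, for instance, ships this exact pattern. A minimal sketch of its on-demand issuance config; the ask endpoint is a placeholder you would implement yourself to decide which hostnames deserve certificates:

{
    on_demand_tls {
        # Caddy queries this endpoint before issuing a cert for an unknown hostname
        ask http://localhost:9123/check
    }
}

https:// {
    tls {
        # issue certificates at handshake time instead of at startup
        on_demand
    }
}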
Nevermind, I agree!
It'd be nice if LE could issue intermediate certificates constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).
Plus, it requires setting up an API key, and most of the time you don't need a wildcard anyway.
When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
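Concretely, the whole flow is one command; a sketch where the webroot path is whatever directory your server maps /.well-known/acme-challenge/ to:

# the web server stays up; certbot just drops challenge files into the webroot
sudo certbot certonly --webroot -w /var/www/letsencrypt -d example.com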
Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.
Certbot also has a mode that mangles your apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick, it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you a right to mess with other programs' config files, that's not how Unix works!
It's not as if they couldn't already choose (to buy) such short lifetimes.
Authoritarianism at its finest.
The issue is that supporting dns-01 isn't just supporting dns-01; it's providing a common interface to interact with the different providers that implement dns-01.
The best thing about Caddy is the fact that you can reload config and add sites and routes without ever having to shut down. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, the DNS service has the service-mesh records; you just need a way to tell Caddy to send it to your backend.
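And that reload really is a one-liner against the running process; a sketch, noting that Caddy's admin API listens on localhost:2019 by default:

# re-parse the Caddyfile and apply it with zero downtime
caddy reload --config /etc/caddy/Caddyfile

# or push a full JSON config straight at the admin API
curl localhost:2019/load -H "Content-Type: application/json" -d @config.json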
The feature should be done soon but they need to ensure it works across K8s flavors.
It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.
Lego supports it.
(Angie is the nginx fork led by the original nginx developers who left F5.)
One thing keeping me from switching to Caddy in my places is nginx's rate limiting and geo module.
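For context, this is roughly the nginx config being referred to, which has no stock Caddy equivalent; zone name, size, and rates here are arbitrary:

# http {} context: track clients by IP, allow 10 requests per second each
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # absorb short bursts of 20 before rejecting (503 by default)
        limit_req zone=perip burst=20 nodelay;
    }
}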
> Nginx Introduces Native Support for Acme Protocol
IT: “It’s about fucking time!”
> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.
IT: “FUCK. Alright, domain registrar, mint me a new wildcard please, one of the leading web infrastructure providers still can’t do a basic LE DNS-01 pull in 2025.”
Seriously. PKI in IT is a PITA and I want someone to SOLVE IT without requiring AD CAs or Yet Another Hyperspecific Appliance (YAHA). If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.
While we’re at it, can we also allow DNS-01 certs to be issued for intermediate authorities, allowing internally-signed certificates to be valid via said Intermediary? That’d solve like, 99% of my PKI needs in any org, ever, forever.
By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised. Since only CAs can issue certs, and CAs have to pass at least some basic security scrutiny, clients have assurance that the thing giving it a cert got said cert from a trustworthy authority. If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.
> If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.
I mean, that's a valid ask. It will become more commonplace once some popular corporate offering includes it, and then all the competitors will adopt it so they don't leave money on the table. To get the first one to adopt it, be a whale of a customer and yell loudly that you want it, then wait 18 months.
This is where I get rankled.
In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.
Perhaps Intermediate CA is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the Root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”
The thinking is that this would allow more certificates to be issued internally and easily, but without the extra layer of management involved in a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and help improve general security hygiene, instead of letting everything use HTTPS with self-signed certificate warnings or letting every device talk to the internet for an HTTP-01 challenge.
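The X.509 building block for this already exists: the name constraints extension from the RFC linked upthread. A hedged openssl sketch of the extension block such a constrained internal CA could carry (section and domain names are placeholders):

[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage         = critical, keyCertSign, cRLSign
# clients reject anything this CA signs outside internal.example.com
nameConstraints  = critical, permitted;DNS:internal.example.com

The catch, as noted elsewhere in the thread, is that no public CA will issue such an intermediate to end users.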
If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.
The trust is always in the root itself.
It's not an active directory / LDAP / tree type mechanism where you can say I trust things at this node level and below.
# https://certbot.eff.org/instructions?ws=other&os=ubuntufocal
sudo apt-get -y install certbot
# sudo certbot certonly --standalone
...
# https://certbot.eff.org/docs/using.html#where-are-my-certificates
# sudo chmod -R 0755 /etc/letsencrypt/{live,archive}
So, unfortunately, this support still seems more involved than using certbot, but at least one less manual step is required.

Example from https://github.com/andrewmcwattersandco/bootstrap-express:
location ^~ /.well-known/acme-challenge/ {
    # serve challenge files straight from disk so renewals don't touch the app
    alias <path-to-your-acme-challenge-directory>;
}
Dehydrated has been around for a while and is a great low-overhead option for http-01 renewal automation.

Edit: Downvote me all you want, that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.
Don't consume major version 0 software, it'll bite you one day. Convince your maintainers to release stable cuts if they've been sitting on major version 0 for years. It's just lazy and immature practice abusing semantic versioning. Maintainers can learn and grow. It's normal.
Dehydrated has been major version 0 for 7 years, it's probably past due.
See also React, LÖVE, and others that made 0.n.x jumps to n.x.x. (https://0ver.org)
CalVer: "If both you and someone you don't know use your project seriously, then use a serious version."
SemVer: "If your software is being used in production, it should probably already be 1.0.0."
Maybe not distasteful by anyone in particular, but just distasteful by fate, or as an indicator of misaligned incentives or something?
You could then merge updates back from upstream.
Be the change you want to see!
Edit to comment on the edit:
> Edit: Downvote me all you want
I don't generally downvote, but if I were going to I would not need your permission :)
> that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.
I assume you meant "present" there rather than "consume"?
Anyway, 1.0.0 is just a number. Without relevant promises and a track record and/or contract to back them up, breaking changes are as likely there as with any other number. A "version 0.x.x" of a well-used and scrutinized open source project is more reliable and trustworthy than something that has just had a 1.0.0 sticker slapped on it.
Edit after more parent edits: or go with one of the other many versioning schemes. Maybe ItIsFunToWindUpEntitledDicksVer Which says "stick with 0.x for eternity, go on, you know you want to!".
It has been one of the most reliable parts of my infrastructure and I have to think about it so rarely that I had to go dig the link out of my automation repository.
I'm not using nginx these days because of this.
That said, it's going to take quite some time for this to land in stable repositories for Ubuntu and Debian, and it doesn't (yet?) have DNS challenge support - meaning no wildcards - so I don't think it'll be useful for Dokku in the short term, at least.
I tried dokku (and am still trying!) and it is so hard to get started.
For reference:
- I've used Coolify successfully, where it required me to create a GitHub app to deploy my apps on pushes to master
- I've written GH Actions to build and deploy containers to big cloud
This page is what I get if I want to achieve the same, and it's completely a reference book approach - I feel like I'm reading an encyclopedia. https://dokku.com/docs/deployment/methods/git/#initializing-...
Contrast it with this, which is INSTANTLY useful and helps me deploy apps hot off the page: https://coolify.io/docs/knowledge-base/git/github/integratio...
What I would love to see for Dokku is tutorials for popular OSS apps and set-objective/get-it-done style getting started articles. I'd LOVE an article that takes me from baremetal to a reverse proxy+a few popular apps. Because the value isn't in using Dokku, it's in using Dokku to get to that state.
I'm trying to use dokku for my homeserver.
Ideally I want a painless, quick way to go from "hey here's a repo I like" to "deployed on my machine" with Dokku. And then once that works, peek under the hood.
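For what it's worth, the canonical Dokku loop once the server is set up is short; a sketch with app and host names as placeholders:

# on the server
dokku apps:create myapp

# on your workstation, inside the repo you like
git remote add dokku dokku@your-server:myapp
git push dokku main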
But I’m not going back. Nginx was a real pain to configure with so many puzzles and surprises and foot guns.
I figured either somehow Let's Encrypt doesn't work out, or everybody bakes in ACME within 2-3 years. The idea that you can buy software in 2025 which has TLS encryption but expects you to go sort out the certificate yourself... It's like if cars had to be refuelled periodically by taking them to a weird dedicated building which is not useful for anything else, rather than just charging while you're asleep like a phone, and... yeah, you know what, I get it now. You people are weird.
In this very thread some people complain that certbot uses snap for distribution. Imagine making a feature release and having to wait 1-2 years until your users get it on a broad scale.
I looked at Arch and they're a version behind, which surprised me. Must not be a heavily maintained Arch package.
Anyway, good luck staying competitive, lol. Almost everyone I knew either jumped to something saner or is in the process of migrating away.
If you wanted to use LE though, you could use a more "traditional" cert renewal process somewhere out-of-band, and then provision the resulting keys/certs through whatever coordination thing you contrive (and HUP the nginxes).
Downside is obviously certificate maintenance increases, but ACME automated the vast majority of that work away.
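With certbot, that out-of-band loop can be a single cron- or timer-driven command; a sketch assuming systemd manages nginx:

# renew anything close to expiry, then signal nginx to pick up the new certs
sudo certbot renew --deploy-hook "systemctl reload nginx"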
Also does it make it easier for there to be alternatives to Let's Encrypt?
Probably like many others here, I would very much like to see Cloudflare DNS support.
Caddy sounds interesting too, but I am afraid of switching because what I have works properly. :/
When I tried using Caddy with something serious for the first time, I thought I was missing something. I thought, these docs must be incomplete, there has to be more to it, how does it know to do X based on Y, this is never going to work...
But it DID work. There IS almost nothing to it. You set literally the bare minimum of configuration you could possibly need, and Caddy figures out the rest and uses sane defaults. The docs are VERY good, there is a nice community around it.
If I had any complaint at all, it would be that the plugin system is slightly goofy.