something.example.com {
root * /var/www/something.example.com
file_server
}
That's the whole thing. Here's the setup of a WordPress site with all the above, plus PHP, plus compression:

php.example.com {
root * /var/www/wordpress
encode
php_fastcgi unix//run/php/php-version-fpm.sock
file_server
}
You can tune and tweak all the million other options too, of course, but you don't have to for most common use cases. It Just Works more than any similarly complex server I've ever been responsible for.

FROM caddy:2-builder AS builder
RUN xcaddy build \
--with github.com/caddy-dns/cloudflare \
--with github.com/greenpau/caddy-security
FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY Caddyfile /etc/caddy/Caddyfile
Then just build & run it via docker compose.

{
    acme_dns cloudflare oWN-HR__kxRoDhrixaQbI6M0uwS4bfXub4g4xia2
    debug
}
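A minimal compose file for that Dockerfile might look like this (service name, ports, and volume names are illustrative assumptions; the Caddyfile is already baked into the image by the COPY above):

```yaml
services:
  caddy:
    build: .
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # persist certificates and config state across container restarts
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
```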
*.secret.domain.com {
@sso host sso.secret.domain.com
handle @sso {
reverse_proxy 192.168.200.4:9000
}
@adguard host adguard.secret.domain.com
handle @adguard {
reverse_proxy 192.168.200.4:9000
}
@forge host forge.secret.domain.com
handle @forge {
reverse_proxy http://forgejo:3000
}
# respond to whatever doesn't match
handle {
respond "Wildcard subdomain does not have a web configuration!"
}
handle_errors {
respond "Error {err.status_code} {err.status_text}"
}
}

For example, from a discussion on the Caddy forum https://caddy.community/t/using-caddy-to-harden-wordpress/13...:
(harden-wordpress) {
@harden-wordpress expression `(
!{path}.matches("/wp-includes/ms-files.php$")
&& ({path}.matches("(?i)/wp-includes/.*\\.php")
|| {path}.matches("(?i)/wp-admin/includes/.*\\.php")
|| {path}.matches("(?i)/wp-content/uploads/.*\\.php")
)
)`
respond @harden-wordpress "Access denied" 403
}

I find their docs also really hard to deal with; trying to figure out something that would be super simple on Nginx can be really difficult on Caddy, if it's outside the scope of 'normal stuff'.
The other thing I really don't like is if you install via a package manager to get automated updates, you don't get any of the plugins. If you want plugins you have to build it yourself or use their build service, and you don't get automatic updates.
Do you know if Caddy can self update or if is there some other easy method? Manually doing it to get the cloudflare plugin is a pain.
Some guy retrofitted Caddy to use Docker labels. It looks way too complicated for me, but I don't know how easy or hard it is with Traefik either.
Caddy really is easier than nginx. For starters, I now have templates that cover the main services and their test services, and the special service that runs for an educational institution. Logging is better. Certificate handling is perfect (for my case, at least). And it has better metrics.
Now I have to figure out plugins, though, because Caddy doesn't have rate limiting, and some stupid bug in Power BI makes a single user hit certain images 300,000 times per day. That's a bit of a downside.
This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?
I've got no idea who F5 is. They seem legit, but that page didn't show up in my DDG search. But it's too late now. Water under the bridge.
Admittedly this was on the back of trying to use nginx-unit, which was an overall bad experience, but ¯\_(ツ)_/¯
Maybe not hard, but Caddy seems like even less to think about.
(That said, I'm not too thrilled by this implementation. How are renewals and revocations handled, and how can the processes be debugged? I hope the docs get updated soon.)
It's not exactly an Ansible/Kubernetes-ready solution, but if you use those tools you already know a tool that solves your problem anyway.
Maybe if you step off the happy path it gets hairy, but I found the default certbot flow to be easy.
So this change is most welcome.
https://github.com/certbot/certbot/issues/8345#issuecomment-...
That’s been three years though. The EFF/Certbot team has lost so much goodwill with me over that, I won’t go back.
But if you're someone who needs tight control over the host configuration (managed via Ansible, etc) because you need to comply with security standards, or have the whole setup reproducible for disaster recovery, etc, then solutions like acme.sh or LEGO are far smaller, just as easy to configure, and in general will not surprise you.
In a typical distro, you would normally expect one or more virtual packages representing a profile (minimal, standard, full, etc) that depends on a package providing an nginx binary with every reasonable static-only module enabled, plus a number of separately packaged dynamic modules.
DNS-01 is probably the most impactful for users of nginx that isn't public facing (i.e., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt that it's also one of the cleanest because it's just updating some records and doesn't need to be directly tethered to what you're hosting.
If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.
https://docs.certifytheweb.com/docs/dns/providers/certifydns...
Can even be controlled quite granularly with a Lua-based updatepolicy, if you want e.g. restricting to only the ACME TXT records. [2]
[1] https://doc.powerdns.com/authoritative/dnsupdate.html
[2] https://github.com/PowerDNS/pdns/wiki/Lua-Examples-(Authorit...
I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do so correctly.
You can cname _acme-challenge.foo.com to foo.bar.com.
Now, when you do the DNS challenge, you create a TXT record at foo.bar.com with the challenge response; through the CNAME redirection, the TXT record is picked up as if it were directly at _acme-challenge.foo.com. You can now issue wildcard certs for anything under foo.com.
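As zone data, that CNAME trick might look like this (hypothetical names, matching the comment above):

```
; in the foo.com zone: a permanent redirect, created once by hand
_acme-challenge.foo.com.  IN  CNAME  foo.bar.com.

; in the bar.com zone: the ACME client writes the short-lived token here
foo.bar.com.              IN  TXT    "<challenge-response-token>"
```

Only the bar.com zone ever needs API access from the ACME client; the foo.com zone stays read-only.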
I have it on my backlog to build an automated solution to this later this year to handle this for hundreds of individual domains and then put the resulting certificates in AWS secrets manager.
I'm also going to see if I can make some sort of ACME proxy, so internal clients authenticate to me, and since they can't control DNS, I make the requests on their behalf. We need to get prepared for ACME everywhere. In May 2026 it's 200-day certs, and it only goes down from there.
1. Your main domain is important.example.com with provider A. No DNS API token for security.
2. Your throwaway domain in a dedicated account with DNS API is example.net with provider B and a DNS API token in your ACME client
3. You create _acme-challenge.important.example.com not as a TXT via API but permanently, as a CNAME to _acme-challenge.example.net or _acme-challenge.important.example.com.example.net
4. Your ACME client writes the challenge responses for important.example.com into a TXT at the unimportant _acme-challenge.example.net and has only API access to provider B. If this gets hacked and example.net is lost, you change the CNAMEs and use a new domain whatever.tld as the CNAME target.
acme.sh supports this (see https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo... this also works for wildcards as described there), most ACME clients do.
I also wrote an acme.sh Ansible role supporting this: https://github.com/foundata/ansible-collection-acmesh/tree/m.... Example values:
[...]
# certificate: "foo.example.com" with an additional "bar.example.com" SAN
- domains:
- name: "foo.example.com"
challenge: # parameters depend on type
type: "dns"
dns_provider: "dns_hetzner"
# CNAME _acme-challenge.foo.example.com => _acme-challenge.foo.example.com.example.net
challenge_alias: "foo.example.com.example.net"
- name: "bar.example.com"
challenge:
type: "dns"
dns_provider: "dns_inwx"
# CNAME _acme-challenge.bar.example.com => _acme-challenge.example.net
challenge_alias: "example.net"
[...]

https://community.cloudflare.com/t/restrict-scope-api-tokens...
There's an NS record so *.acme-dns.example.com delegates requests to it, so each of my hosts that needs a cert has a public CNAME like _acme-challenge.www.example.com CNAME asdfasf.acme-dns.example.com, which points back to the acme-dns server.
When setting up a new hostname/certificate, a REST request is sent to acme-dns to register a new username/password/subdomain which is fed to acme.sh. Then every time acme.sh needs to issue/renew the certificate it sends the TXT info to the internal acme-dns server, which in turn makes it available to the world.
An A-record lookup for ns.example.com resolves to the IP of my server.
This server listens on port 53. It is a custom, small Python server using `dnslib`, which also listens on port let's say 8053 for incoming HTTPS connections.
In certbot I have a custom handler, which, when it is passed the challenge for the domain verification, sends the challenge information via HTTPS to ns.example.com:8053/certbot/cache. The small DNS server then stores it and waits for a DNS query on port 53 for that challenge to come in, and if it does, it serves that challenge's TXT record.
elif qtype == 'TXT':
    if qname.lower().startswith('_acme-challenge.'):
        domain = qname[len('_acme-challenge.'):].strip('.').lower()
        if domain in storage['domains']:
            for verification_code in storage['domains'][domain]:
                a.add_answer(*dnslib.RR.fromZone(qname + " 30 IN TXT " + verification_code))
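Stripped of the dnslib plumbing, the lookup that handler performs can be sketched as a plain function (names like `storage` follow the snippet above; this is an illustrative reconstruction, not the author's actual code):

```python
def acme_txt_answers(storage, qname):
    """Return the TXT payloads to serve for a `_acme-challenge.<domain>` query.

    `storage` maps domains to lists of pending ACME validation codes,
    mirroring the storage['domains'] dict in the snippet above. Any query
    that isn't a challenge lookup, or has no pending code, gets no answers.
    """
    prefix = "_acme-challenge."
    name = qname.lower()
    if not name.startswith(prefix):
        return []
    domain = name[len(prefix):].strip(".")
    return list(storage["domains"].get(domain, []))
```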
The certbot hook looks like this:

#!/usr/bin/env python3
import os
import urllib.parse
import requests

r = requests.get('https://ns.example.com:8053/certbot/cache?domain='+urllib.parse.quote(os.environ['CERTBOT_DOMAIN'])+'&validation-code='+urllib.parse.quote(os.environ['CERTBOT_VALIDATION']))
That one nameserver instance and hook can be used for any domain and certificate, so it is not limited to the example.com domain; it can also deal with challenges for, say, a *.testing.other-example.com wildcard certificate. And since it already is a nameserver, it might as well serve the A records for dev1.testing.other-example.com, if you've set the NS record for testing.other-example.com to ns.example.com.
We don't need 100s of custom APIs.
AWS IAM can be a huge pain but it can also solve a lot of problems.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_p...
https://repost.aws/questions/QU-HJgT3V0TzSlizZ7rVT4mQ/how-do...
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/sp...
You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.
DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.
But they're the only sane options.
One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance: issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for, e.g., crazy web-migration projects. If you have an enormous, deeply levelled domain sprawl that is almost never used but you need it up for some reason, it can be quite handy.
(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)
Nevermind, I agree!
It'd be nice if LE could issue intermediate certificates constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).
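For reference, RFC 5280 expresses such a constraint with the name-constraints extension; in OpenSSL config syntax the CA profile would look roughly like this (a sketch of the extension only, not something Let's Encrypt actually offers):

```
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:.example.com
```

A CA signed with this profile could only issue leaf certificates under example.com; anything else would fail validation in clients that enforce the extension.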
NB that rate limits apply https://letsencrypt.org/docs/rate-limits/
Plus, it takes setting up an API key and most of the time you don't need a wildcard anyway.
Wildcards are the only temptation.
That's so much more work than either of the options in my first comment. Aliasing a directory takes about one minute.
How so? It's just serving static files.
When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.
Certbot also has a mode that mangles your apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick, it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you a right to mess with other programs' config files, that's not how Unix works!
It is not as if they couldn't already choose (to buy) such short lifetimes.
Authoritarianism at its finest.
It is a terrible piece of software. I use dehydrated, which is much friendlier to automation.
The issue is that supporting dns-01 isn't just supporting dns-01; it's providing a common interface to interact with the different providers that implement dns-01.
I think this is kinda the OP's point: nginx is an HTTP server, so why should it be messing with DNS? There are plenty of other ACME clients that do this with ease.
It may still be required by some users, but I don't think it makes sense for nginx.
Well, I am supporting it, but I pointed out why it's not as straightforward as supporting http-01.
> I don't think that it makes sense for nginx
It makes sense for nginx because ultimately I don't make certificates just for the fun of it; I make them to give to some HTTP server. So it makes sense.
However, this is a feature that won't be used by paid users, and F5 seems to be opposed to making life better for users of the OSS version.
The best thing about Caddy is the fact that you can reload config and add sites and routes without ever having to shut down. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, the DNS service has the src mesh records; you just need a way to tell Caddy to send it to your backend.
The feature should be done soon but they need to ensure it works across K8s flavors.
He’s curious where it’s being used outside of home labs and small shops. Matt, it’s fantastic software and will only get better as Go improves.
I used it in a proxy setup for ingress to kubernetes that’s overlayed across multiple clouds - for the government (prior admin, this admin killed it). I can’t tell you more information than that. Other than it goes WWW -> ALB -> Caddy Cluster * Other Cloud -> K8s Router -> K8s pod -> Fiber Golang service. :chefs kiss:
When a pod is registered to the K8s router, we fire off a request to the caddy cluster to register the route. Bam, we got traffic, we got TLS, we got magic. No downtime.
Complex root domain routing and complex dynamic rewrite logic remains behind Apache/NginX/HaProxy, a lot of apps are then served in a container architecture with Caddy for easy cert renewal without relying on hacky certbot architectures. So we don't really serve that much traffic with just one instance. Also, a lot of our traffic is bots. More than one would think.
The basic configuration being tiny makes it the perfect fit for people with varying capabilities and know-how when it comes to devops. As a devops engineer, I enjoy the easy integration with Tailscale.
Not sure if you'll read this 7 days after the fact, but an easier/Caddy-native way to deal with bots, in the sense of caddy-defender or Anubis, would be a godsend.
Definitely something that's important. An Anubis caddy plugin is in the works too! See https://github.com/TecharoHQ/anubis/issues/16
However, anything in Caddy would likely still be a plugin and not native.
https://github.com/nginx/nginx/blob/master/LICENSE looks like a nice normal permissive license. I don't care that there's a premium version if all the features I want are in the OSS version.
It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.
I wonder what a good solution to this would be? In theory, Nginx could call another application that handles the communication with the DNS provider, so that the user can tailor it to their needs. (The user could write it in Python or Go or whatever.) Not sure how robust that would be though.
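A sketch of that split, assuming a purely hypothetical hook interface (nginx has no such mechanism today): the server shells out to a user-supplied program with the record name and token, and the program talks to whatever DNS provider API the user needs, in whatever language they like:

```python
import subprocess

def run_dns_hook(hook_cmd, action, domain, token):
    """Invoke a user-supplied hook, e.g. `myhook set _acme-challenge.example.com <token>`.

    The hook program is responsible for creating or deleting the TXT record
    via the provider's API; the caller only depends on this tiny,
    provider-agnostic contract. Raises CalledProcessError if the hook fails.
    """
    subprocess.run(
        [hook_cmd, action, f"_acme-challenge.{domain}", token],
        check=True,
    )
```

Robustness is the open question: the server would still need timeouts, retries, and a way to surface hook errors to the operator.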
Lego supports it.
I don’t believe DNS-over-HTTPS is relevant in this context. AFAIK, it’s used by clients who want to query a DNS server, and not for an operator who wants to create a DNS record. (Please correct me if I’m wrong.)
Let's ignore that DoH is a client-oriented protocol and that there's no sane way to run only a DoH server without an underlying DNS server. How do you plan to get the first certificate, so that queries to the DoH server don't get rejected for an invalid certificate?
- wildcard certs. DNS-01 is a strict requirement here.
- certs for a service whose TLS is terminated by multiple servers (e.g. load balancers). DNS-01 is a practical requirement here because only one of the terminating servers would be able to respond during an HTTP or ALPN challenge.
Reverse-proxying or otherwise forwarding requests for .well-known/acme-challenge/ to a single server should be just as easy to set up as DNS-01.
In other words, no, it's not just as easy as setting up DNS-01. Different operational characteristics, and a need for bespoke glue code.
Wouldn't you have to do that anyway? Or is the idea that each server requests and renews a separate cert for itself? That sounds as if you'd have to watch out for multiple servers stepping on each other's toes during the DNS-01 challenge, if there is ever a situation where two or more servers want to renew their cert at the same time.
https://datatracker.ietf.org/doc/draft-ietf-acme-dns-account...
2. Query for TXT records for the validation domain name
3. Verify that the contents of one of the TXT records match the digest value
And then the certbot docs[2] show how it's a well-behaved client that wouldn't clobber TXT records from concurrent instances:

> You can have multiple TXT records in place for the same name. For instance, this might happen if you are validating a challenge for a wildcard and a non-wildcard certificate at the same time. However, you should make sure to clean up old TXT records, because if the response size gets too big Let’s Encrypt will start rejecting it.
> ...
> It works well even if you have multiple web servers.
That bit about "multiple web servers" is a little ambiguous, but I think the preceding line indicates clearly enough how everything is supposed to work.
[0] https://datatracker.ietf.org/doc/html/rfc8555#section-8.4
[1] https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
(Angie is the nginx fork led by original nginx developers who left F5.)
Wrote an article on how to set it up: https://blog.haschek.at/2023/letsencrypt-wildcard-cert.html
The solution has been evolving over the years, and currently the latest IETF draft is https://datatracker.ietf.org/doc/draft-ietf-acme-dns-account-...
The new proposal brings the dns-account-01 challenge, incorporating the ACME account URL into the DNS validation record name.
One thing keeping me from switching to Caddy in my places is nginx's rate limiting and geo module.
> Nginx Introduces Native Support for Acme Protocol
IT: “It’s about fucking time!”
> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.
IT: “FUCK. Alright, domain registrar, mint me a new wildcard please, one of the leading web infrastructure providers still can’t do a basic LE DNS-01 pull in 2025.”
Seriously. PKI in IT is a PITA and I want someone to SOLVE IT without requiring AD CAs or Yet Another Hyperspecific Appliance (YAHA). If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.
While we’re at it, can we also allow DNS-01 certs to be issued for intermediate authorities, allowing internally-signed certificates to be valid via said Intermediary? That’d solve like, 99% of my PKI needs in any org, ever, forever.
By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised. Since only CAs can issue certs, and CAs have to pass at least some basic security scrutiny, clients have assurance that the thing giving it a cert got said cert from a trustworthy authority. If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.
> If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.
I mean, that's a valid ask. It will become more commonplace once some popular corporate offering includes it, and then all the competitors will adopt it so they don't leave money on the table. To get the first one to adopt it, be a whale of a customer and yell loudly that you want it, then wait 18 months.
This is where I get rankled.
In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.
Perhaps Intermediate CA is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the Root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”
The thinking is that this would allow more certificates to be issued internally and easily, but without the extra layer of management involved with a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and help improve general security hygiene, instead of letting everything use HTTPS with self-signed certificate warnings or letting every device talk to the internet for an HTTP-01 challenge.
If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.
The trust is always in the root itself.
It's not an active directory / LDAP / tree type mechanism where you can say I trust things at this node level and below.
So you have to modify all potential clients for this constraint to be enforced. So it's effectively worthless as there is no way to roll it out in any meaningful sense.
Installing custom CA certs isn't that hard once you figure out how to do it for each application. I had to write all the docs on this for the IT team, specific to each application, because they were too lazy to do it. Painful at first, but easy after. To avoid more pain later, make the certs expire in 2036, retire before then.
Automation is the goal, and right now internal PKI is far from automated like public-facing certs are. With ACME I can set-and-forget on public stuff that's not processing sensitive data or requires a premium certificate for, but internally it still seems like the only solution is an ADCA.
Or because it would expose the web PKI for the farce it is. Some shady corporation in bumfuckistan having authority to sign certificates for .gov.uk or even just your personal website is absolutely bonkers. Certificate authority should have always been delegated just like nameserver authority is.
https://en.angie.software/angie/docs/configuration/modules/h...
I’m sure nginx will get DNS, but it’s still an open question when it will support your particular registrar, or if it will at all.
You can sidestep that by delegating the ACME keys to your own name server.
# https://certbot.eff.org/instructions?ws=other&os=ubuntufocal
sudo apt-get -y install certbot
# sudo certbot certonly --standalone
...
# https://certbot.eff.org/docs/using.html#where-are-my-certificates
# sudo chmod -R 0755 /etc/letsencrypt/{live,archive}
So, unfortunately, this support still seems more involved than using certbot, but at least one less manual step is required.

Example from https://github.com/andrewmcwattersandco/bootstrap-express:
location ^~ /.well-known/acme-challenge/ {
alias <path-to-your-acme-challenge-directory>;
}
Dehydrated has been around for a while and is a great low-overhead option for http-01 renewal automation.

Edit: Downvote me all you want, that's reality folks: if you don't release v1.0.0, the interface you consume can change without you realizing it.
Don't consume major version 0 software; it'll bite you one day. Convince your maintainers to release stable cuts if they've been sitting on major version 0 for years. It's just lazy and immature practice, abusing semantic versioning. Maintainers can learn and grow. It's normal.
Dehydrated has been major version 0 for 7 years, it's probably past due.
See also React, LÖVE, and others that made 0.n.x jumps to n.x.x. (https://0ver.org)
CalVer: "If both you and someone you don't know use your project seriously, then use a serious version."
SemVer: "If your software is being used in production, it should probably already be 1.0.0."
Maybe not distasteful by any one in particular, but just distasteful by fate or as an indicator of misaligned incentives or something?
Why not?
You could then merge updates back from upstream.
Be the change you want to see!
Edit to comment on the edit:
> Edit: Downvote me all you want
I don't generally downvote, but if I were going to I would not need your permission :)
> that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.
I assume you meant "present" there rather than "consume"?
Anyway, 1.0.0 is just a number. Without relevant promises and a track record and/or contract to back them up breaking changes are as likely there as with any other number. A "version 0.x.x" of a well used and scrutinized open source project is more reliable and trustworthy than something that has just had a 1.0.0 sticker slapped on it.
Edit after more parent edits: or go with one of the other many versioning schemes. Maybe ItIsFunToWindUpEntitledDicksVer Which says "stick with 0.x for eternity, go on, you know you want to!".
It has been one of the most reliable parts of my infrastructure and I have to think about it so rarely that I had to go dig the link out of my automation repository.
Unfortunately, web.archive.org didn't grab an https version of my main site from around that period. My oldest server build script in my current collection does have the following note in it:
**Get the current version of dehydrated from https://github.com/dehydrated-io/dehydrated **
(Dehydrated was previously found at https://github.com/lukas2511/dehydrated)
...so I was using it back when it was under the lukas2511 account. Those tech notes, however, were rescued from a long-dead Phabricator installation, so I no longer have the change history for them, unless I go back and try to resurrect its database, which I think I do still have kicking around on one of my cold storage drives...

But yeah, circa 2015-2016 should be about right. I had been hosting stuff for clients since... phew, 2009? So Let's Encrypt was something I wanted to adopt pretty early, because back then certificate renewals were kind of annoying and often not free, but I also didn't want to load whatever the popular ACME client was at the time. Then this post popped up, and it was exactly what I had been looking for, and I would have started using it soon after.
edit: my Linode account has been continuously active since October 2009, though it only has a few small legacy services on it now. I started that account specifically for hosting mail and web services for clients I had at the time. So, yeah, my memory seems accurate enough.
Sacrificing a version number segment as a permanent zero prefix to keep them away is the most practical way to appease semver's fans, given that they exist in numbers and make ill-conceived attempts to depend on semver's purported eldritch law-magics in tooling. It's a bit like the "Mozilla" in browser user-agents; I hope we can stop at one digit sacrificed, rather than ending up like user-agents did, though.
In other words, 0ver, unironically. Pray we do not need 0.0ver.
I'm not using nginx these days because of this.
That said, it's going to take quite some time for this to land in the stable repositories for Ubuntu and Debian, and it doesn't (yet?) have DNS challenge support - meaning no wildcards - so I don't think it'll be useful for Dokku in the short term, at least.
I tried Dokku (and am still trying!) and it is so hard to get started.
For reference:
- I've used Coolify successfully, where it required me to create a GitHub app to deploy my apps on pushes to master
- I've written GH Actions to build and deploy containers to big cloud
This page is what I get if I want to achieve the same, and it's completely a reference book approach - I feel like I'm reading an encyclopedia. https://dokku.com/docs/deployment/methods/git/#initializing-...
Contrast it with this, which is INSTANTLY useful and helps me deploy apps hot off the page: https://coolify.io/docs/knowledge-base/git/github/integratio...
What I would love to see for Dokku is tutorials for popular OSS apps and set-objective/get-it-done style getting started articles. I'd LOVE an article that takes me from baremetal to a reverse proxy+a few popular apps. Because the value isn't in using Dokku, it's in using Dokku to get to that state.
I'm trying to use dokku for my homeserver.
Ideally I want a painless, quick way to go from "hey here's a repo I like" to "deployed on my machine" with Dokku. And then once that works, peek under the hood.
But I’m not going back. Nginx was a real pain to configure with so many puzzles and surprises and foot guns.
I figured either somehow Let's Encrypt doesn't work out, or, everybody bakes in ACME within 2-3 years. The idea that you can buy software in 2025 which has TLS encryption but expects you to go sort out the certificate. It's like if cars had to be refuelled periodically by taking them to a weird dedicated building which is not useful to anything else rather than just charging while you're asleep like a phone and... yeah you know what I get it now. You people are weird.
In this very thread some people complain that certbot uses snap for distribution. Imagine making a feature release and having to wait 1-2 years until your users will get it on a broad scale.
I looked at Arch and they're a version behind, which surprised me. Must not be a heavily maintained arch package.
Anyway, good luck staying competitive, lol. Almost everyone I know has either jumped to something saner or is in the process of migrating away.
https://en.angie.software/angie/docs/configuration/modules/h...
The original announcement of Angie ACME:
Angie, fork of Nginx, supports ACME - https://news.ycombinator.com/item?id=39838228 - March 27, 2024 (1 comment)
Per above, it looks like ACME support was released with Angie 1.5.0 on 2024-03-27.
BTW, if you don't care about ACME, and want the original nginx, then there's also the freenginx fork, too:
Freenginx: Core Nginx developer announces fork - https://news.ycombinator.com/item?id=39373327 - (1131 points) - Feb 14, 2024 (475 comments)
If you wanted to use LE, though, you could use a more "traditional" cert renewal process somewhere out-of-band, and then provision the resulting keys/certs through whatever coordination thing you contrive (and HUP the nginxes).
Downside is obviously certificate maintenance increases, but ACME automated the vast majority of that work away.
Also does it make it easier for there to be alternatives to Let's Encrypt?
You can specify any ACME API base URL. It’s not just Let’s Encrypt.
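The wider ecosystem works the same way: most ACME clients accept an arbitrary directory URL. As an illustration with certbot (ZeroSSL's endpoint shown; not specific to the nginx module being discussed):

```shell
# Any RFC 8555 directory URL works, not just Let's Encrypt's.
# (ZeroSSL additionally requires EAB credentials via --eab-kid / --eab-hmac-key.)
certbot certonly --standalone \
  --server https://acme.zerossl.com/v2/DV90 \
  -d example.com
```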
Probably like many others here, I would very much like to see Cloudflare DNS support.
As you've said Debian 13 was released 4 days ago - it takes some time to spin up the infrastructure for a new OS (and we've been busy with other tasks, like getting nginx-acme and 1.29.1 out).
(I work for F5)
Having distinct tools for serving content and handling certs is not a problem, and nothing changes on this side. Moreover, the module won't cover every need.
BTW, certbot is rather a "fat tool" compared to other ACME clients like lego. I've had bad experiences with certbot in the past: it tried to do too much automatically and was hard to diagnose. Though I think certbot has been rewritten since then; it no longer depends on Python's zope.
I switched to Lego because it has out of the box support for my domain registrar so I could use DNS instead of HTTP challenge. It’s also a single go binary which is much simpler to install than certbot.
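A sketch of what that looks like with lego and a DNS-01 challenge (Cloudflare shown as an illustration; the token env var name comes from lego's per-provider docs, and the email/domain are placeholders):

```shell
# Single static Go binary, DNS-01 challenge; no web server involvement needed.
CLOUDFLARE_DNS_API_TOKEN=your-token \
lego --email you@example.com \
     --dns cloudflare \
     --domains example.com \
     run
```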
Caddy & Traefik did it long, long ago (half a decade ago), and now we finally have nginx supporting it too. Great move though; finally I won't have to manually run certbot :pray:
So Nginx is just about 9 to 10 years late. Lol
I need a tool to issue certs for a bunch of other services anyway, I don't really see how it became such a thing for people to want it embedded in their web server.
The concern is that the author failed to understand why his batshit-crazy intended behaviour was a bad design from the start.
The author did neither - he was steadfast that his approach was correct, and everyone else was wrong.
Someone references a time when you made an ass-backwards decision and insisted you were correct; your immediate response is not any kind of explanation of how you learnt to trust other people's opinions, or even an acknowledgement that you got it wrong. Instead you resort to petty, childlike attempts at insult.
This served us well for many years before migrating to use Kamal [3] for its improved remote management features.
[1] https://docs.servicestack.net/ssh-docker-compose-deploment
I just pre-populate with a self-signed cert to start, though I'd have to check how to do that in docker.
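Pre-populating is a one-liner with openssl; a sketch with illustrative paths (in Docker you'd bake this into the image or an init container):

```shell
# Generate a throwaway self-signed cert so nginx can start before the
# first real certificate has been issued. Paths are illustrative.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout certs/privkey.pem -out certs/fullchain.pem \
  -days 1 -subj "/CN=localhost"
```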
This is all it takes to start a nginx server. Add this block and everything starts up perfectly first time, using proper systemd sandboxing, with a certificate provisioned, and with a systemd timer for autorenewing the cert. Delete the block, and it's like the server never existed, all of that gets torn down cleanly.
services.nginx = {
  enable = true;
  virtualHosts = {
    "mydomain.com" = {
      enableACME = true;
      locations."/" = {
        extraConfig = ''
          # Config goes here
        '';
      };
    };
  };
};
# enableACME also needs the terms accepted once, globally:
security.acme.acceptTerms = true;
security.acme.defaults.email = "you@example.com";
I recently wanted to create a shortcut domain for our wedding website, redirecting to the SaaS wedding provider. The above made that a literal 1 minute job.

Run `certbot certonly` on the host once to get the initial certs, and choose the option to run a temporary server rather than using nginx. Then in `compose.yml`, have a mapping from the host's certificates into the nginx container. That way you don't have to touch your nginx config when setting up a new server.
You can then use a certbot container to do the renewals.
E.g.
  nginx:
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt

  certbot:
    image: certbot/certbot
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
In your nginx.conf you have:

ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

and also, for the renewals:

location /.well-known/ {
    alias /usr/share/nginx/html/.well-known/;
}

Personally I use DNS everywhere. I have a central server running dehydrated with DNS challenges every night, which then rsyncs the results to all the servers (I'm going to replace it with Vault). I kind of like having one place to check for certs.
I keep these things separate on the servers I configure:
- Setting up PKI related things like DH Params and certs (no Docker)
- My app (Docker)
- Reverse proxy / TLS / etc. with nginx (no Docker)
This allows configuring a server in a way where all nginx configuration works over HTTPS, and the PKI bits use either a self-signed certificate or certbot with DNS validation, depending on what you're doing. It gets around all forms of chicken / egg problems and reduces a lot of complexity.

Switching between self-signed, Let's Encrypt or 3rd-party certs is a matter of updating one symlink, since nginx is configured to read the symlink's destination. This makes things easy to test and adds a level of disaster recovery / reliability that helps me sleep at night.
This combo has been running strong since all of these tools were available. Before Let's Encrypt was available I did the same thing, except I used 3rd party certs.
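The one-symlink switch described above can be sketched like this (directory names are illustrative; nginx's ssl_certificate would point under ssl/current/):

```shell
# "current" is a symlink to whichever cert set is live; flipping it
# and reloading nginx swaps certificates atomically.
mkdir -p ssl/self-signed ssl/letsencrypt
ln -sfn self-signed ssl/current   # boot with the self-signed set
ln -sfn letsencrypt ssl/current   # later: flip to the Let's Encrypt set
readlink ssl/current              # prints "letsencrypt"
# nginx -s reload                 # pick up the new target
```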
That's when I found "golang.org/x/crypto/acme/autocert" and then I built a custom redirect server using it. It implements TLS-ALPN-01 which works fantastically with Let's Encrypt.
Now we can just add a domain to our web configuration, set up its target and redirect style, and then push the configuration out to the EC2 instance providing the public-facing service. As soon as the first client makes a request, they're effectively put "on hold" while the server arranges for the certificate in the background. As soon as it's issued and installed, the server continues with the original client.
It's an absolute breeze and it makes me utterly detest going backwards to DNS-01 or HTTP-01 challenges.
Looking forward to this. HTTP-01 already works well enough for me with certbot (which I need for other services anyway and gives me more control over having multiple domains in one cert) but for wildcard certs there are not as many good solutions.
Automating webroot is trivial and I would rather use an external rust utility to handle it than a module for nginx. I guess if you _only_ need certs for your website then this helps but I have certs for a lot of other things too, so I need an external utility anyway.
And no dns-01 support yet.
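For comparison, the webroot automation the comment calls trivial is roughly one command, shown here with certbot rather than a Rust utility (paths and domain are illustrative):

```shell
# certbot drops the HTTP-01 challenge file into the directory nginx
# already serves for /.well-known/, so no downtime is needed.
certbot certonly --webroot \
  -w /usr/share/nginx/html \
  -d example.com
```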
johnisgood•5mo ago
Caddy sounds interesting too, but I am afraid of switching because what I have works properly. :/
roywashere•5mo ago
KronisLV•5mo ago
But also hey, now we have built-in ACME support in all the mainstream web servers: Nginx, Caddy and Apache2! Ofc Caddy will be the most polished, since that is one of its main selling points.
orphea•5mo ago
bityard•5mo ago
When I tried using Caddy with something serious for the first time, I thought I was missing something. I thought, these docs must be incomplete, there has to be more to it, how does it know to do X based on Y, this is never going to work...
But it DID work. There IS almost nothing to it. You set literally the bare minimum of configuration you could possibly need, and Caddy figures out the rest and uses sane defaults. The docs are VERY good, there is a nice community around it.
If I had any complaint at all, it would be that the plugin system is slightly goofy.