
Nginx introduces native support for ACME protocol

https://blog.nginx.org/blog/native-support-for-acme-protocol
314•phickey•4h ago•120 comments

PYX: The next step in Python packaging

https://astral.sh/pyx
87•the_mitsuhiko•1h ago•33 comments

Fuse is 95% cheaper and 10x faster than NFS

https://nilesh-agarwal.com/storage-in-cloud-for-llms-2/
24•agcat•51m ago•5 comments

OCaml as my primary language

https://xvw.lol/en/articles/why-ocaml.html
105•nukifw•2h ago•62 comments

FFmpeg 8.0 adds Whisper support

https://code.ffmpeg.org/FFmpeg/FFmpeg/commit/13ce36fef98a3f4e6d8360c24d6b8434cbb8869b
676•rilawa•9h ago•252 comments

Pebble Time 2* Design Reveal

https://ericmigi.com/blog/pebble-time-2-design-reveal/
130•WhyNotHugo•5h ago•56 comments

Launch HN: Golpo (YC S25) – AI-generated explainer videos

https://video.golpoai.com/
31•skar01•2h ago•49 comments

Cross-Site Request Forgery

https://words.filippo.io/csrf/
40•tatersolid•2h ago•8 comments

So what's the difference between plotted and printed artwork?

https://lostpixels.io/writings/the-difference-between-plotted-and-printed-artwork
142•cosiiine•6h ago•50 comments

Coalton Playground: Type-Safe Lisp in the Browser

https://abacusnoir.com/2025/08/12/coalton-playground-type-safe-lisp-in-your-browser/
74•reikonomusha•5h ago•25 comments

DoubleAgents: Fine-Tuning LLMs for Covert Malicious Tool Calls

https://pub.aimind.so/doubleagents-fine-tuning-llms-for-covert-malicious-tool-calls-b8ff00bf513e
62•grumblemumble•6h ago•18 comments

ReadMe (YC W15) Is Hiring a Developer Experience PM

https://readme.com/careers#product-manager-developer-experience
1•gkoberger•3h ago

rerank-2.5 and rerank-2.5-lite: instruction-following rerankers

https://blog.voyageai.com/2025/08/11/rerank-2-5/
6•fzliu•1d ago•1 comment

The Mary Queen of Scots Channel Anamorphosis: A 3D Simulation

https://www.charlespetzold.com/blog/2025/05/Mary-Queen-of-Scots-Channel-Anamorphosis-A-3D-Simulation.html
60•warrenm•6h ago•13 comments

New treatment eliminates bladder cancer in 82% of patients

https://news.keckmedicine.org/new-treatment-eliminates-bladder-cancer-in-82-of-patients/
195•geox•4h ago•91 comments

This website is for humans

https://localghost.dev/blog/this-website-is-for-humans/
369•charles_f•4h ago•179 comments

How Silicon Valley can prove it is pro-family

https://www.thenewatlantis.com/publications/how-silicon-valley-can-prove-it-is-pro-family
8•jger15•1h ago•0 comments

April Fools 2014: The *Real* Test Driven Development (2014)

https://testing.googleblog.com/2014/04/the-real-test-driven-development.html
74•omot•2h ago•14 comments

OpenIndiana: Community-Driven Illumos Distribution

https://www.openindiana.org/
54•doener•4h ago•45 comments

Google Play Store Bans Wallets That Don't Have Banking License

https://www.therage.co/google-play-store-ban-wallets/
32•madars•1h ago•14 comments

We caught companies making it harder to delete your personal data online

https://themarkup.org/privacy/2025/08/12/we-caught-companies-making-it-harder-to-delete-your-data
217•amarcheschi•6h ago•52 comments

DeepKit Story: how $160M company killed EU trademark for a small OSS project

https://old.reddit.com/r/ExperiencedDevs/comments/1mopzhz/160m_vcbacked_company_just_killed_my_eu_trademark/
21•molszanski•57m ago•6 comments

29 years later, Settlers II gets Amiga release

https://gamingretro.co.uk/29-years-later-settlers-ii-finally-gets-amiga-release/
57•doener•1h ago•15 comments

A case study in bad hiring practice and how to fix it

https://www.tomkranz.com/blog1/a-case-study-in-bad-hiring-practice-and-how-to-fix-it
76•prestelpirate•3h ago•65 comments

Claude says “You're absolutely right!” about everything

https://github.com/anthropics/claude-code/issues/3382
525•pr337h4m•13h ago•414 comments

Job Listing Site Highlighting H-1B Positions So Americans Can Apply

https://www.newsweek.com/h1b-jobs-now-american-workers-green-cards-2041404
34•walterbell•1h ago•9 comments

Honky-Tonk Tokyo (2020)

https://www.afar.com/magazine/in-tokyo-japan-country-music-finds-an-audience
19•NaOH•4d ago•6 comments

PCIe 8.0 Announced by the PCI-Sig Will Double Throughput Again – ServeTheHome

https://www.servethehome.com/pcie-8-0-announced-by-the-pci-sig-will-double-throughput-again/
48•rbanffy•3d ago•54 comments

New downgrade attack can bypass FIDO auth in Microsoft Entra ID

https://www.bleepingcomputer.com/news/security/new-downgrade-attack-can-bypass-fido-auth-in-microsoft-entra-id/
7•mikece•39m ago•1 comment

Gartner's Grift Is About to Unravel

https://dx.tips/gartner
92•mooreds•4h ago•44 comments

Nginx introduces native support for ACME protocol

https://blog.nginx.org/blog/native-support-for-acme-protocol
312•phickey•4h ago

Comments

johnisgood•3h ago
For now I will stick to what works (nginx + certbot), but I will give this a try. Anyone tried it?

Caddy sounds interesting too, but I am afraid of switching because what I have works properly. :/

roywashere•3h ago
I like it! I am using Apache mod_md on Debian for a personal project. That works fine, but when setting up a new site it somehow required two Apache restarts, which is not super smooth
orphea•2h ago
Caddy has been great for me. I don't think you should switch if your current setup works but give it a try in a new project.
bityard•2h ago
I grew up on Apache and eventually became a wizard with its configuration and myriad options and failure modes. Later on, I got semi-comfortable with nginx, which was a little simpler because it did less than Apache, but you could still get a fairly complex configuration going if you're running weird legacy PHP apps, for example.

When I tried using Caddy with something serious for the first time, I thought I was missing something. I thought, these docs must be incomplete, there has to be more to it, how does it know to do X based on Y, this is never going to work...

But it DID work. There IS almost nothing to it. You set literally the bare minimum of configuration you could possibly need, and Caddy figures out the rest and uses sane defaults. The docs are VERY good, there is a nice community around it.

If I had any complaint at all, it would be that the plugin system is slightly goofy.

dizhn•3h ago
This is pretty big. Caddy had this forever but not everybody wants to use caddy. It'll probably eat into the user share of software like Traefik.
elashri•3h ago
What I really like about Caddy is their better syntax. I actually use nginx (via nginx proxy manager) and Traefik but recently I did one project with Caddy and found it very nice. I might get the time to change my selfhosted setup to use Caddy in the future but probably will go with something like pangolin [1] because it provides alternative to cloudflare tunnels too.

[1] https://github.com/fosrl/pangolin

kstrauser•3h ago
I agree. That, and the sane defaults are almost always nearly perfect for me. Here is the entire configuration for a TLS-enabled HTTP/{1.1,2,3} static server:

  something.example.com {
    root * /var/www/something.example.com
    file_server
  }
That's the whole thing. Here's the setup of a WordPress site with all the above, plus PHP, plus compression:

  php.example.com {
    root * /var/www/wordpress
    encode
    php_fastcgi unix//run/php/php-version-fpm.sock
    file_server
  }
You can tune and tweak all the million other options too, of course, but you don't have to for most common use cases. It Just Works more than any similarly complex server I've ever been responsible for.
dizhn•3h ago
I checked out pangolin too recently, but then I realized that I already have Authentik, and with its embedded (Go-based) proxy I don't really need pangolin.
Saris•1h ago
Caddy does have some bizarre limitations I've run into, particularly around log file permissions: Caddy always writes log files with very restrictive permissions, and you cannot change them, so other processes like promtail can't read the logs.

I find their docs also really hard to deal with; trying to figure out something that would be super simple on Nginx can be really difficult on Caddy, if it's outside the scope of 'normal stuff'

The other thing I really don't like is if you install via a package manager to get automated updates, you don't get any of the plugins. If you want plugins you have to build it yourself or use their build service, and you don't get automatic updates.

francislavoie•1h ago
Actually, you can set the permissions for log files now. See https://caddyserver.com/docs/caddyfile/directives/log#file
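(For reference, a minimal Caddyfile sketch of the option mentioned above; the `mode` subdirective is per the linked docs, and the site name and log path are placeholders:)

```caddyfile
example.com {
	log {
		output file /var/log/caddy/access.log {
			# world-readable so e.g. promtail can tail it
			mode 0644
		}
	}
}
```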
Saris•56m ago
Oh good to know!

Do you know if Caddy can self update, or if there is some other easy method? Manually doing it to get the cloudflare plugin is a pain.

francislavoie•51m ago
No, you have to build Caddy with plugins. We provide xcaddy to make it easy. Sign up for notifications on github for releases, and just write yourself a tiny bash script to build the binary with xcaddy, and restart the service. You could potentially do a thing where you hook into apt to trigger your script after Caddy's deb package version changes, idk. But it's up to you to handle.
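(The script in question can be as small as this sketch; the Cloudflare module is just an example plugin, and the install path and service name are assumptions:)

```shell
#!/usr/bin/env sh
set -eu
# Rebuild Caddy with the desired plugin via xcaddy, then swap the binary in
xcaddy build --with github.com/caddy-dns/cloudflare
sudo install -m 0755 caddy /usr/local/bin/caddy
sudo systemctl restart caddy
```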
thrown-0825•3h ago
Definitely. I use traefik for some stuff at home and will likely swap it out now.
grim_io•3h ago
I configure traefik by defining a few docker labels on the services themselves. No way I'm going back to using the horrible huge nginx config.
tgv•2h ago
I switched over to Caddy recently. Nginx's non-information about the HTTP/1.1 desync problem drove me over. I'm not going to wait for something stupid to happen, or for an auditor to ask me questions nginx doesn't answer.

Caddy is really easier than nginx. For starters, I now have templates that cover the main services and their test services, and the special service that runs for an educational institution. Logging is better. Certificate handling is perfect (for my case, at least). And it has better metrics.

Now I have to figure out plugins though, because Caddy doesn't have rate limiting, and some stupid bug in Power BI makes a single user hit certain images 300,000 times per day. That's a bit of a downside.

dekobon•1h ago
I did a google search for the desync problem and found this page: https://my.f5.com/manage/s/article/K30341203

This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?

cobbzilla•3h ago
There’s a section on renewals but no description of how it works. Is there a background thread/process? Or is it request-driven? If request-driven, what about some hostname that’s (somehow) not seen traffic in >90 days?
adontz•3h ago
certbot has a plugin for nginx, so I'm not sure why people think it was hard to use Let's Encrypt with nginx.
orblivion•3h ago
From a quick look it seems like a command you use to reconfigure nginx? And that's separate from auto-renewing the cert, right?

Maybe not hard, but Caddy seems like even less to think about.

orblivion•3h ago
I guess I should compare to this new Nginx feature rather than Caddy. It seems like the benefit of this feature is that you don't have a tool to run, you have a config to put into place. So it's easier to deploy again if you move servers, and you don't have to think about making sure certbot is doing renewals.
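(As a rough sketch of what that config-driven approach looks like, based on the nginx-acme module's documented directives, placed inside the http block; domain, contact, and paths are placeholders, so verify against the module docs before relying on it:)

```nginx
resolver 127.0.0.1;

acme_issuer letsencrypt {
    uri        https://acme-v02.api.letsencrypt.org/directory;
    contact    mailto:admin@example.com;
    state_path /var/cache/nginx/acme-letsencrypt;
    accept_terms_of_service;
}

server {
    listen      443 ssl;
    server_name example.com;

    # request and renew a certificate from the issuer defined above
    acme_certificate letsencrypt;

    ssl_certificate     $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
}
```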
creshal•3h ago
Certbot is a giant Swiss Army chainsaw that can do everything middlingly well, if you don't mind vibecoding your encryption infrastructure. But a clean solution it usually isn't.

(That said, I'm not too thrilled by this implementation. How are renewals and revocations handled, and how can the processes be debugged? I hope the docs get updated soon.)

jeroenhd•2h ago
Certbot always worked fine for me. It autodetects just about everything and takes care of just about everything, unless you manually instruct it what to do (i.e. re-use a specific CSR) and then it does what you tell it to do.

It's not exactly an Ansible/Kubernetes-ready solution, but if you use those tools you already know a tool that solves your problem anyway.

jddj•3h ago
From the seeming consensus I was dreading setting let's encrypt up on nginx, until I did it and it was and has been... Completely straightforward and painless.

Maybe if you step off the happy path it gets hairy, but I found the default certbot flow to be easy.

9dev•2h ago
Certbot is a utility that can only be installed via snap. That crap won’t make it to our servers, and many other people view it the same way I do.

So this change is most welcome.

bityard•2h ago
Maybe it's better these days, but even as an experienced systems administrator, I found certbot _incredibly_ annoying to use in practice. They tried to make it easy and general-purpose for beginners to web hosting, but they did it with a lot of magic that does Weird Stuff to your host and server configuration. It probably works great if you're in an environment where you just install things via tarball, edit your config files with Nano, and then rarely ever touch the whole setup again.

But if you're someone who needs tight control over the host configuration (managed via Ansible, etc) because you need to comply with security standards, or have the whole setup reproducible for disaster recovery, etc, then solutions like acme.sh or LEGO are far smaller, just as easy to configure, and in general will not surprise you.

do_not_redeem•3h ago
It looks like this isn't included by default with the base nginx, but requires you to install it as a separate module. Or am I wrong?

https://github.com/nginx/nginx-acme

bhaney•3h ago
Nginx itself is mostly just a collection of modules, and it's up to the one building/packaging the nginx distribution to decide what goes in it. By default, nginx doesn't even build the ssl or gzip modules (though thankfully it does build the http module by default). Historically it only had static modules, which needed to be enabled or disabled at compile time, but now it has dynamic modules that can be compiled separately and loaded at runtime. Some older static modules now have the option of being built as dynamic modules, and new modules that can be written as dynamic modules generally are. A distro can choose to package a new dynamic module in their base nginx package, as a separate package, or not at all.

In a typical distro, you would normally expect one or more virtual packages representing a profile (minimal, standard, full, etc) that depends on a package providing an nginx binary with every reasonable static-only module enabled, plus a number of separately packaged dynamic modules.

timw4mail•3h ago
Yes, that is correct.
Shank•3h ago
> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

DNS-01 is probably the most impactful for nginx deployments that aren't public-facing (e.g., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt it's also one of the cleanest challenges, because it's just updating some records and doesn't need to be directly tethered to what you're hosting.

clvx•3h ago
But you have to have your DNS API key loaded, and many DNS providers don't allow API keys per zone. I do like it, but a compromise could be awful.
grim_io•3h ago
Sounds like a DNS provider problem. Why would Nginx feel the need to compromise because of some 3rd party implementation detail?
toomuchtodo•42m ago
Because users would pick an alternative solution that meets their needs when they don't have leverage or ability to change DNS provider. Have to meet users where they are when they have options.
bananapub•3h ago
no you don't, you can just run https://github.com/joohoi/acme-dns anywhere, and then CNAME _acme-challenge.realdomain.com to aklsfdsdl239072109387219038712.acme-dns.anywhere.com. Then your ACME client just talks to the acme-dns API, which lets it do nothing at all aside from deal with challenges for that one long random domain.
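(For reference, the delegation described above boils down to a single static record in the real zone; the names here are placeholders:)

```text
; one-time CNAME in the real zone, delegating challenges to the acme-dns instance
_acme-challenge.realdomain.com.  IN  CNAME  d3f1a2b4-example.auth.acme-dns.example.
```

After that, the ACME client only ever writes TXT records on the acme-dns side, via its HTTP API, with credentials scoped to that one random name.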
rglullis•2h ago
I've been hoping to get ACME challenge delegation on traefik working for years already. The documentation says it supports it, but it simply fails every time.

If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.

Arnavion•1h ago
You can do it with an NS record, i.e. _acme-challenge.realdomain.com pointing to a DNS server that you can program to serve the challenge response. No need to make a CNAME and involve an additional domain in the middle.
aflukasz•1h ago
Yeah, but then you can just as well use http-01 with about the same effort.
gruez•1h ago
no, because dns supports wildcard certificates, unlike http.
aflukasz•48m ago
Ah, good point.
hashworks•3h ago
If you host a hidden primary yourself you get that easily.
Sesse__•3h ago
Many DNS providers also don't support having an external primary.
nulbyte•1h ago
Do most of them let you add an NS record?
qwertox•56m ago
And if they don't, you might consider switching to Cloudflare for DNS hosting.
xiconfjs•3h ago
if even PowerDNS doesn't support it :(
ddtaylor•2h ago
It's a bit of a pain in the ass, but you can actually just publish the DNS records yourself. It's clear they are on the way out though, as I believe it's only a 30-day valid certificate or something.

I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do so correctly.

qwertox•1h ago
You can make the NS record for the _acme-challenge.domain.tld point to another server which is under your control, that way you don't have to update the zone through your DNS hoster. That server then only needs to be able to resolve the challenges for those who query.
immibis•1h ago
General note: your DNS provider can be different from your registrar, even though most registrars are also providers, and you can be your own DNS provider. The registrar is who gets the domain name under your control, and the provider is who hosts the nameserver with your DNS records on it.
qwertox•57m ago
Yes, and you can be your own DNS provider only for the challenges, everything else can stay at your original DNS provider.
Spivak•3h ago
I don't even know why anyone wouldn't use the DNS challenge unless they had no other option. I've found it to be annoying and brittle, maybe less so now with native web server support. And you can't get wildcards.
cortesoft•3h ago
My work is mostly running internal services that aren’t reachable from the external internet. DNS is the only option.

You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.

bryanlarsen•3h ago
> DNS is the only option

DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.

But they're the only sane options.

filleokus•2h ago
Spivak is saying that the DNS method is superior (i.e you are agreeing - and I do too).

One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance: issuing the certificate when you get the request. Which might seem insane (and kinda is), but it can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply levelled domain sprawl that is almost never used, but you need it up for some reason, it can be quite handy.

(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)

cortesoft•2h ago
Oh I totally misread the comment.

Nevermind, I agree!

Sharparam•1h ago
The comment is strangely worded, I too had to read it over a couple of times to understand what they meant.
cyberax•2h ago
One problem with wildcards is that any service with *.foo.com can pretend to be any other service. This is an issue if you're using mutual TLS authentication and want to trust the server's certificate.

It'd be nice if LE could issue intermediary certificates constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).

bityard•2h ago
The advantage to HTTP validation is that it's simple. No messing with DNS or API keys. Just fire up your server software and tell it what your hostname is and everything else happens in the background automagically.
jeroenhd•2h ago
If you buy your domain with a bottom-of-the-barrel domain reseller and then not pay for decent DNS, you don't have the option.

Plus, it takes setting up an API key and most of the time you don't need a wildcard anyway.

creatonez•3h ago
Why would nginx ever need support for the DNS-01 challenge type? It always has access to `.well-known` because nginx is running an HTTP server for the entire lifecycle of the process, so you'd never need to use a lower level way of doing DV. And that seems to violate the principle of least privilege, since you now need a sensitive API token on the server.
lukeschlather•2h ago
Issuing a new certificate with the HTTP challenge pretty much requires you allow for 15 minutes of downtime. It's really not suitable for any customer-facing endpoint with SLAs.
kijin•2h ago
Only if you let certbot take down your normal nginx and occupy port 80 in standalone mode. Which it doesn't need to, if normal nginx can do the job by itself.

When I need to use the HTTP challenge, I always configure the web server in advance to serve /.well-known/ from a certain directory and point certbot at it with `certbot certonly --webroot-path`. No need to take down the normal web server. Graceful reload. Zero downtime. Works with any web server.
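(That webroot arrangement is just a couple of lines in the server block; paths here are examples:)

```nginx
# Serve ACME challenge files from a fixed directory, independent of the app
location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
}
```

paired with something like `certbot certonly --webroot -w /var/www/letsencrypt -d example.com`, followed by a graceful `nginx -s reload` once the certificate renews.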

chrismorgan•2h ago
Sounds like you’re doing it wrong. I don’t know about this native support, but I’d be very surprised if it was worse than the old way, which could just have Certbot put files in a path NGINX was already serving (webroot method), and then when new certificates are done send a signal for NGINX to reload its config. There should never be any downtime.
kijin•2h ago
Certbot has a "standalone" mode that occupies port 80 and serves /.well-known/ by itself.

Whoever first recommended using that mode in anything other than some sort of emergency situation needs to be given a firm kick in the butt.

Certbot also has a mode that mangles your apache or nginx config files in an attempt to wire up certificates to your virtual hosts. Whoever wrote the nginx integration also needs a butt kick, it's terrible. I've helped a number of people fix their broken servers after certbot mangled their config files. Just because you're on a crusade to encrypt the web doesn't give you a right to mess with other programs' config files, that's not how Unix works!

jofla_net•15m ago
Also, whoever decided that service providers were no longer autonomous to determine the expiration times of their own infrastructure's certificates should get that boot-to-the-head as well.

It is not as if they couldn't already choose (to buy) such short lifetimes.

Authoritarianism at its finest.

tomku•14m ago
Those choices and Certbot strongly encouraging snap installation was enough to get me to switch to https://go-acme.github.io/lego/, which I've been very happy with since. It's very stable and feels like it was built by people who actually operate servers.
Kwpolska•2h ago
Where would this downtime come from? Your setup is really badly configured if you need downtime to serve a new static file.
0x457•2h ago
Because while Nginx always has access to .well-known, the thing that validates on the issuer's side might not reach it. I use the DNS challenge to issue certificates for domains that resolve to IPs in my overlay network.

The issue is that supporting dns-01 isn't just supporting dns-01; it's providing a common interface to the different providers that implement it.

justusthane•2h ago
You can’t use HTTP-01 if the server running nginx isn’t accessible from the internet. DNS-01 works for that.
chrismorgan•2h ago
Wildcard certificates are probably the most important answer: they’re not available via HTTP challenge.
samgranieri•2h ago
I use DNS-01 in my homelab with step-ca and Caddy. It's a joy to use
reactordev•2h ago
+1 for caddy. nginx is so 2007.
supriyo-biswas•2h ago
Only if they'd get the K8s ingress out of the WIP phase; I can't wait to possibly get rid of the cert-manager and ingress shenanigans you get with others.
reactordev•2h ago
Yup. I can’t wait for the day I can kill my caddy8s service.

The best thing about caddy is the fact you can reload config, add sites, routes, without ever having to shutdown. Writing a service to keep your orchestration platform and your ingress in sync is meh. K8s has the events, DNS service has the src mesh records, you just need a way to tell caddy to send it to your backend.

The feature should be done soon but they need to ensure it works across K8s flavors.

01HNNWZ0MV43FF•1h ago
I think you can do that with Nginx too, but the SWAG wrapper discourages it for some reason
pushrax•31m ago
just send SIGHUP to nginx and it will reload all the config; there are very few settings that require a restart
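(Concretely, either of these triggers a graceful reload, assuming the stock pid location; adjust for containers:)

```shell
# Graceful reload: the master re-reads config, spawns new workers, drains old ones
nginx -s reload
# equivalent: signal the master process directly
kill -HUP "$(cat /var/run/nginx.pid)"
```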
reactordev•27m ago
Sure, how, from the container? The host it’s on? Caddy exposes this as an api.
darkwater•1h ago
Caddy is just for developers that want to publish/test the thing they write. For power users or infra admins, nginx is still much more valuable. And yes, I use Caddy in my home lab and it's nice and all but it's not really flexible as nginx is.
reactordev•43m ago
Caddy is in use here in production. 14M requests an hour.
j-krieger•33m ago
We use Caddy across hundreds of apps with 10s of millions of requests per day in production.
chaz6•2h ago
One of Traefik's shortcomings with ACME is that you can only use one api key per DNS provider. This is problematic if you want to restrict api keys to a domain, or use domains belonging to two different accounts. I hope Nginx will not have the same constraint.
kijin•2h ago
A practical problem with DNS-01 is that every DNS provider has a different API for creating the required TXT record. Certbot has more than a dozen plugins for different providers, and the list is growing. It shouldn't be nginx's job to keep track of all these third-party APIs.

It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.

sureglymop•37m ago
That is true, and it is annoying. They should really just support RFC 2136 instead of building their own APIs. Lego also supports this, and pretty much all DNS servers have it implemented. At least I can use it with my own DNS server...

https://datatracker.ietf.org/doc/html/rfc2136
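(For illustration, RFC 2136 is what BIND's `nsupdate` speaks; publishing an ACME TXT record looks roughly like this, where the server, zone, key path, and token are all placeholders:)

```shell
# TSIG-signed dynamic update (RFC 2136) adding the challenge record
nsupdate -k /etc/bind/acme-tsig.key <<'EOF'
server ns1.example.com
zone example.com
update add _acme-challenge.example.com. 300 IN TXT "challenge-token"
send
EOF
```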

attentive•2h ago
Yes, ACME-DNS please - https://github.com/joohoi/acme-dns

Lego supports it.

altairprime•1h ago
Does DNS-01 support DNS-over-HTTPS to the registered domain name servers? If so, then it should be extremely simple to extend nginx to support DNS claims; if not, perhaps DNS-01 needs improvements.
uncleJoe•1h ago
no need to wait: https://en.angie.software/angie/docs/configuration/modules/h...

(Angie is the nginx fork led by original nginx developers who left F5)

aoe6721•1h ago
Switch to Angie then. It supports DNS-01 very well.
aorth•3h ago
Oh this is exciting! Caddy's support is very convenient and it does a lot of other stuff right out of the box which is great.

One thing keeping me from switching to Caddy in my places is nginx's rate limiting and geo module.

stego-tech•2h ago
The IT Roller Coaster in two reactions:

> Nginx Introduces Native Support for Acme Protocol

IT: “It’s about fucking time!”

> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

IT: “FUCK. Alright, domain registrar, mint me a new wildcard please, one of the leading web infrastructure providers still can’t do a basic LE DNS-01 pull in 2025.”

Seriously. PKI in IT is a PITA and I want someone to SOLVE IT without requiring AD CAs or Yet Another Hyperspecific Appliance (YAHA). If your load balancer, proxy server, web server, or router appliance can't mint me a basic ACME certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

While we’re at it, can we also allow DNS-01 certs to be issued for intermediate authorities, allowing internally-signed certificates to be valid via said Intermediary? That’d solve like, 99% of my PKI needs in any org, ever, forever.

0xbadcafebee•2h ago
> allowing internally-signed certificates to be valid via said Intermediary

By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised. Since only CAs can issue certs, and CAs have to pass at least some basic security scrutiny, clients have assurance that the thing giving it a cert got said cert from a trustworthy authority. If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

> If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

I mean, that's a valid ask. It will become more commonplace once some popular corporate offering includes it, and then all the competitors will adopt it so they don't leave money on the table. To get the first one to adopt it, be a whale of a customer and yell loudly that you want it, then wait 18 months.

stego-tech•1h ago
> If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

This is where I get rankled.

In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.

Perhaps Intermediate CA is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the Root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”

The thinking goes that this would allow more certificates to be issued internally and easily, but without the extra layer of management involved in a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and would help improve general security hygiene, instead of letting everything use HTTPS with self-signed certificate warnings or letting every device talk to the internet for an HTTP-01 challenge.

If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.

everfrustrated•14m ago
Intermediates aren't a delegation mechanism as such. They're a way to chain back to the root's trust.

The trust is always in the root itself.

It's not an active directory / LDAP / tree type mechanism where you can say I trust things at this node level and below.

andrewmcwatters•2h ago
It seems like if you commit your NGINX config with these updates, you can have one less process in your deployment if you're doing something like:

    # https://certbot.eff.org/instructions?ws=other&os=ubuntufocal
    sudo apt-get -y install certbot
    # sudo certbot certonly --standalone
    
    ...
    
    # https://certbot.eff.org/docs/using.html#where-are-my-certificates
    # sudo chmod -R 0755 /etc/letsencrypt/{live,archive}

So, unfortunately, this support still seems more involved than using certbot, but at least it removes one manual step.

Example from https://github.com/andrewmcwattersandco/bootstrap-express
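
For comparison, a minimal sketch of what the native setup might look like, based on the acme_issuer / acme_certificate directives described in the announcement; the contact address, state path, and domain are placeholders:

```nginx
http {
    resolver 127.0.0.1;

    # Register an ACME account and persist its state between reloads
    acme_issuer letsencrypt {
        uri         https://acme-v02.api.letsencrypt.org/directory;
        contact     admin@example.com;
        state_path  /var/cache/nginx/acme-letsencrypt;
        accept_terms_of_service;
    }

    acme_shared_zone zone=ngx_acme_shared:1M;

    server {
        listen      443 ssl;
        server_name example.com;

        # Obtain and renew a certificate for server_name via the issuer above
        acme_certificate letsencrypt;

        ssl_certificate     $acme_certificate;
        ssl_certificate_key $acme_certificate_key;
    }
}
```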

thaumaturgy•2h ago
Good to see this. For those that weren't aware, there's been a low-effort solution with https://github.com/dehydrated-io/dehydrated, combined with a pretty simple couple of lines in your vhost config:

    location ^~ /.well-known/acme-challenge/ {
        alias <path-to-your-acme-challenge-directory>;
    }
Dehydrated has been around for a while and is a great low-overhead option for http-01 renewal automation.
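
For anyone curious about the other half, dehydrated drives renewals through a user-supplied hook script: it invokes the script with an operation name plus arguments, and the deploy_cert operation fires after each successful issuance. A minimal sketch, with hypothetical paths and the actual nginx reload left as a comment:

```shell
#!/bin/sh
# Hypothetical dehydrated hook (wired up via HOOK=/path/to/hook.sh in
# dehydrated's config). dehydrated calls it as: hook.sh <operation> <args...>

deploy_cert() {
    domain="$1"; keyfile="$2"; certfile="$3"
    fullchainfile="$4"; chainfile="$5"; timestamp="$6"
    echo "deployed ${domain} (key: ${keyfile})"
    # e.g. copy the files where nginx expects them, then: nginx -s reload
}

case "$1" in
    deploy_cert) shift; deploy_cert "$@" ;;
    *) : ;;  # ignore operations this sketch doesn't handle
esac
```
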
andrewmcwatters•2h ago
This is really cool, but I find it distasteful when projects that thousands of people depend on never cut a stable release.

Edit: Downvote me all you want; that's reality, folks. If you don't release v1.0.0, the interface you consume can change without you realizing it.

Don't consume major-version-0 software; it'll bite you one day. Convince your maintainers to cut stable releases if they've been sitting on major version 0 for years. Abusing semantic versioning like that is just lazy, immature practice. Maintainers can learn and grow. It's normal.

Dehydrated has been at major version 0 for 7 years; it's probably past due.

See also React, LÖVE, and others that made 0.n.x jumps to n.x.x. (https://0ver.org)

CalVer: "If both you and someone you don't know use your project seriously, then use a serious version."

SemVer: "If your software is being used in production, it should probably already be 1.0.0."

https://0ver.org/about.html

nothrabannosir•2h ago
Distasteful to whom, the people depending on it? Surely not… the people providing free software at no charge, as is? Surely not…

Maybe not distasteful to anyone in particular, but just distasteful by fate, or as an indicator of misaligned incentives, or something?

ygjb•2h ago
That's the great thing about open source. If you are not satisfied with the free labour's pace of implementing a feature you want, you can do it yourself!

andrewmcwatters•2h ago
Yes, absolutely! I would probably just pick a version to fork, set it to v1.0.0 for your org's production path, and then you'd know the behavior would never change.

You could then merge updates back from upstream.

john01dav•2h ago
It's generally easier to just deal with breaking changes, since writing code is faster than gaining understanding, and breaking changes in the external API are generally much better documented than internals.

dspillett•2h ago
Feel free to provide and support a "stable" branch/fork that meets your standards.

Be the change you want to see!

Edit to comment on the edit:

> Edit: Downvote me all you want

I don't generally downvote, but if I were going to I would not need your permission :)

> that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

I assume you meant "present" there rather than "consume"?

Anyway, 1.0.0 is just a number. Without relevant promises, a track record, and/or a contract to back them up, breaking changes are as likely there as with any other number. A "version 0.x.x" of a well-used and scrutinized open source project is more reliable and trustworthy than something that has just had a 1.0.0 sticker slapped on it.

Edit after more parent edits: or go with one of the many other versioning schemes. Maybe ItIsFunToWindUpEntitledDicksVer, which says "stick with 0.x for eternity, go on, you know you want to!"

thaumaturgy•2h ago
FWIW I have been using and relying on Dehydrated to handle Let's Encrypt automation for something like 10 years, at least. I think there was one production-breaking change in that time, and to the best of my recollection it wasn't a Dehydrated-specific issue but a change to the ACME protocol. I remember the resolution being super easy: just a matter of updating the Dehydrated client and touching a config file.

It has been one of the most reliable parts of my infrastructure and I have to think about it so rarely that I had to go dig the link out of my automation repository.

samgranieri•2h ago
This is a good first step, and one less moving part. They should match Caddy for feature parity here, and add DNS-01 challenges as well.

The lack of it is why I'm not using nginx these days.

ankit84•2h ago
We have been using Caddy for many years now, picked precisely because it has automatic cert provisioning. Caddy really is an easier alternative, secure out of the box.

josegonzalez•2h ago
This is great. Dokku (of which I am the maintainer) has a hokey solution for this with our letsencrypt plugin, but that's caused a slew of random issues for users. Nginx sometimes gets "stuck" reloading and then can't find the endpoint for some reason. The fewer moving knobs, the better.

That said, it's going to take quite some time for this to land in stable repositories for Ubuntu and Debian, and it doesn't (yet?) have DNS challenge support - meaning no wildcards - so I don't think it'll be useful for Dokku in the short term, at least.

ctxc•2h ago
Hey! Great to see you here.

I tried Dokku (and am still trying!) and it is so hard to get started.

For reference:

- I've used Coolify successfully, where it required me to create a GitHub app to deploy my apps on pushes to master
- I've written GH Actions to build and deploy containers to big cloud providers

This page is what I get if I want to achieve the same, and it takes a completely reference-book approach - I feel like I'm reading an encyclopedia. https://dokku.com/docs/deployment/methods/git/#initializing-...

Contrast it with this, which is INSTANTLY useful and helps me deploy apps hot off the page: https://coolify.io/docs/knowledge-base/git/github/integratio...

What I would love to see for Dokku is tutorials for popular OSS apps, and objective-driven, get-it-done-style getting-started articles. I'd LOVE an article that takes me from bare metal to a reverse proxy plus a few popular apps. Because the value isn't in using Dokku; it's in using Dokku to get to that state.

I'm trying to use dokku for my homeserver.

Ideally I want a painless, quick way to go from "hey here's a repo I like" to "deployed on my machine" with Dokku. And then once that works, peek under the hood.

miggy•2h ago
It seems HAProxy also added ACME DNS-01 challenge support very recently, in haproxy-3.3-dev6. https://www.mail-archive.com/haproxy@formilux.org/msg46035.h...

RagnarD•2h ago
After discovering Caddy, I don't use nginx any longer. It's just a much better development experience.

andrewstuart•2h ago
It was this that sent me from nginx to Caddy.

But I'm not going back. Nginx was a real pain to configure, with so many puzzles, surprises, and foot guns.

tialaramex•2h ago
It's good to see this, it surprised me that this didn't happen to basically everything, basically immediately.

I figured either Let's Encrypt somehow wouldn't work out, or everybody would bake in ACME within 2-3 years. The idea that you can buy software in 2025 which has TLS encryption but expects you to go sort out the certificate yourself is baffling. It's like if cars had to be refuelled periodically by taking them to a weird dedicated building that isn't useful for anything else, rather than just charging while you're asleep like a phone, and... yeah, you know what, I get it now. You people are weird.

zaik•1h ago
Is there a way to notify other services when renewal has succeeded? My XMPP server also needs to use the certificate.

smarx007•1h ago
When will this land in mainline distros (no PPAs etc)? Given that a new stable version of Debian was released very recently, I would imagine August 2027 for Debian and maybe April 2026 for Ubuntu?

In this very thread some people complain that certbot uses snap for distribution. Imagine making a feature release and having to wait 1-2 years until your users get it on a broad scale.

Saris•1h ago
I assume they're complaining that it's a snap rather than a flatpak, not snap versus the distro package repos.

giancarlostoro•1h ago
Nginx maintains its own repository from which you can install nginx on your Ubuntu / Debian systems.

I looked at Arch and they're a version behind, which surprised me. It must not be a heavily maintained Arch package.

thway15269037•1h ago
Does nginx still lock Prometheus metrics and active probing behind $$$$$ (literally hundreds of thousands)? I forgot the third most important thing - I think it was re-resolving upstreams.

Anyway, good luck staying competitive, lol. Almost everyone I know has either jumped to something saner or is in the process of migrating away.

aoe6721•1h ago
This was introduced a long time ago in the Angie fork, with much better support.

ugh123•38m ago
How does something like this work for a fleet of edge services, load balancing in distinct areas but all sharing a certificate? Does each nginx instance go through the same protocol/setup steps?

philsnow•34m ago
You'd get rate limited pretty hard by Let's Encrypt, but if you're rolling your own ACME servers you could do it this way.

If you wanted to use LE, though, you could run a more "traditional" cert renewal process somewhere out-of-band, and then provision the resulting keys/certs through whatever coordination mechanism you contrive (and HUP the nginxes).
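
That fan-out can be sketched roughly like this, with hypothetical hostnames, paths, and renewal command; the plan() helper records and echoes each command instead of executing it, so drop it once you've verified the plan:

```shell
#!/bin/sh
# Renew once centrally, push the result to each edge node, reload each nginx.
EDGE_HOSTS="edge1.example.com edge2.example.com"
CERT_DIR=/etc/letsencrypt/live/example.com

PLAN=""
plan() {
    # Record (and echo) a command rather than running it.
    PLAN="${PLAN}$*
"
    echo "+ $*"
}

# 1. Renew in one place; a DNS-01 challenge keeps ports 80/443 out of it.
plan certbot certonly --preferred-challenges dns -d example.com

# 2. Push key + chain to every edge node, then reload its nginx.
for host in $EDGE_HOSTS; do
    plan rsync -a "$CERT_DIR/" "$host:$CERT_DIR/"
    plan ssh "$host" "nginx -t && nginx -s reload"
done
```
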

placatedmayhem•34m ago
They don't need to share a single cert. Multiple certificates can be, and possibly should be, issued for the same address (or set of addresses). That way, one front-door server getting popped doesn't expose all connections to the larger service.

The downside is obviously that certificate maintenance increases, but ACME automates the vast majority of that work away.

burnt-resistor•37m ago
Yeah, I don't want my webserver to turn into systemd, absorbing jobs like changing certificates. This is excessive functionality for something that should be handled elsewhere, with that system driving the coordination of rolling certs.

ilaksh•27m ago
Just to check: this means we can use some extra lines in the nginx configuration as an alternative to installing and running certbot, right?

Also, does it make it easier to use alternatives to Let's Encrypt?

ExoticPearTree•25m ago
It is a start. Maybe this will serve as a proof of concept that it can be done, and then other protocols could be implemented.

Probably like many others here, I would very much like to see Cloudflare DNS support.

idoubtit•10m ago
A little mistake with this release: they packaged ngx_http_acme_module for many Linux distributions but "forgot" Debian stable. Oldstable and oldoldstable are listed at https://nginx.org/en/linux_packages.html (packages built today), but Debian 13 Trixie (released 4 days ago) is not.

triknomeister•4m ago
That's Debian's fault, I guess.