Zero Downtime nginx Letsencrypt Certificate Renewals Without the nginx Plugin

Quick post on how to install Letsencrypt certificates into nginx without using the official plugin. There may be cases where you don’t want to use the official plugin (which until recently was still marked as “experimental”). The concepts here could theoretically be applied to any webserver software.

Basics of an ACME Challenge

Letsencrypt is based on a technology called ACME, which stands for Automated Certificate Management Environment. It’s a way for a certificate issuer to verify your ownership of a domain and issue you a certificate without requiring any manual intervention. And while there are a number of ways for this to happen, by far the most common is via a webserver.

The ACME client places a file in the /.well-known/acme-challenge/ directory. This will usually look something like /.well-known/acme-challenge/LYORRg3BLMyxa8_WYUa27QHofvO2M2GfvoPkLV5H-7I. The certificate issuer attempts to download this file. If the download succeeds, new certificates are generated, downloaded by the client, and installed in the correct location.
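The exchange above can be simulated locally. Here is a minimal sketch in POSIX shell; the token and the .accountkey-thumbprint suffix are illustrative stand-ins for values the real ACME client computes:

```shell
# Sketch of the webroot side of an HTTP-01 challenge. All values are made up.
webroot="$(mktemp -d)"
token="LYORRg3BLMyxa8_WYUa27QHofvO2M2GfvoPkLV5H-7I"

# 1. The ACME client writes the challenge file under the webroot.
mkdir -p "$webroot/.well-known/acme-challenge"
printf '%s' "${token}.accountkey-thumbprint" > "$webroot/.well-known/acme-challenge/$token"

# 2. The issuer fetches http://example.com/.well-known/acme-challenge/<token>;
#    here we just read the file the way the webserver would serve it.
response="$(cat "$webroot/.well-known/acme-challenge/$token")"
echo "$response"

rm -rf "$webroot"
```

If the issuer gets back the expected contents, ownership of the domain is considered proven and the certificate is issued.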

Obviously, this is a high-level overview, but the important thing to take away is that this process is designed to be scripted. It is designed with zero interaction in mind.

The Restart Problem

Until the nginx plugin was stabilized, the common way to do this was to stop nginx, spin up a standalone server for the ACME challenge, then restart nginx. Obviously, this is not desirable in a production environment. Even the brief, sub-10-second outage the ACME exchange takes is too long.

Fortunately, the letsencrypt client provides another option: --webroot.

Installing Certs without Restarts

Using a combination of a small nginx config change and the proper command, we can complete the challenge and then tell nginx to simply reload its config, resulting in a zero downtime cert install.

First, you may need to make an nginx change. This is necessary if, for example, you are running nginx as a reverse proxy and don’t have easy access to the remote end, or if you simply want to serve the challenge requests from another location.

  location ^~ /.well-known {
    allow all;
    root /var/www/well-known/;
  }
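One subtlety worth calling out: with a root directive, nginx appends the full request URI to the root path when resolving a file. The sketch below (token and paths illustrative) shows why the directory you hand to the client must match the root above:

```shell
# How nginx's `root` directive maps a challenge request to the filesystem:
# the full URI is appended to the root path.
root="/var/www/well-known"
uri="/.well-known/acme-challenge/sometoken"
resolved="${root}${uri}"
echo "$resolved"
```

So the challenge file must end up under /var/www/well-known/.well-known/acme-challenge/, which is exactly where the webroot plugin puts it when given that directory.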

Next, you just need to issue the right command to letsencrypt.

letsencrypt certonly --quiet -n --agree-tos --webroot -w /var/www/well-known/ --deploy-hook "systemctl reload nginx" -d example.com

Let’s take apart what we did here.

  • certonly obtains a certificate, but does not install it. This is a bit of a misnomer: it downloads the cert, but does not install it into your config. You will have to handle that yourself.
  • -n Non-interactive. We want to script this! :)
  • --quiet again, we want to run this from cron, so we don’t want it making noise. You should probably remove this flag while you are testing.
  • --agree-tos you agree to the terms of service.
  • --webroot this is the magic sauce. You’re telling letsencrypt to use webroot authentication, placing challenge files in a location on your server.
  • -w /var/www/well-known/ tells webroot where to place the files. This must match the location in your config change above.
  • --deploy-hook "systemctl reload nginx" runs this command after a successful certificate download. Note that it only runs when a new certificate is actually downloaded, making this safe to run daily!
  • -d example.com is the domain you want the certificate for.

The systemctl reload nginx part is important. Not only do we run the command only when we have new certs, we also tell nginx to reload, not restart. The server will continue serving existing requests while new requests are served with the new config, making this a zero downtime operation.

Finally, if you have not already done so, you will need to add the appropriate SSL configuration lines to your config:

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
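For context, those two lines live inside the server block for your site. A minimal sketch, assuming a standard HTTPS server block; everything besides the two ssl_ lines is illustrative:

```nginx
server {
  listen 443 ssl;
  server_name example.com;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  # ... the rest of your existing site config ...
}
```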

Congratulations, you now have certs! And you can shove that command into a daily cronjob to be sure that you never have to deal with renewing an SSL certificate again.
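As one possible sketch, a system crontab entry for this might look like the following; the schedule, file path, and domain are assumptions to adjust for your setup:

```shell
# /etc/cron.d/letsencrypt-renew (sketch) -- runs daily at 03:17.
# --deploy-hook only fires when a certificate was actually renewed,
# so nginx is not reloaded on the days nothing changes.
17 3 * * * root letsencrypt certonly --quiet -n --agree-tos --webroot -w /var/www/well-known/ --deploy-hook "systemctl reload nginx" -d example.com
```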

Did something I wrote help you out?

That's great! I don't earn any money from this site - I run no ads, sell no products and participate in no affiliate programs. I do this solely because it's fun; I enjoy writing and sharing what I learn.

All the same, if you found this article helpful and want to show your appreciation, here's my Amazon.com wishlist.

Read More

Internal Auto-Renewing LetsEncrypt Certificates

I have a well-documented obsession with pretty URLs, and this extends even to my internal home network. I have way too much stuff bouncing around in my head to have to remember IP addresses when a domain name is much easier to remember. LetsEncrypt launched to offer free SSL certificates to anyone, but the most crucial feature of their infrastructure, one that arguably should have existed long before, was scriptable, automatically renewing certificates. Basically, they validate that you do in fact own the domain using automated methods, then issue you the new certificate. Thus, your certificates can be renewed on a schedule with no interaction from you. Traditionally, they have done this by placing a file in the webroot and looking for that file before issuing the certificate (see my earlier blog post about Zero Downtime nginx Letsencrypt Certificate Renewals Without the nginx Plugin for more detail about this). But what happens when you want to issue an internal certificate? One for a service that is not accessible to the outside world, and thus not visible using the webroot method? Well, it turns out there is a solution for that too!

Incrementally Migrating from Apache to nginx

I am currently in the process of migrating a bunch of sites on this machine from Apache to nginx. Rather than take everything down and migrate it all at once, I wanted to do this incrementally. But that raises a question: how do you incrementally migrate site configs from one to the other on the same machine, since both servers will need to be running and listening on ports 80 and 443? The solution I came up with was to move Apache to different ports (8080 and 4443) and to set the default nginx config to be a reverse proxy!