Making Native WebDAV Actually Work on nginx with Finder and Explorer

So my long march away from Apache has been coming to an end, and I am finally migrating some of the more esoteric parts of my Apache setup to nginx. I have a side domain that I use to share files with some friends and, for ease of use, I have configured it with WebDAV so that they can simply mount it using Finder or Explorer, just like a shared drive.

The problem? nginx’s WebDAV support … sucks.

First, ngx_http_dav_module is not included in most distributions' packages, and even where it is, it's usually pretty out of date. And, perhaps worst of all, it is only a partial implementation of WebDAV: it doesn't support some of the methods (PROPFIND, OPTIONS, LOCK, and UNLOCK) that are needed to work with modern clients.

So what can we do?

We Need More Modules

First, we need to add a second WebDAV module: nginx-dav-ext-module, which adds support for the missing methods, including locking. But before you go and just download this module from your repo, be warned: the one in the Ubuntu repositories is way out of date and doesn't include locking. Without locking, WebDAV will not work in Finder. You can browse and download, but not upload, which rather defeats the purpose of doing this.

You will also need a module called headers-more-nginx-module, for reasons I'll get into later.

We’re also going to install ngx-fancyindex because the usual nginx index pages are really boring.

But unfortunately, most of these modules are not available from the standard repositories. That means we'll need to build from source.

Building From Source

First, be sure that you have uninstalled existing nginx installations that may have come from a package archive.

Next, fetch the sources for nginx and the various modules we’ll need. Like a good sysadmin, I use /usr/src for this.

$ cd /usr/src
$ wget https://nginx.org/download/nginx-1.19.0.tar.gz
$ wget https://github.com/arut/nginx-dav-ext-module/archive/v3.0.0.tar.gz
$ mv v3.0.0.tar.gz nginx-dav-ext-module-v3.0.0.tar.gz
$ wget https://github.com/aperezdc/ngx-fancyindex/archive/v0.4.4.tar.gz
$ mv v0.4.4.tar.gz ngx-fancyindex-v0.4.4.tar.gz
$ wget https://github.com/openresty/headers-more-nginx-module/archive/v0.33.tar.gz
$ mv v0.33.tar.gz headers-more-nginx-module-v0.33.tar.gz
$ for f in *.tar.gz; do tar -xzf "$f"; done

We’ll also need to install some prerequisites:

$ apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support automake libssl-dev libpcre3-dev zlib1g-dev libxslt1-dev

Now, we’re going to build all this in one fell swoop.

$ cd /usr/src/nginx-1.19.0/
$ ./configure --prefix=/etc/nginx \
            --sbin-path=/usr/sbin/nginx \
            --modules-path=/usr/lib/nginx/modules \
            --conf-path=/etc/nginx/nginx.conf \
            --error-log-path=/var/log/nginx/error.log \
            --pid-path=/var/run/nginx.pid \
            --lock-path=/var/run/nginx.lock \
            --user=nginx \
            --group=nginx \
            --build=Ubuntu \
            --builddir=nginx-1.19.0 \
            --with-select_module \
            --with-poll_module \
            --with-threads \
            --with-file-aio \
            --with-http_ssl_module \
            --with-http_v2_module \
            --with-http_realip_module \
            --with-http_addition_module \
            --with-http_xslt_module=dynamic \
            --with-http_image_filter_module=dynamic \
            --with-http_geoip_module=dynamic \
            --with-http_sub_module \
            --with-http_dav_module \
            --with-http_flv_module \
            --with-http_mp4_module \
            --with-http_gunzip_module \
            --with-http_gzip_static_module \
            --with-http_auth_request_module \
            --with-http_random_index_module \
            --with-http_secure_link_module \
            --with-http_degradation_module \
            --with-http_slice_module \
            --with-http_stub_status_module \
            --with-http_perl_module=dynamic \
            --with-perl_modules_path=/usr/share/perl/5.26.1 \
            --with-perl=/usr/bin/perl \
            --http-log-path=/var/log/nginx/access.log \
            --http-client-body-temp-path=/var/cache/nginx/client_temp \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --with-mail=dynamic \
            --with-mail_ssl_module \
            --with-stream=dynamic \
            --with-stream_ssl_module \
            --with-stream_realip_module \
            --with-stream_geoip_module=dynamic \
            --with-stream_ssl_preread_module \
            --with-compat \
            --with-pcre \
            --with-pcre-jit \
            --with-openssl-opt=no-nextprotoneg \
            --with-debug \
            --add-module=../nginx-dav-ext-module-3.0.0 \
            --add-module=../ngx-fancyindex-0.4.4 \
            --add-module=../headers-more-nginx-module-0.33

This builds nginx with all of our common stuff plus the modules we just downloaded (note the --add-module flags at the end). It also uses the system libraries for PCRE, OpenSSL, and zlib, so there's no need to go and download those separately.

So now, time to compile and install:

$ make -j4
$ sudo make install

The compile is quite fast, taking less than a minute on a DigitalOcean VM. You should now have a working nginx installation. To test it with your existing configs, you can run:

$ nginx -t

You may have to recompile with additional flags if you rely on other unusual modules, but this set did it for me.
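One more thing a package would normally provide: a systemd unit. Here is a minimal sketch, assuming the paths from the configure step above (adapt as needed):

```ini
# /etc/systemd/system/nginx.service — minimal hypothetical unit
[Unit]
Description=nginx web server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/usr/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
```

Note also that the configure flags above reference an nginx user and /var/cache/nginx temp paths; if those don't exist on your system, you'll need to create them before nginx will start.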

Configuring WebDAV

So now it's time to configure WebDAV. To start, here is a basic example that "kinda" works, and we'll build from there.

http {
    dav_ext_lock_zone zone=foo:10m;

    server {
        location / {
            root /data/www;

            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods PROPFIND OPTIONS LOCK UNLOCK;
            dav_ext_lock zone=foo;
            create_full_put_path on;
        }
    }
}

Now, try to mount this in Finder. It probably works! You may even be able to upload some things. But try to create a folder and it fails. You'll probably get an error along the lines of "The operation could not be completed (error code -43)."

So what's going on? Well, to understand that, you'd need to dig a bit into the way WebDAV works. But to summarise: the problem is that Finder (and probably other clients) are non-compliant and don't send a trailing slash when dealing with folders.

When you create a folder over WebDAV, the client sends a MKCOL request with the path. But the path doesn't have a trailing slash on it, so nginx's WebDAV module throws an error. Fortunately, this is easy to fix with some nginx config magic:

if ($request_method = MKCOL) {
    rewrite ^(.*[^/])$ $1/ break;
}

if (-d $request_filename) {
    rewrite ^(.*[^/])$ $1/ break;
}
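The rewrite pattern is simply "append a slash when the path doesn't already end in one." As a stand-alone sanity check (not part of the nginx config), the same regex can be exercised with sed:

```shell
# Mimics nginx's rewrite ^(.*[^/])$ $1/ — appends a slash only when missing
add_trailing_slash() {
  printf '%s\n' "$1" | sed -E 's|^(.*[^/])$|\1/|'
}

add_trailing_slash '/photos/2020'    # prints /photos/2020/
add_trailing_slash '/photos/2020/'   # no match, unchanged: /photos/2020/
```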

What we're doing here is checking whether a MKCOL request is missing its trailing slash; if so, we add one to the end of the request so the folder is created properly. We also check whether the request targets an existing folder and add the slash there too.

So now you can create folders fine and everything works! Until it doesn't. Try to rename a folder and watch what happens. When a WebDAV client moves a file, it sends a MOVE request with a Destination header containing the full destination URL. The problem is that when a non-compliant client sends a MOVE (or COPY) request for a folder, it (sigh) again doesn't send a trailing slash. So now we have to get really creative and rewrite the Destination header before it's handed off to nginx's WebDAV extension.

This is where headers-more-nginx-module comes in. And this is what I eventually came up with through much trial and error.

set $destination $http_destination;
set $parse "";
if ($request_method = MOVE) {
    set $parse "${parse}M";
}

if ($request_method = COPY) {
    set $parse "${parse}M";
}

if (-d $request_filename) {
    rewrite ^(.*[^/])$ $1/ break;
    set $parse "${parse}D";
}

if ($destination ~ ^https://dav.example.com/(.*)$) {
    set $ob $1;
    set $parse "${parse}R${ob}";
}

if ($parse ~ ^MDR(.*[^/])$) {
    set $mvpath $1;
    set $destination "https://dav.example.com/${mvpath}/";
    more_set_input_headers "Destination: $destination";
}
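To make that flag-building logic easier to follow, here is the same decision expressed as a plain shell function (a stand-alone illustration, with dav.example.com as the placeholder host from the config above):

```shell
# Mirrors the nginx logic: for a MOVE/COPY of a directory on our host,
# ensure the Destination header ends with a trailing slash.
fix_destination() {
  method=$1 dest=$2 is_dir=$3   # is_dir: "yes" if the source is a directory
  parse=""
  case $method in MOVE|COPY) parse="${parse}M" ;; esac
  [ "$is_dir" = yes ] && parse="${parse}D"
  case $dest in https://dav.example.com/*) parse="${parse}R" ;; esac
  # Rewrite only when all three conditions matched and the slash is missing
  if [ "$parse" = MDR ]; then
    case $dest in */) ;; *) dest="${dest}/" ;; esac
  fi
  printf '%s\n' "$dest"
}

fix_destination MOVE 'https://dav.example.com/New Folder' yes
# prints https://dav.example.com/New Folder/
fix_destination PUT 'https://dav.example.com/file.txt' no
# not a MOVE/COPY of a directory, so it prints the URL unchanged
```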

What we're doing here is complicated, and it's made difficult by the limitations of nginx: you can't have multi-condition conditionals, and you can't nest conditionals. So instead we build up a flag variable, one piece per condition, then do a final match against the result. If it matches, we use more_set_input_headers to rewrite the Destination header with the trailing slash.

With this in place, everything works! Finder and Explorer both behave. You can upload, delete, and do everything you would expect a WebDAV client to do.

Finally, if you are going to be handling large uploads, you may want to tweak some additional settings:

send_timeout 3600;
client_body_timeout 3600;
keepalive_timeout 3600;
lingering_timeout 3600;
client_max_body_size 10G;

You can tweak these values to whatever works for you.

Bonus: Stopping Finder’s Garbage

Finder likes to pollute shared drives with a lot of extra files. We really don’t want those being stored in our WebDAV instance. So we’ll use nginx to block them out:

location ~ \.(_.*|DS_Store|Spotlight-V100|TemporaryItems|Trashes|hidden|localized)$ {
    access_log  off;
    error_log   off;

    if ($request_method = PUT) {
        return 403;
    }
    return 404;
}

location ~ \.metadata_never_index$ {
    return 200 "Don't index this drive, Finder!";
}

Conclusions

nginx is a pretty good server. I much prefer its configuration format, among other things, and it feels very flexible. But this particular corner could probably use some work. For one, it would be nice to see nginx-dav-ext-module merged into ngx_http_dav_module so that you don't need both. Also, non-compliant clients such as macOS's Finder and Windows Explorer should be supported natively without having to resort to hacks. Yes, those clients aren't adhering to the RFC, but they are also the two most widely used WebDAV clients out there, so they de facto set the standard.

This is a proof of concept that you can do this in nginx without resorting to scripting. But, barring the above being fixed, I would probably implement this in Lua instead. Lua is natively supported in nginx, and all these hacks could be much more succinctly accomplished in a very small Lua script, which would also make it much clearer what is happening.
