The Brilliance of Linux

I’ve been a Linux user for many, many years, going all the way back to Red Hat 5.2, which I picked up to install on an ancient Packard Bell 486 in the late 90s. Since then there’s always been at least one Linux machine in my dorm, apartment or house somewhere. At various times I’ve even run it as my desktop OS, although these days I use macOS for that.

For much of that time, Linux was the choice of hackers, but it was definitely not a choice for everyday users and required a significant amount of technical knowledge to run. That’s not so true anymore, but growing up in that environment I learned a lot about how computers and operating systems work.

But every now and then, Linux does something that positively delights me.

I have a large fileserver in my closet that runs a Plex digital media library, among other things. This machine has an SSD for its primary drive, and a 5-disk Linux software RAID array. And because I am a parts hoarder, while cleaning out my parts closet one day I realized I had everything needed to build a near duplicate of this machine.

So I assembled the backup machine. On this ancient Athlon motherboard, the SATA ports weren’t labeled, so I just connected the drives in what I thought was the right order: /dev/sda being the OS drive, and /dev/sd[b-f] being the RAID drives. When I installed Linux (Ubuntu 16.04), it detected them in a seemingly random order, with the OS drive as /dev/sdf and the rest of the drives as the array. Guess I got them reversed. Meh, whatever, I’ll just configure it that way.

So I took it to work to allow it to be an offsite backup in case tornadoes destroy my house. I set up rsync to replicate nightly between the two, and everything was good!

… except that ancient motherboard had some issues. Any time it lost power for more than a few minutes, it would throw CMOS checksum errors in the BIOS. I figured this was a dead CMOS battery, so I replaced the battery, but no dice. And because the board is 10 years old, good luck getting support for it.

So over Black Friday, I scored a good deal on a new motherboard. After adding in some memory and a power supply that doesn’t sound like a jet engine (how did I deal with that in 2006?), I had a totally new machine. I intended to just use the existing hard drives with it.

So I took the old motherboard and power supply out, leaving just the drives, and installed the new one. On the newer motherboard, the SATA ports were actually marked, so I installed the SSD on port 0, then the drives in top-to-bottom order in the remaining ports.

I was fully prepared to reinstall, but I decided, what the heck, let’s boot this thing up and see what happens. So I turned it on.

And it booted Linux. With almost no problems. All the way to the login screen. And when I logged in, even the RAID array was there, in a clean state, completely ready. Literally the only thing that didn’t work was the network card, and once I ran dpkg-reconfigure on the linux-image package and fixed up /etc/network/interfaces, even that came up.
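For reference, the interfaces fix amounted to pointing the config at the new NIC’s name, since the old board’s interface no longer existed. A minimal sketch, assuming the new card enumerated as enp3s0 (an assumption; `ip link` shows the real name on your system):

```
# /etc/network/interfaces — classic ifupdown config on Ubuntu 16.04.
# The old board's NIC name is gone; enp3s0 below is an example name,
# substitute whatever `ip link` actually reports.
auto enp3s0
iface enp3s0 inet dhcp
```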

Mind = blown. I literally rebuilt this entire machine, and it came back up like nothing happened.

The disks coming up was the coolest part of all. What was /dev/sdf on the old machine was now /dev/sda, but both the system and the RAID array handled this change. And, after a bit of research, I concluded the reason is that neither relies on device node names anymore; both use UUIDs or another type of persistent ID. A quick glance at /etc/fstab will show something like this:

# / was on /dev/sdf3 during installation
UUID=baa30826-d0dd-49e0-b3eb-21751accc1bf /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sdf1 during installation
UUID=078a7b64-0b84-4747-a07f-7bb74470385d /boot           ext4    defaults        0       2
# /mnt/storage was on /dev/md0 during installation
UUID=a988ff4c-1489-44d2-ac4d-bd24cdc70829 /mnt/storage    ext4    defaults        0       2
# swap was on /dev/sdf2 during installation
UUID=e22b8588-b9fb-422f-a479-d79e3ae50cba none            swap    sw              0       0
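Those UUID= entries aren’t magic; they’re filesystem-level identifiers you can inspect yourself with `blkid`, and /dev/disk/by-uuid/ exposes the same mapping as symlinks to whatever /dev/sdX node each disk happened to get this boot. A self-contained sketch that pulls the mount-point-to-UUID mapping out of an fstab (using a sample copy here; point it at the real /etc/fstab on an actual system):

```shell
# Write a sample fstab so the sketch runs anywhere; on a real box,
# skip this and read /etc/fstab directly.
cat > /tmp/fstab.sample <<'EOF'
# / was on /dev/sdf3 during installation
UUID=baa30826-d0dd-49e0-b3eb-21751accc1bf /               ext4    errors=remount-ro 0 1
# /boot was on /dev/sdf1 during installation
UUID=078a7b64-0b84-4747-a07f-7bb74470385d /boot           ext4    defaults        0 2
EOF

# Print each UUID-pinned mount as "mountpoint uuid"; comment lines
# fail the UUID= match on the first field and are skipped.
awk '$1 ~ /^UUID=/ { sub(/^UUID=/, "", $1); print $2, $1 }' /tmp/fstab.sample
```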

And some research seems to show that Linux mdraid writes metadata (a superblock) onto each member disk and uses that, not node numbers, to assemble the array, so you can reconnect the disks in any order. Brilliant.
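Concretely, each member’s superblock carries the array’s own UUID, and mdadm assembles by matching those superblocks; `mdadm --examine /dev/sdb` will dump the one on a given disk, and `mdadm --detail --scan` prints the assembled array’s identity. That’s also why /etc/mdadm/mdadm.conf typically pins the array by UUID rather than device names. A typical line looks something like this (the UUID below is purely illustrative; yours comes from `mdadm --detail --scan`):

```
# /etc/mdadm/mdadm.conf — the array is matched against the UUID stored
# in each member's superblock, so the disks can enumerate in any order.
# (UUID below is illustrative, not from a real array.)
ARRAY /dev/md0 metadata=1.2 UUID=12345678:9abcdef0:12345678:9abcdef0
```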


I am currently in the process of migrating a bunch of sites on this machine from Apache to nginx. Rather than take everything down and migrate it all at once, I wanted to do this incrementally. But that raises a question: how do you incrementally migrate site configs from one to the other on the same machine, since both servers will need to be running and listening on ports 80 and 443? The solution I came up with was to move Apache to different ports (8080 and 4443) and to set the default nginx config to be a reverse proxy!