Archive

2022

Ramblings

On Changes

It’s amazing how quickly time can fly when you are having fun. Almost fifteen years ago I started working at DealNews as a Junior Developer. I was in my mid 20s, less than two years out of Auburn. I even remember it was mid November because I left my previous job on a Wednesday, went to the Auburn-Georgia Game, then started at DealNews the following Monday. It was just before Black Friday even. I still even remember what that first day was like: I didn’t have SVN access yet and I had to email my code to my boss! To give you an idea of how long ago this was: when I was hired on at DealNews, I announced it to my friends on my MySpace page and my LiveJournal blog. Neither of which exist anymore. Fifteen years is a long time in tech, where changing jobs rapidly is the norm and staying in a position for three years can be seen as a serious commitment to a company. But the only constant in the universe is change. Which is why it is definitely very bittersweet for me to announce that I will be leaving DealNews on September 16, 2022.
Read More
Release Announcements

petfeedd 1.0.1 Released

petfeedd users, I am proud to announce the release of petfeedd 1.0.1. This release has no major changes in it and is solely about addressing security issues in many of the underlying libraries used by petfeedd. To install it or upgrade from previous versions, you can simply run: docker pull peckrob/petfeedd:latest
Read More
Ramblings

Stop Asking Me About Guest Articles

I am getting this request more and more often - to the tune of multiple emails a week at this point. It usually starts friendly enough - friendly enough that I know the sender isn’t a robot; they’ve very clearly looked at some of my pages. But then the pitch starts: “I’d like to contribute to your website an article on X” or “I’d be delighted to contribute to your website on this topic.” Usually promising to do so for free.
Read More
Asterisk

Creating a Home Intercom System Using Asterisk and Cheap Used Phones

Recently, when my company was moving offices, I had the opportunity to snag a dozen or so used Polycom telephones. I had an idea that I wanted to try, and it turned out that it worked pretty well. And that idea was this: what if I could use them to create an intercom system in the house?
Read More
What I Use

What I Use: 2022

Since it’s been a good six years since I did one of these, here’s what I am using in the year 2022 as far as tech and tech-adjacent things go.
Read More
Release Announcements

petfeedd Version 1.0 Now Available

After five beta releases and months of testing, I am happy to announce petfeedd Version 1.0 is now available. All changes from the beta branch have been merged in and the release is now available on Docker Hub. To install it or upgrade from Version 0.2, you can simply run: docker pull peckrob/petfeedd:latest And restart. It should perform all the upgrades needed for version 1.0.
Read More
Release Announcements

Dystill Version 0.3 Now Available

Twelve years ago I wrote a little program called Dystill. It is a filtering mail delivery agent that can sort and filter email based on rules stored in a MySQL database. At the time I wrote it, I was transitioning away from using Gmail to running my own mail server, and I needed a way to filter my incoming mail into folders (akin to Gmail’s labels and automatic filtering) with the ability to quickly add rules without having to manually edit files. And for twelve years, that little program has just run reliably in the background with very few updates. The last time I changed it was 2012. In the meantime, the world has moved on and Python 2 (which it was written in) is no longer supported. And truthfully it was the last piece of Python 2 code in my whole setup. But I had been punting on updating it because it worked.
Read More
Ramblings

Some Thoughts on Ukraine

This is just sort of a stream of consciousness, so apologies if it doesn’t make a lot of sense. I still remember the first time I realized I was directly talking with someone in another country. It was the mid 90s and I was a teenager, hooked on playing MUDs. When most people in my high school could barely turn a computer on, I felt like a wizard who knew about an entire secret world, and it was awesome. I was playing, every day, with people from Scotland, Denmark, Italy, Australia, New Zealand, and so many others I can’t even remember now. And we talked. I learned so much about other cultures just by talking directly to people. And I remember thinking, in my own young, idealistic naivete, that if just everyone could be online, and could have these experiences, we might actually achieve world peace in my lifetime. We could see that we are all human brothers and sisters, separated only by artificially drawn borders. I believed free information would result in the most educated population in human history. And the Internet would bring the whole world a new age. I look back on myself then and mourn the world that we could have had. Humans apparently just aren’t ready for world peace and togetherness.
Read More
Release Announcements

petfeedd Version 1.0 Beta Now Available

petfeedd users, I am proud to announce the beta release of petfeedd 1.0. It’s been almost three years since the last release of petfeedd (version 0.2.2), and Version 1.0 marks a new start for this project. I have been running the beta release on my feeders for the last week and I believe I have smashed all the major bugs.
Read More

2021

Linux

Creating a Multiboot USB Stick under macOS

Here’s a quick article about how to make a multiboot USB stick under macOS. These are useful in a lot of situations - such as for doing system installs or system rescues - because you can boot a wide variety of live OSs from a single stick. There are a lot of guides out there for doing this on Linux, and a lot of software for automating it on Windows, but not a lot of guides for doing it on macOS. Fortunately, it is pretty straightforward as the instructions will be broadly similar to doing it on Linux.
Read More
Home Assistant

Hacking a Z-Wave Door Sensor Into a Mailbox Sensor for Home Assistant

My mailbox - yes, my physical mailbox where I receive actual mail - is one of the things that has stubbornly resisted my attempts to automate it. I’ve tried a few different solutions. Third party proprietary chimes. A Z-Wave tilt sensor on the door. But nothing has worked long-term.
Read More
Ramblings

It's Not The Schools

Some things are as reliable as clockwork. The moon and tides. Death and taxes. Politicians lying. And out-of-touch Silicon Valley tech millionaires and billionaires descending from their gold-plated PCB thrones to bestow upon us, the unwashed masses, their most brilliant wisdom and thoughts. Today’s myopic missive is brought to you by Sam Altman, of Y Combinator fame. On Sunday, he opened up Twitter and blessed us with this thought in the middle of an otherwise interesting thread:
Read More
Ramblings

You Need To Pay Better

So the good news is that things are starting to get better. The pandemic is starting to abate now that vaccines are widely available in the United States. Hopefully they will continue to be effective against the new strains that are emerging, and all evidence suggests that they are. Hopefully things will continue to improve around the world as well. Equally good news: with the pandemic abating, we can start to return to a more normal state. But many of us are emerging into a new world, one where it is basically impossible to buy a house because demand for houses is outpacing supply and where the costs of many things are going up due to scarcity. One of the interesting things I have noticed is that some businesses, and this seems to be predominantly fast food and restaurants, are having a hard time hiring people. Some have even shut down because they can’t find employees. What is happening here?
Read More
pfSense

Using Realtek NICs in pfSense

In the year 2021 there are a lot of things that you just take for granted. Remember when you used to have to use jumpers to set things on your computer? Or worrying about IRQ conflicts? Or whether you could get the drivers you needed to work? These are all parts of the “bad old days” of computers that I don’t miss very much. These days if I plug things into my computer - any of them - I expect them to “just work.” And very often, surprisingly, this is the case. Especially common, well supported things like network cards. So it is notable when I encounter something where that isn’t the case. But first, let’s back up a little bit.
Read More
AppleScript

Templated Mail Replies in macOS Mail.app

So one of the downsides to corporate life can be dealing with the deluge of email. While Slack is the new hotness for communicating inside companies, when dealing with outside people or organizations email is still the lingua franca of communication. But the downside to that is that you sometimes have to deal with repetitive emails. One in particular I have noticed over the last few years being more and more common is people reaching out to me wanting to get content on DealNews, or in some other way work with our marketing or business development teams. It is starting to get so common that I get it several times a month, and the reply is always the same: I don’t have editorial control over what content appears on the website, please reach out to these web addresses. But typing this out every time is annoying. There should be a way to automate this. After all, anything worth doing twice is worth automating.
Read More

2020

petfeedd

petfeedd With Multiple Servos

I’ve had several people write me recently and ask about how to use petfeedd with multiple servos. It’s actually now my most common feature request, so I will definitely be sure that it is added in the rewrite I am currently working on. In the meantime, you can run multiple instances of petfeedd using Docker, each pointed at a different servo. I would offset each feeder’s schedule by a few seconds to be sure you don’t have any voltage drop issues with the Raspberry Pi.
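If you go the multi-instance route, here’s a rough sketch of the shape of it; the container names, ports, and config mounts are illustrative, so check the petfeedd README for the real configuration options:

```bash
# Two independent feeders, one container per servo. --privileged
# gives the containers access to the Pi's GPIO hardware.
docker run -d --name feeder1 --privileged \
  -v /opt/feeder1:/config -p 8080:8080 peckrob/petfeedd:latest
docker run -d --name feeder2 --privileged \
  -v /opt/feeder2:/config -p 8081:8080 peckrob/petfeedd:latest
```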
Read More
Hardware

Remotely Controlling a DeLonghi Oil Radiator using Home Assistant, ESPHome and ESP32

So here we are in October. COVID-19 is still with us and I am still working from home. Meanwhile, summer has quickly changed to autumn. The leaves are falling as are the temperatures. My house was the model home for our neighborhood, and what would have been the garage was finished in and used as a sales office. So when we bought the house, I was like, perfect, a perfect spot for an office! But the problem is that, because it was a garage, it’s not connected to the house’s HVAC system. In the summer there is a mini-split that keeps the whole area cool. But it’s kind of loud. However, I do have some of these DeLonghi Oil Radiators to use in the winter which provide abundant, silent heat without using very much power. But the downside is that they take awhile to warm up. Wouldn’t it be cool if I could have them turn on an hour early and “pre-warm” the office? Well, to get the obvious part out of the way, yes, there is timer functionality, but that is not nearly as cool as tying it into the rest of my smart home. But it has a remote. What if I could find a way to use Home Assistant to send IR commands to the heater? Turns out you can!
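To give away a little of the ending, the ESPHome side can be sketched like this; the GPIO pin and the NEC address/command are placeholders you’d replace with codes captured from the actual DeLonghi remote (for example, with a remote_receiver):

```yaml
remote_transmitter:
  pin: GPIO4
  carrier_duty_percent: 50%   # standard for IR LEDs

switch:
  - platform: template
    name: "Radiator Power"
    turn_on_action:
      - remote_transmitter.transmit_nec:
          address: 0x40BF   # placeholder, capture from the remote
          command: 0x12ED   # placeholder
```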
Read More
nginx

Proxying CUPS IPP using nginx

So I have this older Dell laser printer, a B1160w. It was released back in 2012, but it is a totally fine home printer for when I occasionally need to print something and it still works great after all these years, so I see no compelling reason to buy a new one. But there’s a problem: macOS support. Namely, no drivers have been released for macOS since 2017. Starting with Catalina, Apple started requiring code signing for executables, and the official Dell driver has an executable in it that refuses to execute because it isn’t signed. And despite my best efforts, short of turning off Gatekeeper entirely, I was not able to get it to work. But the printer itself is fine; there is absolutely no reason to create additional electronic waste purely for software reasons. But thanks to open-source software, we have another option: CUPS.
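Since IPP is just HTTP underneath, the proxying half of this turns out to be surprisingly small. A minimal sketch, with 192.168.1.50 standing in for whatever host actually runs CUPS:

```nginx
server {
    listen 631;

    location / {
        # Forward IPP (HTTP) traffic to the CUPS instance.
        proxy_pass http://192.168.1.50:631;
        proxy_set_header Host $host;
    }
}
```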
Read More
Parenthood

Creating a Safe Kids Network with pfSense, Unifi and NextDNS

Well, here we are five months later and COVID-19 is still a thing. And like many parents we are facing the need to continue our daughter’s education at home. Our local school district has stated that all learning will be conducted online for at least the first nine weeks. And even if they allow for students to return, we will probably opt to keep her at home for awhile longer until things are more stable. Now, our daughter is seven and will be turning eight in a couple months. So she’s at that age where she’s old enough to do some things independently. But, as most of us know, the Internet is not a safe place for a seven year old and we as parents need to exercise some level of control over the things they can access. And while the best solution is a set of eyes, we obviously can’t be everywhere at all times. So this is the solution I came up with.
Read More
nginx

Making Native WebDAV Actually Work on nginx with Finder and Explorer

So my long march away from Apache has been coming to an end, and I am finally migrating some of the more esoteric parts of my Apache setup to nginx. I have a side domain that I use to share files with some friends and, for ease of use, I have configured it with WebDAV so that they can simply mount it using Finder or Explorer, just like a shared drive. The problem? nginx’s WebDAV support … sucks. First, the ngx_http_dav_module module is not included in most distributions from the package managers. Even in the distributions that do include it, it’s usually pretty out of date. And, perhaps worst of all, it is a partial implementation of WebDAV. It doesn’t support some of the methods (PROPFIND, OPTIONS, LOCK, and UNLOCK) that are needed to work with modern clients. So what can we do?
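One common fix is to build nginx with both the core dav module and the third-party nginx-dav-ext-module (version 3 or later adds LOCK/UNLOCK support). A sketch of what the resulting config can look like, with the root path and lock zone name illustrative:

```nginx
# In the http block:
dav_ext_lock_zone zone=davlock:10m;

# In the server block:
location / {
    root /srv/dav;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS LOCK UNLOCK;
    dav_ext_lock zone=davlock;
    dav_access user:rw group:rw;
}
```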
Read More
Home Assistant

Securing Home Assistant Alexa Integration

One of the big missing pieces from my conversion to Home Assistant was Amazon Alexa integration. It wasn’t something we used a lot, but it was a nice-to-have. Especially for walking out of a room and saying “Alexa, turn off the living room lights.” I had been putting it off a bit because the setup instructions are rather complex. But this weekend I found myself with a couple free hours and decided to work through it. It actually wasn’t as difficult as I expected it to be, but it is definitely not the type of thing a beginner or someone who does not have some programming and sysadmin background could accomplish. But in working through it, there was one thing that was an immediate red flag for me: the need to expose your Home Assistant installation to the Internet. It makes sense that you would need to do this - the Amazon mothership needs to send data to you to take an action after all. But exposing my entire home automation system to the Internet seems like a really, really bad idea. So in doing this, rather than expose port 443 on my router to the Internet and open my entire home to a Shodan attack, I decided to try something a bit different.
Read More
Ramblings

We Want To Build!

Yesterday, Marc Andreessen, one of the more influential Silicon Valley investors, dropped an essay on the Andreessen Horowitz blog called It’s Time To Build. I read it with a sense of bemusement because, like most things that come out of wealthy elites, and especially wealthy coastal elites (and especially wealthy Silicon Valley elites), it is filled with the myopia that can only come from spending far too much time in a bubble disconnected from what’s going on in the rest of the world. In short, the main thesis of his essay is that we’ve stopped building “things,” which, in this context, is housing and medical devices but can more broadly be interpreted as a loss of civilizational inertia, because we stopped “wanting them.”
Read More
VueJS

Using Vue Single-File Components Inside Shadow DOM

Let’s say you’re building a tiny little Vue app. Not a full-on single page app, but something very tiny that will need to be embedded into other pages. Like a fully interactive widget that can do a wide variety of things, but will need to be self-contained so as not to interfere with the rest of the page. Traditionally, in the past, we did this with a wide variety of approaches. Going back to the 90s, we used Java applets (remember those?) and ActiveX controls (ugh). We used Flash too (double ugh). Lately the preferred approach has been iframes, and while this is still a perfectly valid approach, it has its own set of problems. But now, we also have Shadow DOM, which provides us another approach to building richly interactive widgets that are (mostly) contained from interfering with the styling of the surrounding page and, crucially, doesn’t allow the surrounding page to interfere with the widget! And, yes, Vue can totally be used inside a shadow tree. It just takes a bit of setup work.
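Here’s a sketch of the mounting side of that setup work (Vue 2-era, with hypothetical App and host names). The other half is getting component styles into the shadow root, which is what vue-loader’s shadowMode option handles in a vue-cli build:

```js
import Vue from 'vue';
import App from './App.vue';

// Create a shadow root on the host element and mount the app inside
// it, so outside CSS can't reach the widget (and vice versa).
const host = document.getElementById('widget-host');
const shadow = host.attachShadow({ mode: 'open' });
const mountPoint = document.createElement('div');
shadow.appendChild(mountPoint);

new Vue({ render: h => h(App) }).$mount(mountPoint);
```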
Read More
Coronavirus

Mourning

There was a great article that was recently posted by the Harvard Business Review that I think bears some very important consideration by everyone. Stress is easy to identify, and we are all certainly stressed. The predictability of our daily lives has been interrupted. Many of us have lost jobs, faced furloughs or pay cuts. Our kids are home from school. We’re worried about our families catching this disease, and ourselves as well. We’re all stuck together in this purgatory of waiting for this crisis to play itself out with no idea of what kind of world waits for us on the other side. We know that this will end - all pandemics eventually do - but we’re going to emerge from our shelters into a changed world. My wife and I have spent the last couple of weekends cleaning out closets. It kind of feels like rearranging the deck chairs on the Titanic at times, but it also keeps my mind occupied for the most part and keeps it from going into pretty dark places. And hey, my closet is now the cleanest it’s been since we moved. But every so often my mind ends up going there anyways. Such as from seeing a pile of T-shirts.
Read More
Coronavirus

Some Thoughts On COVID-19

If you ask people over a certain age, they can always tell you where they were when they found out about 9/11. I was a sophomore at Auburn, and my first class that day was at like 1pm, so I enjoyed the great collegiate tradition of sleeping in. Usually when I wake up the first thing I do is check my email. It’s still the first thing I do. That morning my inbox was full with messages on the fraternity mailing list, with things like “pray, a lot of people are dying today.” I turned on the TV just minutes before the first tower collapsed. Stayed glued to the TV the rest of the day. News coverage was on every channel, even Discovery Channel. Class was cancelled. I went and filled up my car in case I needed to drive the 250 miles back home to Tennessee. That evening I was in the SGA office in Foy Student Union folding thousands of little yellow ribbons for a very hastily organized memorial service on Samford lawn a few days later. We listened to President Bush’s speech on a small boombox in the office. I feel like I have been living that day over and over again for the last two weeks.
Read More
Ramblings

Some Work From Home Tips For Your COVID-19 Isolation

I’ve been working from home occasionally for probably close to ten years now, and full-time for the last few months. Thanks to the COVID-19 pandemic, many more people are now getting to enjoy (I guess?) the privilege of working from their homes during the crisis. If there is one thing that I hope comes out of this whole miserable period, it is the understanding that there are a lot of people out there who have jobs that really don’t need physical presence in an office building. And if they don’t need to be in an office, maybe they don’t need to live in an expensive city either. This could be the beginning of a whole new boom for small and mid-sized cities with affordable costs of living. Maybe you can afford a house after all! And maybe companies don’t need to lease out an expensive building in an expensive city, fill it to the brim with people in open floor plans or (even worse) hot-desking to do the work they need to do. It’s an even bigger win for disabled and non-neurotypical people who often struggle to work in the modern knowledge workforce despite their skills. For people with autism, ADHD, and other related conditions, modern open offices or cubicles are a difficult work environment whereas the home environment may offer much more safety and control. If this is your first time doing this, it may seem a bit odd, even naughty, to be working without commuting to an office building. With that in mind, I’ve put together a list of things I have observed over the years of working from home to help you get a feel for what this is like.
Read More
Home Assistant

Migrating from SmartThings to Home Assistant

I have been a SmartThings user for many years. The original reason was that, when we bought our current house in 2012, I wanted to turn the eave lights on at sunset and off a few hours later. After a short attempt to use Wifi-based Wemo switches, I settled on SmartThings and GE Z-Wave switches. I was so happy with it that I started putting them in more places. I added Kwikset SmartCode keypad locks and door sensors. I added more switches, like to turn on the garage overhead lights when the doors opened. I added sensors to monitor the temperature in the closet where I keep my server. And for many years this setup worked great. But over the last year, and especially since Samsung acquired SmartThings, I have become increasingly disillusioned with the SmartThings ecosystem. This last week, my disillusionment and frustration finally boiled over, and I migrated to a new platform. So why did I abandon SmartThings?
Read More
PHP

Using Phinx Programmatically

Phinx is a really cool database migration package that allows you to write changes to your database as code. It keeps track of which changes have been applied and allows you the option of rolling back if you hit an issue. All the documentation on Phinx describes a typical setup where you would run the phinx command to do your migrations. And that is all fine and good in most projects. But what happens if you are integrating Phinx into an existing project that already has a lot of the usual scaffolding in place?
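The key piece is Phinx’s TextWrapper, which lets your own code drive migrations without shelling out to the phinx binary. A minimal sketch, with the config path and environment name illustrative:

```php
<?php
use Phinx\Console\PhinxApplication;
use Phinx\Wrapper\TextWrapper;

$phinx = new TextWrapper(new PhinxApplication());
$phinx->setOption('configuration', __DIR__ . '/phinx.php');
$phinx->setOption('environment', 'production');

// Runs any pending migrations and returns the console output.
echo $phinx->getMigrate();
```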
Read More

2019

Release Announcements

petfeedd 0.2.2 Released

petfeedd, the software for pet feeders, has a new release. 0.2.2 is a maintenance release that adds support for new Raspberry Pi hardware. There are no breaking changes in this release.
Read More
Python

Archiving A Yahoo Group

I’ve been on the Internet a long time, since the early to mid 1990s. And when you are on the Internet that long, you tend to leave a pretty long trail behind you. But over the years that trail gets overgrown as sites close, lists vanish, and machines crash. There is precious little left from those early years. One thing that has persisted to this time, despite being pretty heavily neglected over the years, is Yahoo Groups. Those who remember the first dot-com boom may remember that Yahoo Groups was not originally Yahoo Groups. It was eGroups, which Yahoo bought and merged into their own sprawling empire. eGroups basically made it possible for anyone to set up a mailing list without needing access to a listserv service. Well, it looks like the end has finally come for Yahoo Groups. Verizon, the new owner of the rotting corpse of Yahoo, has announced that all groups will disappear on December 14th. I was on tons of mailing lists during my early Internet years, and I would really like to archive and preserve those messages if I could. But how could I get them out of Yahoo?
Read More
macOS

Solving CSSMERR_TP_CERT_EXPIRED error on OS X Installation

I have an old iMac that has been sitting unused upstairs for a while, and I finally decided to get rid of it. Before putting it on Craigslist, like any good computer owner, I wiped the drives and went to reinstall the most recent version of OS X/macOS that this old machine would support. In this case, that was El Capitan. But when I went to install, I kept getting an error about the OS not being able to install. Popping the log window open, I found an entry called CSSMERR_TP_CERT_EXPIRED. This would seem to be a prime suspect.
Read More
macOS

Better Sparkle Appcasts With Jekyll

If you have done any OS X/macOS development, especially any that predated the Mac App Store, you are probably aware of Sparkle. Even if you haven’t done any development, you have probably used Sparkle because it was basically the de facto method of providing update functionality in Mac Apps, and even to this day is still widely used on many apps distributed outside the official App Store. Updates are distributed to applications by means of an “appcast”, an extension of the RSS specification containing information about updates. RSS itself is based on XML, which means you can build them just like you would build any other published document. The problem comes when you start having a lot of updates in an appcast. Maintaining a large file can become difficult. But fortunately, using Jekyll collections, we can generate a single appcast from multiple files that are much easier to maintain. And, as an added bonus, we can use that same data to generate a download and changelog page.
Read More
nginx

Internal Auto-Renewing LetsEncrypt Certificates

I have a well-documented obsession with pretty URLs, and this extends even to my internal home network. I have way too much stuff bouncing around in my head to have to remember IP addresses when a domain name is much easier to remember. LetsEncrypt launched to offer free SSL certificates to anyone, but the most crucial feature of their infrastructure, and one that someone should have figured out sooner, was scriptable, automatically renewing certificates. Basically, they validate that you do in fact own the domain using automated methods, then issue you the new certificate. Thus, your certificates can be renewed on a schedule with no interaction from you. Traditionally, they have done this by placing a file in the webroot and looking for that file before issuing the certificate (see my earlier blog post about Zero Downtime nginx Letsencrypt Certificate Renewals Without the nginx Plugin for more detail about this.) But what happens when you want to issue an internal certificate? One for a service that is not accessible to the outside world, and thus, not visible using the webroot method? Well, it turns out there is a solution for that too!
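That solution is the DNS-01 challenge, which proves ownership with a DNS TXT record instead of a webroot file, so the host being certified never has to be reachable from the Internet. A sketch using certbot’s Cloudflare plugin; substitute whichever DNS plugin matches your provider:

```bash
# Issues a cert for an internal-only hostname via a DNS TXT record.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d internal.example.com
```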
Read More
Ramblings

A Fresh New Look

Welcome to the new, freshly redesigned robpeck.com! It’s amazing how you can become used to a design. It becomes like a warm coat. You love the predictability, you spent a lot of time getting the fonts right, getting the layout right, and everything is just perfect. That was the case with this site; it was pretty much exactly how it was way back when I migrated the site from Wordpress to Jekyll in 2013. To put that into perspective, my daughter was not even a year old yet. Barack Obama was just one year into his second term, the iPhone 5S had just dropped a month earlier, the first 4K TVs were shown off at CES. A long time has passed. And as the years pass, new devices and browsers appear. New technologies become available, and cruft builds up. In this case, a simple task of “I need to add a box to the site so that people will quit trying to use the comments for tech support and go to Github instead” became a full scale burn it down and start again redesign. So, aside from the new design, what else has changed?
Read More
Apache

Incrementally Migrating from Apache to nginx

I am currently in the process of migrating a bunch of sites on this machine from Apache to nginx. Rather than take everything down and migrate it all at once, I wanted to do this incrementally. But that raises a question: how do you incrementally migrate site configs from one to the other on the same machine, since both servers will need to be running and listening on ports 80 and 443? The solution I came up with was to move Apache to different ports (8080 and 4443) and to set the default nginx config to be a reverse proxy!
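A minimal sketch of that catch-all default, with the Apache ports as described above:

```nginx
server {
    listen 80 default_server;
    server_name _;

    location / {
        # Anything not yet migrated falls through to Apache on 8080.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```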
Read More
Ramblings

Just Take The Train

I love flying. Always have. Ever since my first flight as a kid, there was just something magical about getting into a giant metal bird and taking to the sky. I say was because it seems like, especially over the last decade or so, we have gone out of our way to make flying as miserable an experience as possible. The “golden age” of air travel is long behind us and flying is now just a completely miserable experience. And it pains me to say that because I used to love flying. I loved airports, watching planes, feeling the potential of all the places you could go. But now, it is just an objectively awful experience.
Read More
Swift

Hierarchies: Finding Parents, Children and Descendents using Swift

It usually doesn’t take beginning macOS/iOS developers long to discover NotificationCenter and see it as the solution to every single problem of passing data around to different controllers. And NotificationCenter is great, but it has some downsides. Notably, it is very easy to introduce retain cycles (and memory leaks) unless you are very careful to track and free the listener when the object is released. This has bitten me on several occasions. In general, excessive use of NotificationCenter ends up creating a difficult to maintain app where it is not entirely clear what is responding to what and where.
Read More
Swift

Creating Traits or Mixins in Swift

Object oriented programming is great, but sometimes things don’t fit neatly into a superclass/subclass hierarchy. You may have a piece of code that would be needed in several contexts, but for technical reasons beyond your control you cannot merge them into a single hierarchy. Some languages have the concept of multiple inheritance, where a subclass can specifically inherit from several parents. But this has its own set of problems. Many other languages, however, solve this through the use of traits or mixins. These allow us to have a set of methods that are basically copied into the object at compile time. This way they can be used anywhere they are needed. Swift doesn’t have the concept of mixins or traits per se. But, starting with Swift 3, you can get very equivalent functionality using protocol default implementations.
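A tiny sketch of the technique, with made-up names:

```swift
protocol Greetable {
    var name: String { get }
}

extension Greetable {
    // Default implementation: every conforming type gets this
    // method "mixed in" without sharing a common superclass.
    func greet() -> String {
        return "Hello, \(name)!"
    }
}

struct User: Greetable { let name: String }
struct Robot: Greetable { let name: String }

print(User(name: "Rob").greet())   // "Hello, Rob!"
print(Robot(name: "R2").greet())   // "Hello, R2!"
```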
Read More
Swift

Debugging the Responder Chain in Swift

Somewhat related to my previous post about responder chains, sometimes it is useful to be able to debug what all is in the responder chain at any given time. As a good rule of thumb, all ancestor views of a view are in that view’s responder chain, as well as (usually) the related controllers.
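One handy way to do it is a small extension that walks the chain and prints each link. Here’s an iOS-flavored sketch; on macOS you’d follow nextResponder on NSResponder instead:

```swift
import UIKit

extension UIResponder {
    // Print every responder from this one up to the top of the chain.
    func dumpResponderChain() {
        var responder: UIResponder? = self
        while let current = responder {
            print(String(describing: type(of: current)))
            responder = current.next
        }
    }
}

// Usage, e.g. from a view controller: view.dumpResponderChain()
```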
Read More
Swift

The Responder Chain: Bubbling Events using NSResponder and UIResponder in Swift

The responder chain is one of those parts of macOS and iOS development that may seem a little strange if you have not done any GUI programming before. Briefly, a responder chain is a hierarchy of objects that can respond to events. So, for example, a click or a tap might be passed up the responder chain until something responds to the action. But, the responder chain is more than just UI events. We can pass our own custom events up the responder chain as well!
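Here’s an iOS-flavored sketch of that idea, with a hypothetical selector; passing a nil target makes sendAction walk the responder chain until some responder implements the method:

```swift
import UIKit

@objc protocol ItemSelecting {
    func itemSelected(_ sender: Any?)
}

class ItemView: UIView {
    func notifySelection() {
        // nil target = bubble up the responder chain until a
        // responder implementing itemSelected(_:) handles it.
        UIApplication.shared.sendAction(
            #selector(ItemSelecting.itemSelected(_:)),
            to: nil,
            from: self,
            for: nil
        )
    }
}
```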
Read More
Swift

Sequential Chained Requests with Siesta and Swift

Siesta is a framework for Swift that dramatically simplifies working with RESTful APIs. And like many things in Swift, it is natively built around asynchronous execution. It may fire off any number of requests, and they may complete in an undefined order. But sometimes, you need to execute things in a specific order. Like when the result of one call will change subsequent calls. A classic example of this is an API where you might need to create a folder first, then upload files into the folder you created. So the folder creation needs to happen first, then the file uploads can happen after.
Read More
MySQL

Recursive Queries with MySQL

Discovered something neat with the new version of MySQL and thought it warranted a mention. Storing tree structures in a relational database is a common use case across many different areas of tech. The problem comes when you need to construct a query based on a subset of that tree. But MySQL 8 has some nice new features that make doing this a breeze.
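The feature in question is recursive common table expressions, new in MySQL 8. A sketch against a hypothetical adjacency-list table categories(id, parent_id, name):

```sql
-- Fetch node 42 and all of its descendants.
WITH RECURSIVE subtree AS (
    SELECT id, parent_id, name
    FROM categories
    WHERE id = 42
    UNION ALL
    SELECT c.id, c.parent_id, c.name
    FROM categories c
    JOIN subtree s ON c.parent_id = s.id
)
SELECT * FROM subtree;
```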
Read More
Javascript

Renaming Grunt NPM Tasks

For the last few years, Gulp has been my go-to task runner for Node projects and, generally, anywhere where I need to build things or run tasks. But the recent release of Gulp 4 broke all of my config files and left me with hours of frustrating rewrites, I decided to see what else might be out there. And, naturally, I landed on Grunt. One thing I liked about Gulp (prior to 4.0) was its much looser structure that allowed a lot of freedom in how you structured your file. Grunt seems to be much more structured and opinionated. And sometimes, I don’t like those opinions. A prime example of this is grunt-contrib-watch. When I type grunt watch, I want to run a series of setup tasks first before firing the watcher up. But grunt-contrib-watch squats on the prime real estate that is the watch command. But I wanted to use that command. And there doesn’t seem to be any way to just say “run these arbitrary tasks before starting the watcher.” At least not one that I could find clearly documented. Sure, I could just make my own mywatch or similar command, but I’m picky. I want my command, so we need a way to rename it.
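One escape hatch is grunt.renameTask. A sketch of the Gruntfile shape, where sass and concat stand in for whatever setup tasks you actually need:

```js
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-contrib-watch');

  // Move grunt-contrib-watch's task out of the way...
  grunt.renameTask('watch', 'watcher');

  // ...then reclaim "watch" as an alias that runs setup tasks first.
  grunt.registerTask('watch', ['sass', 'concat', 'watcher']);
};
```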
Read More
PHP

Monitoring for Filesystem Changes using PHP and Laravel

Let’s say you have a Laravel application that does some data processing, and you want to monitor a directory for incoming changes, that you can then process using queued jobs. There are a couple of ways you could do something like this. You could scan those directories on a schedule using a cronjob. It’s doable. But what happens if you want to monitor a few thousand directories for changes? You can use tools like incron. Also doable, but another dependency. But what if I told you that you could do it all with PHP? And within Laravel, no less?
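One way to pull this off is the PECL inotify extension, which exposes the Linux inotify API to PHP. A minimal sketch; the watch path and the ProcessIncomingFile job class are hypothetical:

```php
<?php
$fd = inotify_init();
$watch = inotify_add_watch($fd, '/data/incoming', IN_CLOSE_WRITE | IN_MOVED_TO);

while (true) {
    // Blocks until the kernel reports filesystem events.
    $events = inotify_read($fd);
    foreach ($events as $event) {
        ProcessIncomingFile::dispatch('/data/incoming/' . $event['name']);
    }
}
```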
Read More
Apple

The 2018 MacBook Pro Sucks

I’ve been an Apple fan for a long time. My first laptop was a Powerbook 5300cs, purchased secondhand at the Auburn University Surplus Auction. I’ve been using Apple equipment exclusively since 2007. My desktops and laptops are all Apple, I use AppleTVs exclusively for streaming, I carry iPhones and iPads. If it has a shiny Apple logo on it, I’ve probably bought one. So it pains me to write this post, but… The 2018 MacBook Pro sucks. There. I said it.
Read More
Release Announcements

petfeedd 0.2 released, with Docker support!

petfeedd, the daemon I wrote for my Raspberry Pi-powered cat feeders, has been updated to fix a number of bugs people were seeing when attempting to install it since I originally wrote it in 2017. Perhaps the biggest change is Docker support! That’s right, if you just want to run petfeedd, now you can do it in just three commands! No more installing various libraries and things (but that approach still works as well.)
Read More

2018

Release Announcements

New Open Source Code

Launched two new pieces of open source code in the last couple of months. PlayerControls is a macOS Cocoa framework that creates a View containing playback controls for media like videos or sounds. It is written in pure Swift 4 and has no dependencies. SearchParser is a parser that converts a freeform query into an intermediate object, that can then be converted to query many backends (SQL, ElasticSearch, etc). It includes translators for SQL (using PDO) and Laravel Eloquent ORM. It supports a faceted search language as commonly found on many sites across the web. It is written in modern PHP. Both are licensed under the MIT license. Go check them out on Github.
Read More
PHP

Building Meaningful Video Thumbnails Using FFMPEG and PHP

Working on doing some upgrades for one of my clients and I hit on an idea. He has a lot of videos available, but each one only has a static image as a thumbnail, taken at a set point in the video (by default; the owner or an admin can go in and recreate the thumbnail at a different time point if they want.) But what if, instead, we could create an animated GIF composed of several frames from the video? From a user’s perspective, a single frame might not tell you a lot about a video. But ten frames taken over the course of the whole video can tell you a lot more about the video than the single frame would. How would we implement something like that?
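One way to sketch the implementation in PHP, shelling out to ffmpeg and ffprobe (both assumed to be on the PATH; file names illustrative):

```php
<?php
$video = 'video.mp4';

// Ask ffprobe for the duration, then spread 10 frames across it.
$duration = (float) shell_exec(
    'ffprobe -v error -show_entries format=duration -of csv=p=0 '
    . escapeshellarg($video)
);
$fps = 10 / $duration;

shell_exec(
    'ffmpeg -i ' . escapeshellarg($video)
    . " -vf fps=$fps,scale=320:-1 -loop 0 thumb.gif"
);
```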
Read More
cars

Customizing Screens on a Toyota Entune Infotainment System

So after twelve years driving an ultra-reliable 2006 Toyota Tacoma, I decided it was finally time to upgrade. So, of course, what else to buy … but a 2018 Toyota Tacoma. :) Things have really changed in twelve years, and where my old truck originally came with a simple CD player head unit (that I later upgraded to a Clarion CX-501, primarily because I wanted Bluetooth), my new truck has this fancy touchscreen entertainment system that has mountains of options and can even show me weather radar while driving! So I was exploring around inside the menus last night and I discovered that you can, theoretically, set custom images as your startup and “screen off” images. But, unfortunately, the details of how to do this are buried somewhere in a SEVEN HUNDRED PAGE owners manual with a very thin index. Ain’t nobody got time for that. So I googled around and found some answers on forum threads, and decided to write a post on how to do this to raise the visibility of it some.
Read More
Digitization

Digitizing Two Hundred Cassette Tapes

Like most people who grew up in the 80s and 90s, I had a pretty large collection of cassette tapes. But probably unlike a lot of people, I’ve managed to hang onto them, or at least a lot of them, over the years. This big box of tapes has traveled with me through probably a dozen moves over the years, and it’s always been in the back of my head, “I should probably digitize them.” The thing was, I had a bunch of cassette albums, sure, but most of them I eventually replaced with CDs and later ripped to MP3s. But I had a ton more of mix tapes and TV recordings.
Read More
Linux

Backing Up and Rotating MySQL Databases the Easy Way

Here’s a little quickie for you. Say you have a small MySQL server floating around your house that you want to have regular backups of. You do want regular backups, right? In my case, the biggest motivation was wanting a regular way to grab a recent MySQL dump of an internal tool I use at home to develop against. After poking around the Internet a bit, I was surprised that, other than mysqldump itself, there doesn’t seem to be a simple tool out there that you can slam into a cronjob and let it do its thing. So, like any good hacker, I decided to brew my own. After all, when you have 256,428 different solutions, why not make solution 256,429? :)
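The general shape of such a script, as a sketch you can drop in a cronjob (credentials are assumed to live in ~/.my.cnf, and the paths and retention window are illustrative):

```bash
#!/bin/sh
BACKUP_DIR=/var/backups/mysql
mkdir -p "$BACKUP_DIR"

# Dump everything, compressed, stamped with today's date.
mysqldump --all-databases | gzip > "$BACKUP_DIR/all-$(date +%F).sql.gz"

# Rotate: delete dumps older than seven days.
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete
```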
Read More
nginx

Securing static resources with cookies, nginx, and Lua

I’ve been working with one of my clients the last month on migrating his iron-based architecture to a cloud-based provider. In this transition, we are going from one or two physical servers to multiple cloud servers and separating out parts to better scale each individual service. As part of this, we are moving a significant library of images and videos away from being served off the same web server as the application and to a server tuned to handle requests for these static assets. The problem is that a lot of these assets (the videos and full-size images) are for paying members only. We need a way to secure those resources across physical servers.
Read More
Hammerspoon

Wallpaper Swapping with Hammerspoon

Hammerspoon is a pretty nifty tool. It’s kind of difficult to explain what it does, but the best I can do is that it allows you to use Lua to script actions on your Mac and, crucially, respond to events. For instance, I use Hammerspoon to launch all my applications when I get to work and lay them out on the screen in the order that I like. I can do this because I was able to attach a location listener to my work’s location, and execute Lua code on arrival. The amount of things that you can do with this tool is pretty stunning. It’s become an indispensable part of my macOS experience.
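To give you the flavor, here’s a tiny sketch of what goes in ~/.hammerspoon/init.lua; the hotkey and app are just examples:

```lua
-- Cmd+Alt+T: launch Terminal, or focus it if it's already running.
hs.hotkey.bind({"cmd", "alt"}, "T", function()
  hs.application.launchOrFocus("Terminal")
end)
```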
Read More

2017

Security

That Time I Became A 76-Year-Old New Yorker

Or, what happens when you send an email to the wrong place. Note, for the time being, I have redacted the names of the company and doctor involved as I am attempting to follow through with a responsible disclosure process for this security issue. I had something very strange happen to me today. It all began with a random email to an address that I don’t use much anymore, but still have in Mail. It’s an email account I’ve had for over 13 years, so it still gets the occasional stray email. The subject read as follows: “Welcome To <redacted> Patient Portal”. Followed in quick succession by another one: “Welcome To <redacted> Patient Portal”. Wait, what?
Read More
Linux

The Brilliance of Linux

I’ve been a Linux user for many, many years. Going all the way back to Red Hat 5.2, which I picked up to install on an ancient Packard Bell 486 in the late 90s. Since then there’s always been at least one Linux machine in my dorm, apartment or house somewhere. At various times I’ve even run it for my desktop OS, although these days I use macOS for that. For much of that time, Linux was the choice of hackers, but was definitely not a choice for everyday users and required a significant amount of technical knowledge to run. That’s not true so much anymore, but growing up in that environment I learned a lot about how computers and operating systems work.
Read More
Raspberry Pi

Using the DS3231 RTC (Real Time Clock) with Raspberry Pi

In my last post about building the pet feeders, I alluded to one of the limitations the Raspberry Pi has: it lacks a real time clock. This is an understandable omission. They take up extra space and cost, are not needed for a lot of applications and can be pretty easily added if they are. One of the limitations I found is that, if there is a power outage that lasts a significant amount of time - long enough for the UPS batteries that keep the wireless up to go dead, for instance - the Raspberry Pis may “lose” track of time if they can’t reconnect to wifi and, thus, sync up by NTP.
Read More
Python

Rob's Raspberry Pi Powered Pet Feeders

As with many of my projects, it started with something that made me angry. In this case, it was this: The Petmate Le Bistro Pet Feeder. Okay, let’s back up a little bit. Back to about 8 or so years ago. We had a cat at the time, Pumpkin, who was objectively not a good cat. She was foul tempered on the best of days and very difficult to love. But she was my wife’s and my first pet, so we did love her all the same. She had a habit of wanting food precisely on time. And if it was late, she would raise all manner of noise until she was fed. Often this came at some ungodly early time in the morning. So I bought a Petmate Le Bistro Pet Feeder.
Read More
Python

Using Pipenv with Systemd

So I’ve been doing a bit of Python recently for a project I’m working on on a Raspberry Pi. There will be a longer blog post about that in the next few weeks. But one thing I ran up against was that I wanted to start my daemon, written in Python, using a systemd service on Raspbian. Normally, you would just shove a script invocation into a systemd unit and call it good, but in my case I had made use of Pipenv, which is a bit like Bundler in the Ruby world and Composer in the PHP world, to manage my project’s dependencies.
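The punchline is that systemd can just invoke pipenv run, which resolves the project’s virtualenv on its own. A sketch of the unit file, with names and paths illustrative:

```ini
[Unit]
Description=My Python daemon
After=network.target

[Service]
WorkingDirectory=/home/pi/mydaemon
ExecStart=/usr/local/bin/pipenv run python daemon.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```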
Read More
Space

The 2017 Total Solar Eclipse

Sometime in the mid 90s, I downloaded an astronomy program for my computer. I don’t even remember what it was called. In poking around on it, I discovered that it could plot future total solar eclipses and that one would pass, from the resolution of the map, very close to where I then lived in eastern Tennessee. The date was August 21st, 2017.
Read More
Release Announcements

Collecting Unifi controller data with collectd

As you can tell from the last few posts, I’ve been having a lot of fun with collectd and instrumenting my systems. But I had one glaring hole until recently: my Ubiquiti UniFi access points. Well no longer!
Read More
Linux

Options Have Meanings, or, How I Made an rsync Seven Times Faster

Warning: Doing this is making a clear tradeoff between security and speed. Do not do this on the public Internet or across a network you do not trust. rsync is one of those tools that is in every computer user’s toolkit. It’s fantastic for moving large amounts of data around and for migrating data from one system to another. rsync also has a ton of options and, after a while, you get to where muscle memory means you just type the same few options over and over again. With me, that was -avz, archive, verbose, compression. Recently, I was migrating several terabytes of data from a NAS to a computer. As is often the case, I fired up an rsync job and watched it. It maxed out at about 35 megabit. Across a gigabit switched internal network.
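To give away the general shape of the fix (not necessarily the exact options from the post): on a trusted LAN, dropping compression and choosing a cheaper SSH cipher buys a lot of throughput. A sketch; the cipher name is one example of what your OpenSSH build may offer:

```bash
# Note: no -z. On a fast trusted LAN, rsync's compression and ssh's
# default cipher both burn CPU the link doesn't need saved.
rsync -av -e "ssh -c aes128-gcm@openssh.com -o Compression=no" \
  /mnt/nas/data/ user@desktop:/data/
```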
Read More
Release Announcements

Harvesting Nest Thermostat Data For Fun And Profit

Okay, no profit in this, but it certainly is fun! I have two Nest thermostats in my house and, after some teething pains (yay the life of an early adopter) they have been pretty solid. But they’re also black boxes that I know little about. I know they’re collecting mountains of data and sending it back to the Google mothership. Wouldn’t it be nice to get at some of that data and build my own reports?
Read More
collectd

More collectd and pfSense Fun!

Extending my post from last year, here’s some additional data I’m grabbing from pfSense and stuffing into collectd via a script. I’m now grabbing:

- DHCP Leases
- CPU Temperature
- Thermal Zone Temperature
- SSD Drive Temperature
- UPS information (via NUT)

Here’s the exec script:
Read More
Javascript

Extending ngResource To Access Metadata

AngularJS’s built-in ngResource is a great tool for natively supporting REST APIs in your Angular application. But what happens when you need to support something besides a simple call that retrieves a list of JSON objects? You quickly run into the limits of ngResource. Here’s a great case where you might need to do something more complex: paging. Say you want to get a list of objects, and there’s 10,000 or so of them. You don’t want to send 10,000 objects to your frontend app. You want to send a portion of them, but you still need to indicate to the app that there are more. Surprisingly, considering how widespread this pattern is in web development, there does not seem to be a native way to accomplish this. But you can extend ngResource. Here’s how I did it.
Read More

2016

What I Use

What I use: 2016

Since it’s been a while since I wrote a post about what I use in regards to software, hardware, etc., perhaps it’s time that I did that again. So here’s a list of what I’m using in 2016:
Read More
pfSense

Collecting Data From pfSense Using collectd

So I’ve recently been on a graphing thing, wanting to collect all kind of data from my home network. And collectd seems to be a good candidate for doing that. With a huge number of plugins, it can collect and send just about anything you can think of to a time series database (I’m using InfluxDB for this). But, there’s a significant hole in my data collection: my pfSense firewall. Well, not anymore!
Read More
MySQL

Finding Multi-byte Characters in MySQL Fields

So I was recently helping a client with an issue in MySQL where a migration failed to transfer the full contents of some fields. This amounted to a little over 1% of the total messages transferred. In doing some research, we discovered that the one thing every message had in common was the presence of multi-byte (high unicode) characters. In many cases, this was due to a user pasting some text from Microsoft Word.
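A handy way to find those rows: a string containing multi-byte characters is exactly one where the byte length and the character length disagree. A sketch with illustrative table and column names:

```sql
-- LENGTH() counts bytes; CHAR_LENGTH() counts characters.
SELECT id, message
FROM messages
WHERE LENGTH(message) <> CHAR_LENGTH(message);
```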
Read More
Facebook

Why I'm (Almost) Quitting Facebook

So this is something I’ve been meaning to write for awhile now. It’s time we had a talk about Facebook. I was an early adopter of Facebook. Not as early as some, but before it was open to everyone. Back when it was a social network for college students and you needed a .edu address to join. Back when you still had to choose a “network” (I think I’m still technically in the Auburn network, buried down in my settings somewhere). I’ve been on Facebook probably more than 10 years at this point. But now, I’ve finally decided to call it (mostly) quits on Facebook.
Read More
Javascript

Creating a simple predicate builder with AngularJS

So I’ve been working on a project recently where I needed a simple predicate builder. Basically I needed a way to allow users to build a somewhat complex search using a GUI. And since we are using AngularJS on this project, here’s a quick article about how I did it.
Read More
Mac

Multiple Calibre Servers under Mac OS X

So there’s this program out there called Calibre which, despite its pretty terrible UI, is pretty much the gold standard for managing eBooks. Seriously, it’s such a great program whose only fault is its terrible, engineer-designed UI. One of the nice things that Calibre includes is a built-in web server that can serve books via OPDS. If you have an OPDS-compatible reader (I use Marvin), you can browse and download from your library directly on your device, basically creating your own private eBook cloud. But, this presents a little bit of an issue. Namely, I don’t want all of my books to be publicly available, while still providing a subset of my library for visitors to browse and use. But I still want to be able to access them myself from my “private reserve collection.” Fortunately, with a little bit of work, you can do that under Calibre.
Read More
Apache

Pretty URLs - Serving Plex from behind a proxy using mod_proxy and Apache

I’m obsessed with pretty URLs. I admit it. I love looking at a properly formatted URL that just looks nice. I’m slowly converting our internal media library over to Plex now that it is available on the new AppleTV. In doing that, I noticed that the Plex web interface, by default, serves from port 32400. So the URL ends up looking something like this: http://172.16.104.4:32400/web/index.html Twitch.
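The fix is a small mod_proxy vhost. A minimal sketch (plex.example.com is a placeholder, and newer Plex builds also want websocket proxying on top of this):

```apache
<VirtualHost *:80>
    ServerName plex.example.com

    # Hide the :32400 port behind a friendly hostname.
    ProxyPass / http://172.16.104.4:32400/
    ProxyPassReverse / http://172.16.104.4:32400/
</VirtualHost>
```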
Read More

2015

PHP

Securely Signing PHP Phar Files With OpenSSL

PHP’s PHAR archives (PHp ARchives, get it?) are a neat development. They’re a way to distribute an entire PHP application as a single archived file that can be executed directly by the PHP interpreter without unarchiving it before execution. They’re broadly equivalent to Java’s JAR files and they’re super useful for writing small utilities in PHP.
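The signing itself is small. A minimal sketch (requires phar.readonly=0 in php.ini, and PHP expects the matching public key to ship beside the archive as myapp.phar.pubkey; paths are illustrative):

```php
<?php
$phar = new Phar('myapp.phar');
$phar->buildFromDirectory(__DIR__ . '/src');

// Sign with an OpenSSL private key; PHP verifies against the
// .pubkey file whenever the archive is loaded.
$phar->setSignatureAlgorithm(
    Phar::OPENSSL,
    file_get_contents('/path/to/private.pem')
);
```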
Read More
Ramblings

An Open Letter to Chief Justice Roy Moore

Those of my longtime readers will know that I very rarely if ever mention anything on this blog other than my ramblings on tech. But today is a very different day and I feel compelled to write about this. So I’ll ask for a mulligan. And, as always, my views here do not represent anything or anyone other than me.
Read More
pfSense

Scheduled Throttling with pfSense

Apple has launched a new Photos App for OS X, along with the ability to upload your entire library to iCloud. And with prices that are so cheap, there’s almost no reason not to. $3.99 a month is cheap insurance to know that every photo I’ve ever taken of my family won’t be wiped out in a tornado. But with this comes a problem - namely, how do you upload 150 gigabytes of photos over a 5 megabit network connection? Well, you wait a really long time for it to upload. Which is fine, really, because I’m not in any particular hurry to finish. But, once I started the upload, I noticed that surfing the web became pretty much impossible because the upload to iCloud was saturating my upstream bandwidth.
Read More
Apple

NSHTMLTextDocumentType is Slow

So I was confronted with an interesting bug this week, and I wanted to share it with everyone so maybe it will save you some time. Put simply, NSAttributedString with NSHTMLTextDocumentType is slow. Dog slow. So obscenely slow that it should probably never, ever be used.
Read More
Release Announcements

Responsive CSS3 Columns with Sass and Bootstrap

Impatient? Scroll to the bottom to download. So I recently was working on a site and wanted to use CSS3 columns. But I really like how the grid system works in Bootstrap, and wanted to be able to define columns in a similar way (i.e. have a different number of columns depending on the screen size). Not finding any pre-cooked versions, I decided to write my own. Strictly speaking, you don’t need Bootstrap for this to work. But I did re-use Bootstrap’s grid variables so that it breaks along the same lines that Bootstrap’s grid does. It’s also worth noting that, natively, the columns will collapse on their own if you specify a width. This method just gives you a bit more control.
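The essence of the technique, sketched with bootstrap-sass’s breakpoint variables (assumed to be available; add vendor prefixes or run autoprefixer for older browsers):

```scss
.article-columns {
  column-count: 1;
  column-gap: 2em;

  // Break at the same widths as Bootstrap's grid.
  @media (min-width: $screen-sm-min) { column-count: 2; }
  @media (min-width: $screen-md-min) { column-count: 3; }
}
```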
Read More
HOWTO

Installing the Ubiquiti UniFi Controller Software on pfSense 2.2

Stop! Do Not Do This! I am leaving this here for reference and posterity, but for a variety of reasons, I no longer recommend doing this. It is a neat hack, but tends to be a bit of a pain to live with as you end up having to troubleshoot or reinstall it every time you update pfSense or Unifi. When you can install it on a Raspberry Pi for less than $50, there's really no need to do this. I personally have switched to running this on a stock Ubuntu system that runs a few other network services in my house. This is a short tutorial on how to install the Ubiquiti Networks’ UniFi Enterprise Wifi controller software on pfSense 2.2. These directions are derived from these directions for 2.1-RC, but have been updated to work on 2.2. Note that this is a somewhat advanced tutorial. If you are not comfortable working in a Unix command line or editing system files, this is probably not the best thing you could do. But I’m putting it out here in case it will help others.
Read More
News

Design Tweaks and New Content!

So I’ve tweaked the design of the blog a bit. There’s now a header, and all the links that were in the sidebar are now in the header. There were simply too many things being added and it was getting unwieldy having them all in the sidebar. It’s now fully responsive on all levels of mobile, tablet included! I’ve added a new page with a bunch of content I’ve been collecting on the subject of Interstellar Travel.
Read More
Apple

UILocalNotifications and time zones

Here’s a tip when dealing with UILocalNotifications. If you want to schedule a notification for a specific time using fireDate, you need to apply a timeZone to the UILocalNotification object. Otherwise, iOS will interpret this as an absolute, countdown-based date based on GMT.
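In code (the pre-iOS 10 API this post is about), the fix is one line:

```swift
import UIKit

let notification = UILocalNotification()
notification.fireDate = Date().addingTimeInterval(3600) // an hour out
notification.alertBody = "Reminder!"

// Without this, iOS treats fireDate as an absolute GMT-based moment;
// with it, the fire time tracks the user's wall clock.
notification.timeZone = TimeZone.current

UIApplication.shared.scheduleLocalNotification(notification)
```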
Read More
DD-WRT

Switching to pfSense

So after several years of successfully using DD-WRT, I finally decided to move to pfSense. There are a multitude of reasons for this move, but I’ll try to enumerate some of them.
Read More
Ramblings

Cutting The Cord

“When television is good, nothing — not the theater, not the magazines or newspapers — nothing is better. “But when television is bad, nothing is worse. I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.” In 1961, FCC chairman Newton Minow gave a famous speech bemoaning the state of television. While at the time he was criticizing “game shows” and “formula comedies about totally unbelievable families,” among other things, I would argue that his statements are even more true now than they were in 1961. I remember when cable TV first came to my family. We were living in Florida in the 1980s, and suddenly we had more choice than just four channels. Although it couldn’t have been more than 30 or so channels, there was now choice and an endless stream of things we could watch. Throughout the 90s, we always had cable through all our moves. When I left for college, we had cable in the dorms. When I moved out, I got cable. When I moved to Huntsville, I got cable. When I bought my first house, I got cable. When we moved in 2012, we moved our cable too. The vast majority of my life, I have had cable. And today, for the first time since I was a kid in 1980s Florida, I walked away from cable TV. We cut the cord, and went back to just a standard antenna and an Internet connection. This has been something that has been a long time coming. It’s something we first seriously started considering in 2012 when our daughter was born and we stopped watching a lot of TV. But even then, my dissatisfaction with the ever increasing price and decreasing quality of cable TV had been building since the mid 2000s. So this is why I decided to cut the cord and cancel my cable subscription.
Read More

2014

Apple

360iDev 2014: A Review

So last month I had the pleasure of attending 360iDev in Denver, Colorado. Overall, this was a very good conference. As always, I learned so much from my fellow developers.
Read More
Legal

Some Thoughts on Aereo

So today the Supreme Court ruled Aereo, the Internet TV streaming service, to be in violation of copyright law. And, at least to me, this was not unexpected.
Read More
Apple

101 Ways to Save Apple: A Look Back to 1997

So this post from 1997 titled “101 Ways to Save Apple” made it to the front page of Hacker News today. Ahh, what a great look back to a time that really doesn’t seem like it was that long ago. It was only 17 years ago, but the Apple of 1997 and the Apple of 2014 might as well be completely different companies.
Read More

2013

Ramblings

Why you shouldn't learn to code

The Internet is abuzz with the news that President Obama is calling on every American to learn how to code. And while I think it’s a good idea for everyone to have a basic grasp of computer technology and a basic understanding of the role computer programmers play in the world, I have some very specific thoughts about whether or not everyone knowing how to code is really a good idea.
Read More
Apple

Cocoaconf Atlanta 2013: A Review

So this past week I attended the first (I think) Cocoaconf to be held within a reasonable distance of Huntsville. In this case, a mere 3.5 hours away in Atlanta. Overall, I’d say this was a very good conference. It was small (I’m guessing about 150 or so attendees). The location was easy to get to, and the conference in general seemed well organized.
Read More
What I Use

What I use: 2013

Since it’s been awhile since I wrote a post about the software I use, perhaps it’s time that I did that again. So here’s a list of the software I’m using in 2013:
Read More
Jekyll

dystill moved to Jekyll and Bootstrap

I moved the dystill website to Jekyll and Bootstrap. This was pretty simple overall, since the site is just one page. It was more a task of converting the custom CSS I wrote to use the matching Bootstrap libs. I also added the neat little ubiquitous “Fork me on Github” ribbon you see on a lot of sites. Go check it out at dystill.org.
Read More
Jekyll

Welcome to the new robpeck.com!

So you may notice that robpeck.com now has an entirely new look. It’s not just a new look, but a ground-up re-architecture of my blog.
Read More
Parenthood

Nine lessons I've learned since becoming a Dad

On November 27th, 2012, I became a Dad. My little girl, Scarlett, was born at a little past 8pm that night. Being that she’s coming up on nine months here in just a few days, I thought I would look back on what lessons I’ve learned in the nine months since she’s been on planet Earth. This post could alternatively be titled: What I wish people had really told me before becoming a Dad.
Read More
Apple

Creating an iTunes Dropbox on a Mac

I recently added a Mac mini to my setup at home that I’m using to drive my in-home “video on demand” system. With many of the TVs in the house on AppleTVs, any TV in the house can watch any movie in the library at any time. I put the mini (headless) in the closet, along with the Drobo and a printer. But the new Mac mini lacks an optical drive. So, how to continue ripping the DVDs I already own? The solution, it turns out, is to keep doing the actual ripping on my iMac, filtering the files through iDentify and MetaX. But I don’t want to have to use Screen Sharing into the mini and add each file to iTunes. I want that to happen automatically. That’s where Automator - one of the most underrated pieces of software that comes with every Mac - comes in. With Automator, you can attach an action to a folder, so that the action will be performed whenever anything is added to that folder. So here’s what I did to get files from a folder into iTunes:

1. Create a folder somewhere on your system. I put mine in my user directory.
2. Open Automator. From the dialog box, select “Folder Action.”
3. At the top, where it says “Folder Action receives files and folders added to,” select “Other” and select your new folder.
4. Search for an action called “Set Value of Variable.” Drag it over to the right. From “Variable,” select “New Variable” and call it “Source.”
5. Search for an action called “Import Files into iTunes.” Drag it over to the right, underneath the variable action. Be sure to select “Library” from the empty dropdown.
6. Search for an action called “Get Value of Variable.” Drag it over to the right, underneath the iTunes action. Be sure the selected variable is “Source.”
7. Search for an action called “Move Finder Items to Trash.” Drag it over to the right.
8. Search for an action called “Run AppleScript” and drag it over to the right. In the AppleScript action, paste this:

    on run {input, parameters}
        tell application "Finder" to empty trash
        return input
    end run

9. Save the action. You’re done.
Read More

2012

Apple

On Apple and Maps

Unless you’ve been living under an Internet rock, you know that Apple released a new version of iOS, featuring a much-heralded new mapping application that replaces the old Google Maps application. You also probably know that it has been roundly criticized and mocked. Now, before anyone accuses me of blatant fanboyism (although, without a doubt, I am an Apple fanboy), the new Maps application sucks. It’s a step backwards from the previous one, lacks some of the features the previous one had, and is generally disappointing. I especially don’t like how it didn’t degrade gracefully on the iPhone 4 (no voice turn-by-turn and, for some strange reason, it seems unable to count down the distance to the next turn). In general, it clearly isn’t ready for primetime, Apple over-advertised it, and I think Apple could have done better.

But, other than being disappointing, the new Apple Maps application has no real impact on me because I never used the old one. It sucked also, and I think people are forgetting just how badly the old one was allowed to languish without significant updates. The Google-based Maps app in iOS 5 was essentially unchanged from the one that was released four years earlier. Here are some things that were wrong with the “old” Google-based Maps app as of iOS 5:

- No voice turn-by-turn directions. You had to manually advance while driving when doing directions.
- Searching sucked: you had to spell things exactly right or it wouldn’t find them. For some reason, it thought I lived in Madison, Wisconsin all the time.
- If you lived anywhere but a large city, it was essentially useless for navigation. You could maybe find an address, if you knew exactly what you were looking for, but you couldn’t safely get there in a car.

In short, I long ago abandoned the built-in mapping application for better services available in the App Store. I’ve been using the free Mapquest app for the last couple of years, and it’s worked great. It’s gotten better with each iteration, to the point where it’s pretty much my sole navigation app now. My process flow has usually been to search for an address in Safari (or get it from another app like Yelp or Urban Spoon) and paste it into Mapquest, or to search in Mapquest (usually with a little better luck than the built-in Maps app). Bam. Reliable turn-by-turn voice navigation that actually works on my iPhone 4. Pair that with the new Bluetooth stereo I put in my truck, and it works like butter. If it would just do night colors, it would be damn near perfect.

Now, if I lived in a big city, I could understand being upset with the new app. Lacking walking, biking, and public transit directions seems a curious oversight considering how many of Apple’s iPhone customers live in those environments. I can’t really understand why Apple would leave that data out unless they simply could not get it, and I hope they address that soon for all the people who do rely on it. But, for me, the new Maps app sucks just as much as the old one did, and I will not use it just as much as I didn’t use the previous one. So, for me, nothing has changed. More than anything, though, I’m disappointed in Apple, for two reasons: one, that they couldn’t get this right, and two, that they released it in spite of what they had to know were serious problems with it, all while heralding it as one of iOS 6’s killer features. Especially in light of news that they still had a year left on their Google Maps contract, I would rather they have spent that year fine-tuning it rather than release something that clearly wasn’t ready for primetime.
Read More
Apple

Merging M4V files on a Mac ... with chapters!

As I’ve mentioned a couple of times before, one of my projects right now is ripping all the DVDs I own so that I can watch them on my AppleTV (or any AppleTV in the house). Well, one of the problems I’ve run into a couple of times is longer movies that are distributed on two discs. This is usually movies like the Lord of the Rings Extended Edition or The Ten Commandments. Really, they’re one movie, but are distributed as two separate movies because of the restrictions of physical media. Well, digital media imposes no such restrictions on us, so why have two separate movies listed on the AppleTV? So after much trial and error, I finally discovered a way to get everything to play nicely together. Unfortunately, this is not an easy problem to solve and even involved me writing a small script that could merge chapter files together, because every single method I could find would eliminate chapter markers. So here, in abbreviated form, is the process for merging m4v files together and preserving chapter markers. Note: This tutorial assumes some level of technical proficiency. This is not a point-and-click process (yet :P) and requires the use of multiple tools and the shell.

Tools you’ll need:

- Handbrake, or whatever tool you’re using for ripping your legally obtained DVDs.
- MetaX and/or iDentify
- Subler
- remux
- QuickTime, which is now built into Mac OS X.
- chaptermerge, a script I wrote that merges chapter files together.

The process:

1. Rip both movies from their individual DVDs using Handbrake or whatever other tool you’re using. Be sure you’re adding chapter markers.
2. Load each movie into MetaX and download the chapter names. That’s really the only thing you need to add to the file. Save the files with chapter names.
3. Load each movie into Subler and extract the chapter files. To do this, select the chapter track and select File -> Export.
4. Open the first movie in QuickTime. Drag the second movie on top of the first one. QuickTime will add the two together. Save the movie for use on an AppleTV. Get a beer or six, because this takes a while.
5. While the movie is saving, use chaptermerge to merge the chapter files together. See the docs on how it works.
6. Once the file has finished saving as a QuickTime MOV (it’s actually still h.264 inside the file), fire up remux and convert the merged file back into an m4v. Drag the file into remux, set the output to m4v, and save. It should be pretty quick - a matter of minutes.
7. Load the merged file back into Subler and add the merged chapter track. Drag the chapter file into the Subler window. Save the file.
8. Load the merged file into a tool such as iDentify or MetaX and add the remaining metadata.

That’s it! You now have a merged file with both parts of the movie, accurate chapter markers and full metadata, ready to be copied to iTunes and viewed on your AppleTV.
Read More
Ramblings

NBC and the Olympics

It’s always amusing to watch what happens when old media slams head first into a new world. NBC, the broadcaster holding the rights to Olympic coverage in the United States, seems not to have realized how much the world has changed since Beijing in 2008. Social media is huge now - much more so than it was then - and people routinely have access to a much larger amount of information than we did back then. Whereas most countries saw the Opening Ceremonies, or could at least access them, in realtime, NBC decided to show them on a three-hour tape delay so they could cash in on the larger primetime audience. I actually had to turn Twitter off yesterday afternoon because I was already seeing tweets about the Opening Ceremonies from people in other countries and at least one person I know who was actually at the thing.

Now, to their credit, NBC is actually streaming a lot of coverage live on their website and showing highlights for the American audience in primetime. So why not do the same with the Opening Ceremonies? Why not stream it live on the website for those of us who might have wanted to watch it in realtime, then show the tape-delayed version later for the larger audience? Well, someone asked NBC that and this was, no lying, their response: “They are complex entertainment spectacles that do not translate well online because they require context, which our award-winning production team will provide for the large primetime audiences that gather together to watch them,” the network told the Wall Street Journal.

Right, because we’re all bloody mouth-breathing morons who can’t figure out what’s going on without their precious context. Is this the same “award winning production team” that didn’t know who Tim Berners-Lee was or realize the significance of the computer he was sitting at? Tim Berners-Lee is why I have a job. Tim Berners-Lee is why I’m able to type this right now, and why an economy that generates billions of dollars every year exists. The British thought it important enough to salute him in the Olympic Opening Ceremonies. They didn’t even know who he was? Is this the same “award winning production team” that made cracks about Kim Jong-Il while the North Korean team was walking in the parade of nations? Yes, he was a brutal dictator and his “11 holes in one” story is laughable to say the least. But first of all, he’s dead now, and second, the Olympic Opening Ceremonies are not an appropriate time or place to be cracking jokes about other countries’ deceased leaders. I wonder if the BBC called Mitt Romney (who was sitting in the audience) “the American Borat” or made cracks about the French president? Is this the same “award winning production team” that never mentioned that Kenneth Branagh was playing the role of Isambard Kingdom Brunel, perhaps the greatest engineer that ever lived?

Here’s a clue, NBC: anyone with two brain cells could figure out what was going on, and your “award winning production team” was annoying. Not to mention the advertising EVERY FIVE MINUTES during the parade of nations got really, really old.
Read More
Ramblings

Penn State

As a college football fan, I would be remiss if I didn’t at least have some thoughts on the biggest scandal ever to hit college sports. I remember when this first started to surface last year. I was very cautious at the time, as everyone around seemed to be out for a pound of flesh. I generally try to avoid mobs and witch hunts - what I most wanted was to let the investigations play out, and find out who knew what and when they knew it. Because only once we know the facts of a case can we truly sit in judgement. Well, now we know the facts, and it’s worse than I could have ever imagined.

Now, I haven’t read the Freeh report - I really haven’t had time (or desire) to digest a 227 page report detailing the actions of a child molester and the people who enabled him, even after they knew. But the report is probably the single most damning thing ever to land on a college athletic program. It eclipses Kentucky’s point-shaving in the 50s. It eclipses Louisiana-Lafayette’s academic shenanigans in the 70s. And it most definitely eclipses SMU’s “Pony Excess” in the 80s. This is, without a doubt, the worst, most rotten thing I could possibly imagine. I don’t think this would have even been imaginable 15 years ago. And yet, here we are. All of those cases pale in comparison to what happened at Penn State.

As the report details, the problems at Penn State were wider than just the football program. Many, many people, from the President down to janitors, knew what was going on … but nobody said anything. A culture of silence and, more importantly, a reverence for athletics beyond all reason, pervaded everything that happened in State College. Nobody would go against, or risk threatening, the almighty sacred golden calf that was the Penn State football program. For all intents and purposes, Penn State football and Joe Paterno were sacrosanct, and any attempt to confront them would elicit the highest orders of outrage. What happened to those kids was terrible - and the justice system will see to it that those responsible are held to account for their crimes, as will the completely justified lawsuits which are sure to follow. But there are some other points surrounding this whole thing that I think are worthy of pondering here as well.

For the longest time, I held Joe Paterno and Penn State up as the paragon of stability that all athletic programs should strive for. I mean, here was a guy who was head coach for 45 years. In that same time period, Auburn had six coaches and Alabama had eight. In retrospect, I can’t help but wonder if that same stability allowed a culture to flourish that enabled something like this to happen. Is it good for one person to be allowed to accumulate so much power and hold it, unchecked, for so long? Would a few changes in administration have helped deter this situation? I would like to think so and, in truth, it might have. But I think the problem is bigger than Penn State and cuts right to the heart of the worship of college athletics in the United States. This same “athletics can do no wrong” culture can be seen at many major Division I schools. I mean, in my heart I would love to believe that something like this could never happen at Auburn. But I also cannot discount the power that the athletic department holds. The same can be said for Alabama, LSU, Oregon (whose program I think is absolutely rotten to the core on so many levels) and so many other programs.
Can I honestly believe that a janitor who sees something like what that janitor at Penn State saw, and has to decide between his job and reporting it, will do the right thing? And even if they kept their job, would they have to constantly be on the lookout for some crazed “fan” - like the ones we hear every week on Finebaum - doing something insane? That’s the thing about this whole sad situation that I don’t think is getting enough discussion. This scandal is an indictment of the worship of athletics that pervades colleges across the US. Penn State just took that same worship that happens at every Division I program and turned the knob to 11. As a result, a culture of silence allowed a child molester to run rampant for years with the full knowledge of many people, who placed covering up for the name of the Nittany Lions above doing the right thing. This. Has. Got. To. Stop.

The thing that is so damning about all of this is that it’s not the oh-so-loved “lack of institutional control” that we usually hear about when it comes to sports scandals. In this case, the institution was in such complete control of every aspect of Nittany Lion culture that no one would dare go against it. This is unique, uncharted waters for college athletics. Now, I don’t know what the NCAA will do, if anything. Frankly, my opinion of the NCAA is right down there with the UN in terms of being able to do anything useful. But if there’s any justice in the world, the NCAA will drop the hammer on Penn State and end the program. At least for a couple of years. And if the NCAA doesn’t do it, Penn State should, for once, do the right thing and pull the plug themselves. Shut everything down, cool everything off and, in a few years, return with a new focus on what is really important. Because even though all the people responsible are gone, the culture is still in place. You have to change the culture.

Yes, I said it. I’m talking about the Death Penalty. A slap on the wrist - a few scholarships lost, a TV or bowl ban - would be insulting. To do anything less in this situation is to condone the very attitude that allowed Jerry Sandusky to molest children for years. A message needs to be sent, to universities and fans across the nation, that there is a line of acceptable behavior and culture when it comes to college athletics, and that Penn State flew over that line at supersonic speeds. There must be accountability. SMU paid some players. Kentucky shaved some points. But at Penn State, a culture of silence and reverence for athletics enabled a child molester to go unchecked, with full knowledge of the administration, for years. If that’s not worthy of the ultimate penalty, the entire NCAA is a sham and should itself be disbanded.

For the average college football fan, this should be yet another sobering reminder of the dark places that operate at some of our alma maters. For as much as we would like to believe in the purity of sport, this scandal - perhaps the saddest and worst ever - indicates the depths to which evil can spread.
Read More
Reviews

A Year With Drobo: My Review of the Drobo FS

About a year ago, I picked up a Drobo FS. It was something I had been wanting to do for awhile to support my ever growing data needs. In particular, I had three problems I was aiming to solve:

- Data security. In addition to the obvious suspects of photos and home movies, I have a lot of old files and documents I’ve been hanging onto for years now. Papers I wrote in high school and college, some of the first computer code I wrote, etc. How I’ve managed to preserve some of this over the years is a miracle in itself - a lot of it was recovered a few years back when I picked up a 3.5” floppy disk drive and started going through boxes of floppies in my attic. But now that I have it all in one central place, I’d like to secure it.
- Media library. My wife and I own a lot of DVDs, and they take up a lot of space. They’re also not very portable. A while back I started the process of ripping all my DVDs into iTunes so that they could feed to any TV in the house with an AppleTV, essentially creating our own private video on demand system. This was rapidly outpacing the available space.
- Central backup location. I wanted a place where all the Macs in the house could back up to via Time Machine.

After doing a great deal of research, I decided on the Drobo FS. In addition to being able to do all of the above, it had some other nice features that I liked:

- Thin provisioning, meaning you can hot-swap drives in and out while the device is running and not have downtime while it rebuilds the array. Also thanks to thin provisioning, your drives (theoretically - I’ll get to this in a bit) don’t need to be the same size or from the same manufacturer.
- Data protection that purports to examine the health of a drive and move data around to give it the best chance of preservation.

Now, to be sure, you’re trusting a black box. If Drobo fails, there is almost no cheap way to recover that data, as they use non-standard, proprietary technology to accomplish all their voodoo magic. Nonetheless, in this case, it was a tradeoff I was willing to make. So after a year of ownership, how does Drobo stand up in fulfilling these promises? Well, there are a lot of stories here but, overall, it does well, with a few caveats I’ve learned along the way.

When I first ordered the Drobo, I placed an order at the same time for three identical Seagate 2TB drives. I got them installed, got the array up and operational, and got all my data from various places moved over to the Drobo. The first thing I noticed while copying data over to the Drobo was that it was slow. Very slow. Transfer speeds to the Drobo across my gigabit network were in the ~10 megabit range. Upping the frame size to jumbo (9000) improved that a little, but it was still very slow. Not a deal-breaker, as you’re rarely moving that much data around, but it was something I noticed.

Then the real problems started. The Drobo would just randomly vanish from Finder. No reason - one moment it just wouldn’t be there, and you couldn’t even connect to it via IP address, although you could still ping it. I opened a support ticket with Data Robotics, who took me through a troubleshooting procedure that involved directly connecting the Drobo to my Mac via Ethernet. Of course it would work fine when we did that, so I figured it was a problem with my network. But even creating the shortest possible path between my iMac and the Drobo yielded the same results. I opened another ticket, and we went through the same procedure again. This time, however, we let it sit longer.
Sure enough, about 30 seconds after it booted and appeared in Finder, it disappeared from Finder and from their Dashboard tool. I was able to SSH to it and see that the filesystem was now mounted in read-only mode, but we were able to get some diagnostic log files off of it. The tech looked them over and said that the drives were failing. And sure enough, the next day, Drobo reported one of the drives had died and that it was moving data around to protect things. Now, in my entire life, I’ve had two hard drive failures, with one occurring just a couple years ago. So I popped online and ordered another drive (this time, a Western Digital Enterprise 2TB drive). Popped it in the Drobo and it seemed happy, although still slow and occasionally vanishing from Finder. Then, about a month later, I was out working in the yard and boom, I get an email on my iPhone about a second drive failure in the Drobo. So I ordered another Western Digital 2TB drive and put it in. The whole time, by the way, the Drobo remains on and accessible. Pretty cool, actually. And replacing a drive is pretty easy - you just pop the old one out and put the new one in, without even shutting down. Drobo then goes into a “protection” mode where it shuffles data around onto the new drive. But, with two of the three Seagate drives I bought failing within six months, I decided it probably wasn’t wise to continue to trust that last one, since it was probably from the same batch. So I replaced that one as well.

That was about seven months ago, which brings us to today. Between the better drives and several firmware and software upgrades from Data Robotics in the interim, the Drobo is now virtually rock solid and (knock on wood) I haven’t had any further problems. It no longer randomly disappears from Finder or the Drobo Dashboard and, in a very Apple way, it just works. And I also want to say that, throughout the troubleshooting process, the Data Robotics guys were great to work with and wanted to see the problem solved. So overall, after a year and some growing pains, I’m pretty happy with it and wouldn’t hesitate to recommend the Drobo FS, with the following caveats:

- Use quality drives. Don’t buy the cheap drives, and definitely avoid Seagate drives, as the Drobo seems to hate those. When I upgraded to the WD drives, I bought the server-level Enterprise drives. Those have been rock solid. My guess is that Drobo is pretty hard on drives, with lots of reads, writes and seeks.
- Use the same size and manufacturer. Now, one of Drobo’s big selling points is that you can use different size drives and all that. This is one of those cases where what you can do and what you should do are two different things. You can use any size and manufacturer, but I’ve had better success and performance when all my drives are from the same manufacturer and are the same size.
- Be sure your firmware and software are up to date. Kinda goes without saying, but the firmware upgrades for the Drobo have really helped with its stability.
- If you start having problems with your Drobo like I had above, get ready for a drive failure.
Read More
Apple

Restoring a Mac from a Time Machine backup on a Drobo (or other network storage)

Been having some problems with my iMac upstairs. I’m pretty sure the hard drive is failing (again), although hopefully it’s just bad sectors. But, with hard drive prices currently still in the stratosphere, I decided to try one last trick to see if I can save myself some money. That is, the old Windows trick: fdisk, format, reinstall. Or, well, the Mac equivalent: Disk Utility, reinstall. About a year ago, I bought a Drobo. I’ve been meaning to write a review of the Drobo and maybe now I will (the short of it is, I had some growing pains with it, but now that I’ve figured out its quirks, it seems to work well). One of the reasons I bought the Drobo was to use it as a shared Time Machine backup store for all the Macs in the house. So, I thought, in addition to trying to save my Mac, now would be a great time to test my fancy Time Machine backup system. And, unfortunately, since Time Machine really isn’t meant to work with unsupported network volumes, it does require some gymnastics to get it to work. Even worse, it isn’t a very well documented procedure. But, ultimately, I was able to figure it out; I’ll post what I did hoping that maybe it will save someone some time and headache.

1. First, format and reinstall as you normally would. If you are on/installing Lion, you may be presented with an option to reinstall from a backup as part of the install process. Don’t do this. Reinstall Lion as if you were performing a fresh install.
2. When the installation is complete and you get to the Lion post-install setup screens, you will (eventually) reach a screen asking you to create a user account. Create your original user account (same username) as in your backup.
3. Once you’re out of setup, go to System Preferences, then Users. Create a new administrative-level user (I called mine “foo”). Be sure this is an admin-level user.
4. Log out and log into the account you just created.
5. Turn on unsupported Time Machine volumes. Open up a Terminal window and enter:

    defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

6. Now, open up a Finder window and navigate to your Drobo or other device, and to whatever network share you have your Time Machine backups stored on. Mount it.
7. Inside the share, mount the .sparsebundle that is your restore image (it should be the machine name). If you open it, you should be able to see a folder called “Backups.backupdb” in it.
8. Next, fire up Migration Assistant. Select “From another Mac, PC, Time Machine backup, or other disk.” Hit continue.
9. Select “From a Time Machine backup or other disk.” Hit continue.
10. It may take a second, but, eventually, you should see a drive image and the name of your old hard drive (usually “Macintosh HD”) appear. Click continue.
11. It may take awhile for it to parse the image. My backup image was about 350GB, and it took about 20 minutes to parse out all the information. Select what you do or don’t want and click continue.
12. You should be presented with a dialog stating that a username on the system is the same as one in the backup. Select “Replace…” Click continue.
13. Wait. It will take awhile. It took about 5 hours for me to do a complete restore from a backup on the Drobo to my iMac.

And that’s it. Once it’s finished and you reboot, your Mac should be just as it was during the last backup.
Read More
Apple

Mac Developers: Clean Up Your Output!

Over the weekend, I was having some hard drive issues. While I think I fixed the issues, I’ve been keeping a close eye on my console (Console.app) to look for any hints that the issues are more major than those that can be repaired by Disk Utility. However, while watching my console, I noticed something: there are a LOT of spammy Mac apps out there! Most Mac/Objective-C developers are aware of the NSLog() function, which, while in an Xcode environment, outputs data to the Xcode console. It’s usually one of the first things a new developer learns about and it’s very useful for debugging. What many developers may not realize is that NSLog() continues to output data to the system logs even when the app is not being run from within Xcode. As a result, the console fills up with messages that don’t mean a whole lot to people looking at the console. Now, I don’t want to come across as saying you should never use NSLog() outside of Xcode. There are times when outputting debug data to the console is fine. But some of the things I see are people echoing objects into the log or short text strings that are obviously method names. These aren’t helpful to people looking at the console and, arguably, aren’t helpful to a developer once an app is in the wild. Once your app is in the wild, data in the logs should indicate error conditions in your app. NSLog() is fine for debugging in Xcode, but you should be careful to remove them when you’re done. A good question I ask myself before leaving an NSLog() in place is, “if a user filed a support request with this data, would it help me fix their problem?” Most of the time, the answer is no. So before releasing an app, do a quick search in your project for all uses of NSLog() and evaluate whether they are really needed.
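A common approach here - this is a general sketch, not something from the original post, and it assumes a DEBUG preprocessor flag defined only in your debug build configuration - is to route debug logging through a macro that compiles away in release builds:

    // DLog() behaves like NSLog() in debug builds and compiles to nothing in release builds.
    #ifdef DEBUG
    #define DLog(fmt, ...) NSLog((@"%s " fmt), __PRETTY_FUNCTION__, ##__VA_ARGS__)
    #else
    #define DLog(fmt, ...)
    #endif

    // Usage: DLog(@"fetched %lu records", (unsigned long)count);
    // Shows up in the Xcode console while developing, stays out of users' system logs.

That way the chatty, development-only output never ships, and anything left as a plain NSLog() is something you’ve deliberately decided belongs in a user’s logs.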
Read More
Ramblings

Don't be a PHP / JavaScript / Java / Ruby developer - Be a Software Developer

Among the many sites I follow for programming discussion is /r/PHP on reddit. While most of the discussion is more user-based than I would like - things like frameworks, use of PHP-based software packages and the like are usually discussed more often than actual programming - there are occasionally a few gems worth chiming in on. But it never fails that, at least once a week, I see the headline “How do I become a PHP developer,” or “What do I need to know to be a PHP developer?” My answer is simple: don’t. Just stop. Don’t be a “PHP Developer.” Don’t be a “Java Developer.” Don’t be a “Ruby Developer.” In fact, don’t be any kind of developer that depends solely on a single language. Languages come and go. Ten years ago I would bet the majority of web programming was still done in Perl. Fifteen years ago the web was still widely misunderstood and Java was promising that we would only have to write code once to run on any computer. Twenty years ago you found C, FORTRAN and COBOL on mainframes. Every few years a new language comes around and everybody moves to it. Sometimes they stay around, and sometimes they don’t. C has been around for many years and is just as valid now as it was twenty years ago. Even if you’re programming in C++ or Objective-C (both of whose roots go back further than you probably realize), you still need to understand the fundamentals of the C language. Will we still be using Clojure in 20 years? How about Coffeescript? Who knows. Maybe. Maybe not. My point is, don’t chain yourself to a single language. If you do that, you will be forever behind the curve. A good developer should be able to work independent of his/her tools, should be always willing to learn new and exciting things, and should be able to apply lessons learned in past development independent of the language they are working in. A good developer should be able to come up to speed quickly on a new language. And while it is true that every developer will probably always have a preferred language and a language they’re best at, we as developers should always place the craft of software development ahead of specialization in a single language, and we should be willing to use the best tool for the job independent of our linguistic preferences. While PHP is my primary language (and what pays the bills), I am not a PHP developer. I am a software developer who works in PHP among many other languages. It should always be the goal of every developer to remain at the forefront of our craft. That means not chaining ourselves to PHP, Ruby, JavaScript, Java, Scala, Python, or any other language.
Read More
Apple

Disabling Text Zoom in Netbeans

A couple of days ago, I upgraded to the most recent version of Netbeans - 7.1.1. I had been running a 7.1-DEV nightly from back in 2011 and just hadn’t bothered to upgrade yet. The first thing I noticed is that this version of Netbeans introduced a “feature” that allows you to zoom in or out of text. This is accomplished by, on the Mac, holding down the Command key and scrolling on the trackpad. The problem with this is that it is very easy to trigger accidentally - to the point where I was doing it multiple times a day. Even more irritating, there was no indication as to what the zoom level was, or any easy way to revert to normal view. If you trigger it accidentally, you just have to kinda zoom back out until you find a setting somewhat similar to the rest of your tabs. Fortunately, someone on the nbusers mailing list mentioned how to solve this problem, so I want to post it here in case anyone else gets as lost and frustrated as I was.

1. Open the preferences page. On the Mac, you would go to Netbeans Menu -> Preferences.
2. Go to Keymaps.
3. Search for “zoom”.
4. Remove the bindings for “Zoom Text In” and “Zoom Text Out.” Double-click on the Shortcut and hit backspace twice.
Read More
News

New Personal Blog

While this blog will still have my occasional musings about life in dot-com and software development, there’s a lot of other stuff I’d like to talk about that really doesn’t fit under that label. Carpentry? Home improvement? Relationships? I need a place to put a lot of that stuff. Well, my wife and I will be (re-)launching a new blog documenting our life. So if you’re interested in us at a more personal level, feel free to check out the new blog. www.robandsarah.org
Read More
Business

Professionalism and respect: raising the bar for developers (and myself)

This article and the accompanying discussion on Hacker News really got me to thinking tonight. I’m not going to say much about the post itself other than that I agree with Dan’s sentiments. I don’t know who in their right mind would address a guest at a professional conference using the term “sexy.” But it did get me to think a little bit more about professionalism, professional behavior and how it relates to software development. We as developers, and especially those of us in the Internet world, are used to a certain level of what would be traditionally considered non-professional behavior when it comes to the workplace. Most obviously, there’s the dress - T-shirts, jeans or shorts (depending on your climate) and sandals are common dress. Many companies’ offices are outfitted with lots of things you would not find in a traditional office - ping-pong tables, beer kegs, beanbag chairs. It’s all very collegiate. We tend to have very little patience for those who “don’t get it” - every developer has probably at one point labeled a user a PEBKAC. And then there’s the language - I think developers might be second only to sailors in finding creative ways to swear. Essentially, we get to be big kids. It’s a pretty sweet gig! I think a lot of this is because we, as developers, value one thing above all else: the ability to deliver. As I think about it, I can remember working with some brilliant people - and some of them had absolutely no social skills and no idea that some of their behaviors were not just unprofessional, but outright disgusting. If you can ship quality, it doesn’t matter if you wear a suit and tie every day or you wear a threadbare T-shirt and haven’t shaved since Nirvana first hit the radio. To us as fellow developers, what you produce is what matters above all else. As one comment said: The programming world is so used to breaking the norms, revolutionizing industries, and wearing T-shirts and sneakers to work that we forget, sometimes, that some aspects of “professionalism” actually do serve a purpose. While these things may be “okay” in our culture - the culture of dot-com, the culture of software developers - to outsiders, we are baffling, uncouth, at times rude and definitely unprofessional. Now, if you’re working in a startup, you’re probably around only a few other people who are like minded and are part of the culture and won’t think anything of strange behavior as long as you ship. My last job was with a startup that was 4 months old when I joined the team and was still very small. I remember hearing a story about someone in the company who, during a long night of coding in a small office, just got up, took his pants off, sat back down and started coding again. This may be kind of an extreme example, but this general kind of behavior is considered the norm for developers, especially in Internet startups. But, there comes a time when we have to drop - or at least tone down - the unprofessional behavior and actually start taking business seriously. I’m not exactly sure what that point is, but it’s probably about the time that people who are not part of “the culture” become involved. Marketing, sales, business development, management, accounting, and other more traditional business fields are not part of our culture and they don’t get our ways. Once these people become involved, and definitely once/if they outnumber the developers, we must begin to accept the fact that we have to modify our ways a little bit. 
The thing is, we criticize them as being “stiff,” “squares,” “boring,” “demanding,” “not getting it,” and the like. We begrudgingly work on tasks for them, the whole time complaining to our coworkers in our culture about what we have to do for marketing, or accounting or whatever, and how they just don’t see the big picture. But we are unwilling to meet them even halfway when it comes to working in a professional environment. I don’t know if they’re trying to understand us, but are we even trying to understand them?

Over the last couple of weeks, I’ve been trying to raise the bar for myself a little bit when it comes to being professional. No more T-shirts and jeans or taking shoes off. I’ve tried to stick to “business casual” dress, although it’s tended to be a bit closer to the casual side (I still wear sneakers and my shirt is almost always untucked). But I’ve worn collared, button-down shirts and khakis - something that would have been unthinkable a year ago. I’m actually even thinking about wearing a tie occasionally. I’ve been trying to tone down the language and start thinking respectfully about each task regardless of its interest factor.

I guess what I’m trying to get to in my admittedly rambling diatribe is that professionalism starts with respect: respect for ourselves, respect for our craft, respect for our employers, respect for our coworkers whether they are developers or not, and respect for our peers. We need to begin to have more respect for what we do as a craft and profession, and more respect for the people we encounter every day. We should always strive to treat everyone we encounter with the respect they deserve, at the very least as fellow human beings. That means not referring to users that break our software as idiots, and not referring to women presenting at conferences as sexy.
Read More
Apple

Mac Oil Price Widget, Version 2.2 released

Another small update to the Mac Oil Price Widget has been released. This fixes a small bug that caused the negative symbol to remain visible - which isn’t really necessary, because the widget itself says “Up” or “Down.” You can download it over on its page.
Read More
Git

gitcreate

I’ve created a new repository on my GitHub account where I can commit some of the little scripts I’ve written for use on my server. The first one I’ve committed is gitcreate, a small script that automates the creation and bootstrapping of git repositories. I realized that, when I was creating a new repo on my server, I was doing the same things over and over: create the repo, then add in some frameworks for whatever little thing I’m playing with at the time. Well, gitcreate can do all that for you - create the repo and bootstrap in things like the most recent versions of CodeIgniter, jQuery, and Bootstrap. That way, when you clone the repo to start working, you’re already ready to start coding. Like most of my stuff, it’s licensed under the New BSD License.
Read More
Apple

Mac Oil Price Widget, Version 2.1 released

A small update to the Mac Oil Price Widget has been released. This fixes a couple of bugs that caused all prices to be displayed as positive and the percentage of change to be inaccurate. You can download it over on its page.
Read More
Apple

The Right Way to Create an iCloud-enabled Mac App in Xcode

Because I’ve encountered this problem twice, I’m going to do a little write-up about it. As much for me as for the next person who encounters this problem. In a very un-Apple way, this process is very poorly documented and very un-intuitive from a developer standpoint. Everything that’s here, I’ve culled from Googling about aimlessly and finding things on Stack Overflow.

Symptom: You create a new app in Xcode with no changes and launch it. It launches just fine. You then go to the target summary settings, click “Enable Entitlements” and add an iCloud key/value store and/or iCloud containers. Now you launch it and nothing happens. Nothing appears, but Xcode still thinks the app is running.

What’s Happening: To understand what is happening, you have to go have a look in the Console application (note, the actual system Console.app, not the debug console in Xcode). Open that up and select “All Messages”. Look for something that looks like this:

    1/28/12 7:49:03.945 PM taskgated: killed <your app ID>[pid 43838] because its use of the com.apple.developer.ubiquity-container-identifiers entitlement is not allowed

What’s happening is that taskgated is killing your app because it’s not properly signed to use iCloud. And for some reason that is not entirely clear to me, the app being killed is not at all reported back to Xcode - Xcode thinks the app is running. So you just sit there waiting for something to happen, with no clue that this sinister lurking background process has killed your app.

How to fix it: There are two ways you can go from here. The first and easiest: if you are just turning on entitlements and aren’t intending to use iCloud, you can just remove the iCloud key/value store and iCloud containers from the target summary. After doing this, it should work. But, if you are making an iCloud-enabled app, there’s a long list of things you need to do. First, understand that you need to be a paid member of the Apple Developer Program.

1. Log into ADC. Go to the Mac Dev Center, and the Developer Certificate Utility.
2. Create an App ID by going to App IDs and clicking the Create App ID button in the upper right.
3. Enter the name of your app and the bundle identifier. It usually looks something like “com.company.app”. Click Continue.
4. Your app ID should be entered. Click the App ID you just entered, then click “Enable for iCloud.” Click Save.
5. Next, go to Certificates. If you haven’t created any certificates yet, click “Create Certificate” in the upper right and follow the directions. Note, you need both a development and an application certificate.
6. Next, go to Systems. Be sure you’ve added your Mac (and, for good measure, any others you’ll use for development).
7. Finally, go to Profiles. Then:
   1. Click Create Profile in the upper right.
   2. Select “Development Provisioning Profile”.
   3. Give it a name.
   4. Select the app you created in step 3.
   5. Select the certificate you want to use.
   6. Select the systems you want to use (I did all).
   7. Click “Generate”. It may take a few seconds, then it will give you a download.
   8. Open the downloaded profile. It will open in the “Profiles” preference pane (which doesn’t seem to appear until you try to install a profile). Click Install.

Now, in Xcode:

1. Go to Window > Organizer.
2. Select “Devices” on the top, and “Provisioning Profiles” on the left.
3. At the bottom, select “Automatic Device Provisioning” and click “Refresh”. If you’ve never done this before, you’ll need to log in with your ADC username and password.
4. Give it a second; it should pull in your profiles.
5. Go to your project, select your app target and select “Build Settings.” Scroll down to “Code Signing.” You may need to switch from “Basic” to “All” in the predicate selector.
6. Under Code Signing Identity, select the dev profile you just created. Note: don’t use the wildcard one - it doesn’t seem to work.

Whew. Now, if everything went as planned (and you sacrificed a goat to Tim Cook and Tim found your sacrifice pleasing), you should be able to launch your app with no errors.

But help! I got a weird failure on build! If you get a failure on build that looks like this:

    Command /usr/bin/codesign failed with exit code 1

Then it is possible that your developer certificate is set to “Always Trust” in Keychain. It needs to be set to “System defaults” for reasons that escape me entirely. Note, this may not be entirely accurate and may even be cargo-cultish. But I’ve encountered this “issue” twice now (once in December, and once now), so I decided to write down my steps so that, in a few months when this befuddles me again, I’ll know where to look for the answer.
Read More
Ramblings

What An Awesome Future We Live In

Sometimes it’s easy to forget what an amazing modern world we live in. Even if I think back just 10 years ago, it blows my mind how much has changed. Just in technology, even. In 2002:

- Nokia was the largest cellphone manufacturer. Their top selling model that year was the Nokia 6100. I actually had one of these as a loaner phone once. At the time I was carrying a more modest model - a Qualcomm QCP-2700, complete with green screen.
- Tablets as we know them today didn’t exist. Oh sure, there were primitive early tablets - Palm Pilots and the Newton come to mind. But they had as much in common with today’s tablets as a horse does with a Ferrari.
- HP was the leading computer manufacturer that year, following their purchase of Compaq. The same HP that almost sold its computer division late last year.
- Facebook and Twitter didn’t exist, and the best site on the web for tech news was still Slashdot. Wikipedia had just opened the year before and was still seriously lacking content.
- Mac OS X 10.1 was released that year, and I spent all summer lusting over the Titanium Powerbook G4, with its PowerPC processor running at a blazing 800 megahertz and a huge 40GB drive.
- If you wanted to read a book, you bought a paper book. e-Book readers, while they existed, were clunky and difficult to use, and titles were mostly restricted to technical publications. Nothing like the Kindle, Nook, iPad and other readers.
- Using the Internet on a mobile device, if it was available at all, was extremely limited. Remember WAP? I remember being amazed in college that I could use my phone to check the scores of other games while I was at an Auburn game.
- Wanted to find your way around? You had a map or directions. GPSs as we know them today didn’t exist, and certainly weren’t integrated into phones.

Contrast that to today. The phone in my pocket is more powerful, and has more storage, than that laptop I spent a whole summer lusting over, and can be used to surf the web just as well as any computer. The tablet I carry with me has access to a whole library of books, can connect wirelessly to the Internet almost anywhere, and can be held with a single hand. If I ever get lost, I can pull up a map on my phone that pinpoints my location to within a few yards, and can give me turn-by-turn voice directions to get where I’m going. Facebook and Twitter connect millions of people together. I can even connect to the Internet on my laptop in an airplane at 35,000 feet! Downstairs, I have a 60” widescreen TV that’s 1.5” thick and weighs so little that I could mount it on the wall.

Every time I hear people complaining about how things suck, I’m reminded of this video. Because everything really is amazing right now. We are living in an amazing futuristic world full of fascinating advancements that are happening all the time. And what is most amazing of all is how quickly we got here. The worlds of tech now and 10 years ago are so different. What will the world of 10 years from now be like?
Read More
Apple

Mac Oil Price Widget, Version 2.0 released

After a far longer wait than was intended, the Mac Oil Price Widget version 2.0 has been released. It was completely rewritten – like, I didn’t even look at the old code – and uses Bloomberg Energy as its information source. The display was also simplified – I really didn’t care about the chart in the old version, so the new version prominently displays the price and how much it’s changed. Download Here!
Read More
Ramblings

Personal Initiatives

About ten years ago (summer of 2002), while I was working in Yellowstone National Park, I took a lot of time for personal reflection. The rocks beside the Snake River and the roof of the cabin where I lived became close companions of mine. I took a lot of time to examine where my life was at that point, and there were a lot of things that I didn’t like. Towards the end of the summer, based on my reflections, I started writing a short series of notes to myself. I titled these “Personal Initiatives” and set out what I wanted to change and how I was going to go about doing it. There were probably 50 or so entries. Some of these were fairly arcane and maybe even silly. Among them:

- Get rid of my acne by washing my face twice a day.
- Wear contacts any time I’m not at home.
- Take better care of my teeth.
- Get in better shape.
- Pursue financial independence and keep a budget.
- Get better grades and get at least a 3.0 from that point out.

After I returned to Auburn that fall, I looked over my Personal Initiatives from time to time. And it occurs to me what a good motivation this was for me. As evidenced, I achieved the near-term goals of many of my initiatives within the next three years. I never earned less than a 3.0 after that fall. I was financially independent in 2004. I’m in better shape now than I was. Not only that, but my plans gave me goals. Even the arcane ones (“wash your face every day”) gave me little things that I could do to feel like I had accomplished something every day. Not every goal had to be in outer space - I could accomplish five things just by walking out the door each morning. Of course, some of them I completely blew too. There were a lot of entries about future planning that involved me becoming a pilot. Some other entries concern wanting to have a family (not there just yet…). But overall, I would say my success rate for my personal initiatives from 2002 to today is probably close to 75%.

The reason I’m thinking about this is that I feel a bit like I did in the summer of 2002. Lost. Listless. Unsure of what I want in my life, but unhappy with where I am. And without a plan. Every day I get up and go to the same job and do the same things I’ve done for the last five years. Then I go home and do the same thing each night. The cycle usually never varies. Now, to be sure, my life is much better than it was in 2002. I’m married, a homeowner, active in my community. But that same creeping, nagging unhappiness is still there. Unfortunately, I don’t have the luxury of taking an entire summer off to work and reflect on my life. But I’m seriously thinking that it might be time to write down some more personal initiatives. Having passed 30 now, I can’t help but feel that I’ve entered a new stage of my life and, if I don’t want to spend this entire decade listless and unhappy, that I have to begin to plan some things out and set some goals for myself. Yes. I think it’s time for some more Personal Initiatives.
Read More
dystill

dystill 0.2.1 released

Just a little announcement about a maintenance release to dystill. 0.2.1 has been released, which brings with it a couple of bugfixes for issues I ran into recently. First, it will now optionally try to create new maildirs when they don’t exist (this is configurable in the config file). There’s also some more error checking to hopefully prevent crazy behavior. As always, the source is on github.
Read More
PHP

PHP, methods, functions, and the global scope

It’s funny. Even after nearly 10 years with the language, there are still little gotchas that sometimes get me. I ran across one today. Say you have two objects, and they look like this:

    <?php
    class ObjA {
        public static function test() {
            global $test;
            var_dump($test);
        }
    }

    class ObjB {
        public static function test() {
            $test = 1;
            ObjA::test();
        }
    }

    ObjB::test();
    ?>

It doesn’t work. You get NULL. Say I were to do something like this:

    <?php
    class ObjB {
        public static function test() {
            $test = 1;
            global $test;
            var_dump($test);
        }
    }

    ObjB::test();
    ?>

You also get NULL. And this:

    <?php
    function a() {
        $test = 1;
        b();
    }

    function b() {
        global $test;
        var_dump($test);
    }

    a();
    ?>

Also fails. The reason is that the global scope in PHP is just that: global. Any time you’re in a function or method, you’re in a local scope, and all local scopes are independent of each other. So you can’t global in something from one local scope to another. Variables are either global or local. That much I get and makes sense (and is in the documentation). What threw me for a loop was that PHP won’t copy an already-defined local variable into the global scope, and will happily overwrite your local variable with a NULL value if one doesn’t exist in the global scope, in the process of creating the variable there. If you want a variable in the local scope to be global, you have to declare it as global before you write a value to it. Or, to put it another way:

    <?php
    class ObjA {
        public static function test() {
            global $test;
            var_dump($test);
        }
    }

    class ObjB {
        public static function test() {
            global $test;
            $test = 1;
            ObjA::test();
        }
    }

    ObjB::test();
    ?>

Works beautifully.
Read More
Apple

App Store Entitlements, and the Crippling of an App

A few months ago, I decided I wanted to try exploring the Mac App Store ecosystem as a developer. I’ve been writing little Objective-C apps for myself for a while, and I decided I wanted to see what it was like from the other side. So I wrote this little app called Airplane Setting. It was a stupid simple little app that made it easy to turn off your radios with a single action. I wrote the app and paid my $99 admission fee. And after a month of back and forth with Apple and a couple of rejections for what I consider to be dubious reasons at best (especially seeing as how I could point out existing apps in the store that broke the “rule” they said my app was breaking, but whatever, their store, their rules…), my little app was finally approved for sale. It did moderately well, passing 1,000 downloads with virtually no advertising from me. I had big dreams for this little app. Plugins, global hotkey support, localization, AppleScript support, and more potential functionality. But all that was dashed by “Entitlements” and Apple’s requirement that all apps must be sandboxed. Look, in theory, the idea of sandboxing an app is not bad. The problem here is Apple’s all-or-nothing approach to sandboxing. The selection of entitlements is just so limited as to be nearly useless for anyone creating a unique, new or complex app - especially one that requires hardware access. Your choice is either to sandbox your app, choosing from the available selection of entitlements, or not sandbox it and not be in the Mac App Store at all starting in March. There’s no reason to only provide such a limited subset of functionality that a developer must choose from. Would it not be better to provide us a wider set of entitlements and allow us to justify our reasons for needing them when we submit our app? The reason Apple gives for requiring sandboxing is to prevent “rogue apps” from destabilizing the system. But when you consider that the App Store itself is curated, this requirement makes even less sense. If Apple is curating the store, how does a “rogue app” end up in the App Store? I’m a huge Apple fanboy. I have almost all Apple hardware in my house, from my iMac to my Macbook Pros, to my iPad and iPhone and my wife’s iPod Touch. I had AppleTVs before they were cool (and there’s one on every one of my TVs now). I love Apple. But as a developer … I [expletive] hate Apple for this “innovation” that crippled my once-promising little app. So, at this point, my options are: Leave Airplane Setting in the App Store. Doing so will mean no further updates, so I’ll likely cease development. Or remove Airplane Setting from the App Store and start distributing it exclusively from the website. My original intent with Airplane Setting was to explore what it was like to be an App Store developer. I guess … now I know what it’s like to be an App Store developer, living in constant fear of Apple as a sword of Damocles hanging over your head.
Read More

2011

News

Goodbye GoDaddy

Using GoDaddy as my registrar is one of those things I’ve always felt vaguely ashamed of. Something I knew all the “cool kids” didn’t do, but I was already so neck-deep in them that I didn’t want to transfer. Not to mention I had my DNS hosted with them as well, so the thought of going through all that trouble to move just seemed like too much of a hassle to deal with without good reason. In my last entry, I talked about setting up your own DNS server. This was the first part of my attack on moving my domains away from GoDaddy. But I didn’t have a real timeline to move away from them. Then came the news of GoDaddy’s support for SOPA - one of the worst attacks on the Internet since 1996’s Communications Decency Act. Now, to be sure, GoDaddy’s position on SOPA was not the first thing they’ve done to anger me. Their overtly misogynistic advertising has always bothered me, and their CEO Bob Parsons’ elephant killing and shameless exploitation of the natives angered me so badly that I almost left in April. But their aggressive support of SOPA was the final straw for me. I’d been a customer since 2003, but I simply could not take it anymore. So over the course of about 4 days, I transferred all my domains to Namecheap. Having never transferred a domain before, I found the process surprisingly quick and easy. Once again, it makes me wonder why I hadn’t done it sooner.
Read More
Linux

The Stupid Simple Guide to Setting Up Your Own DNS Server

I’m a developer, first and foremost. I like writing code. To me, maintaining servers, configuring things, troubleshooting network issues and the like - these are things I do to support my primary interest and job as a developer. I’m not ignorant of these things, but all things considered they’re not my favorite things to do. One thing I will admit I’ve been ignorant of over the years is DNS. Oh sure, I know at a high level how it works. I even know a bit about the different record types. I knew enough to have my own domain name, configured using GoDaddy’s DNS servers to point to my server. But actually running my own name server? Something I’ve never done and, for some reason, had this unnatural fear of. Well, no more. I’m now running my very own shiny new name server and, actually, it wasn’t really as difficult as I thought. And because this was a learning experience for me, I figured I’d walk you through what I did as well.

Picking a Server

There are two big players in the “DNS server software” space: BIND and djbdns. BIND is the 900-pound gorilla that has been around forever and ever, and is insanely difficult to configure. djbdns is from the same guy who wrote qmail - I’ll let you be the judge of that. But after researching and actually attempting to install both of these, I eventually gave up. Both just came across as being too complex for a simple name server handling a couple of domains, and the documentation for both was equally complex. That’s when someone on Twitter pointed me to MaraDNS. I looked it over and was surprised to find good, readable and simple documentation that made it look easy to install. So I decided to give it a whirl. Here’s what I did. Note that this install is for a Gentoo system. Yours will be different if you’re using something else.

Installing and Configuring MaraDNS

First step is to install it:

emerge maradns

And let Portage do its thing. Once it’s installed, you really only have to worry about a few files. In /etc/mararc, you need to check to be sure you’re binding to the right interfaces. In my config, I bound it to the loopback and to the main interface:

ipv4_bind_addresses = "x.x.x.x, 127.0.0.1"

After that, you tell it to be authoritative, and what domains you are wanting to serve records for:

csv2 = {}
csv2["robpeck.com."] = "zones/robpeck.com"

Note the period at the end of the domain name - it’s important. Each entry in the csv2 array should map to a zone file. I put mine in the “zones” subdirectory (which, in Gentoo, lives under /etc/maradns):

mkdir -p /etc/maradns/zones

Then, with your favorite editor (which should be vi :P), you create your zone file. The one for robpeck.com (partially) looks like this:

robpeck.com. NS ns1.epsilonthree.com.
robpeck.com. NS ns2.epsilonthree.com.
robpeck.com. +3600 A x.x.x.x
robpeck.com. +3600 MX 0 robpeck.com.
www.robpeck.com. +3600 CNAME robpeck.com.

So what are we doing here? Well, here it helps to know something about the different types of DNS records. I’m not going to cover all the different types of records - this is a good list of common ones and Wikipedia has a full list. The important ones you need to know are NS (name server), A (the main record), MX (mail server records), and CNAME (alias). The “+3600” is setting a timeout on the records to one hour (3,600 seconds). By default, the server will send one day (86,400 seconds).
Here, I’m telling the server what the name servers are (strictly speaking, this isn’t required, but I added it all the same) and that the main address for people requesting “robpeck.com” is this IP address. I’m also saying that people who request “www.robpeck.com” should get the IP address for “robpeck.com.” I also add an MX record that points to robpeck.com with 0 as the priority (the first (and only) server). That’s it! Restart MaraDNS:

/etc/init.d/maradns restart

And you can test it out:

dig @localhost robpeck.com A

You should get a big long printout, but what you want to see is these two lines:

QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
robpeck.com. 3600 IN A x.x.x.x

Assuming the above is the correct address, congratulations, your DNS server is now resolving properly locally.

Delegating your Domain

The next step is delegating your domain to your own server. I’m not going to cover this in too much detail because how it happens depends on the registrar. In general, this is a two-step process: 1. Register your name server’s IP address to a name. At NameCheap, when you’re in the domain screen this is done under Advanced Options > Nameserver Registration. Under GoDaddy, this is under the “Hosts” section of the domain information screen. You need to add at least two “nsX.domain.com” entries, but they can both point to the same IP. 2. Delegate your domain to the names you just created. At NameCheap, you would go to General > Domain Name Server Setup, and Specify Custom DNS Servers. Then, enter the two (or more) names you just created (“nsX.domain.com”). I can’t remember how I did this in GoDaddy, but I remember it was pretty apparent. That’s it! They say it takes 24-48 hours, but I started seeing requests hit the new name server within about an hour. Of course, since I wasn’t actually changing IP addresses, there was no real downtime. As of now, all my domains are being served off my own nameserver. It’s kind of a neat feeling of accomplishment, knowing you’re not relying on someone else’s DNS setup - they’re just providing you a name. This makes domain transferring much easier and adding new records much easier. And seeing as how I’m currently in the process of transferring all my domains away from GoDaddy, this will ease the transition.
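And if you’d rather script that last verification step, PHP (fittingly for this blog) has a built-in resolver function. A small sketch of mine, with one caveat: dns_get_record() uses the system resolver rather than letting you point at a specific server the way dig @localhost does, so run it from a host whose resolver can already reach your new name server:

<?php
// Roughly equivalent to `dig robpeck.com A`.
$records = dns_get_record('robpeck.com', DNS_A);

foreach ($records as $record) {
    // Each A record comes back with host, ttl and ip keys.
    printf("%s %d IN A %s\n", $record['host'], $record['ttl'], $record['ip']);
}
?>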
Read More
Apple

Goodbye, Eclipse.

Dear Eclipse, We’ve known each other a long time, haven’t we? I remember when we first met. It was way back in 2005, two jobs ago when I was working at interactive Point of View. I was still a young, naive kid, just out of college. At the time I was just getting my start writing serious PHP code, and you were a breath of fresh air compared to what I had been using before (Dreamweaver). You seduced me with your awesome power and functionality. I used to love being able to have code on top and a browser window underneath. Ironically enough, one of my favorite features would eventually be something I couldn’t care the slightest about. Later that year I would move on to Asteria, and I took you with me. This was the first time I had two monitors on my desk, and I kept Eclipse in one, and a browser in the other while programming. Again, your raw power made complex tasks easy. I discovered Subversion integration, which made Tortoise (I was still on Windows at the time) irrelevant to me. Your Subversion tools turned me into a huge fan. When I moved jobs again, to dealnews, I again took you with me. Much to the chagrin of my coworkers, I preached the gospel of Eclipse. When I first started I was still in the Windows environment and my setup was much like it was at Asteria. Later that year when I switched to Mac, I again took you with me. You occupied a place of honor in my dock. We upgraded together. Through Callisto, Europa, Ganymede, Galileo, Helios and Indigo. We upgraded through Leopard, Snow Leopard and Lion together. Sure, we had our occasional disagreements and outright fights. I remember one time when you would absolutely choke on the size of dealnews’ code tree. I would try other editors and IDEs. I tried jEdit, Coda and TextMate. But I always came back to you. But all things change, and this time I think we’re finally through together. The first sign you were no longer interested in me was the dropping of the official PHP build - the one I had been using for years. But you knew I was worried - you even said so on your website and pointed me to PDT - PHP Development Tools. This aphrodisiac, you told me, would make our relationship just like we were kids again. But what you didn’t tell me was that PDT would make you crazy and unstable in the worst kind of way. Your behavior has become increasingly erratic whenever you take PDT. You developed bugs, including ones that I could no longer justify. Ones that were literally costing me time every day. You said PDT could auto-complete code and when it does it works great. But when it doesn’t, the display glitches up the file so badly that the only way to get back into a usable state is to close the file and reopen. Now imagine doing this four or five times for every file you’re editing, every time you try to auto-complete some HTML. Your ill tempered behavior is costing me time and money. I tried to talk with you about it, but all you could say was NullPointerException. So, I’ve thought a lot about this. It’s been a good six year run, but I think it’s time we ended our relationship together. The truth is that I know about your other boyfriend, too. I know his name is Android, and I know you guys have been spending a lot of time together. And I’m okay with it. Really. All things change and we all have to adapt. The truth is I’ve been fooling around some with your cousin Netbeans, and I think we’re really hitting it off. In many ways, she reminds me of you. 
The difference is, Netbeans has herself together, is trying hard to improve herself and hasn’t forgotten who her friends are, instead of getting strung out on PDT and spending all her time hanging out in the backseat of Android’s Pinto. So goodbye, Eclipse. What we had was wonderful while it lasted and I’ll always treasure our time together and the memories we made. I hope your new life works out. Maybe we’ll see each other from time to time, but honestly I don’t think that would be fair to Netbeans. She’s my new IDE now. -Rob Peck Eclipse User, 2005-2011
Read More
Ramblings

On the death of Moammar Gadhafi

Revolutions are a dirty business in every country they occur in. Libya is no exception. Moammar Gadhafi was a terrible human being. He was a thorn in the side of 5 US Presidents. He actually did pursue a WMD program, only giving it up in 2006. Under him, Libya was a state sponsor of terrorism and was behind the destruction of Pan Am flight 103. At home, he brutally repressed his population and ruled with an iron fist. In short, he was not a nice person. According to a report on the BBC that I was listening to on the drive home, he was found hiding in a drain pipe in Sirte. He begged for his life, and was shot in the abdomen and again in the leg. Supposedly he “died” en route to a hospital, but at least one report mentioned an execution-style shot to the head. Now, is this what happened? There’s no way to know, and I suspect we never will for sure. And yet, I find myself having some difficulty taking any “joy” in his death, especially a death such as that. No man who begs for his life should be killed, especially without due process. I would much rather have seen him turned over to ICJ authorities and tried for crimes against humanity. And yet, it may have been the best possible outcome. Libya is not a stable country at this time and probably lacks the facilities to hold Gadhafi securely. There would be months of negotiations that would have to happen before he even got to a trial. All that time, his continued defiant existence would continue to empower his dwindling base. With him out of the picture, the rebels / new government should have little problem establishing itself as the new legitimate government of Libya, thus drawing a close to this whole thing even faster. Revolutions are a dirty business.
Read More
Ramblings

RIP Dennis Ritchie

Unlike Steve Jobs, unless you’re in the tech industry, there’s a pretty fair chance you’ve never heard of Dennis Ritchie.
Read More
Ramblings

RIP Steve Jobs

Normally, I’m not one to be too taken with the death of a “celebrity” … … but this one hurts. We truly lost a titan of our generation. A man who became synonymous with the company he founded, and whose products made life more awesome for millions of people. As I look around my house, pretty much every room has some touch of Apple, and all of that thanks to Steve. Rest in Peace, Steve Jobs. The world was enriched by your presence and is saddened with your loss. Thank you for everything you did.
Read More
Apple

Mac Oil Price Widget Redux

I’m aware that the Mac oil price widget has quit functioning, and I’m aware of the cause as well. I’m working towards a more robust solution and should have something in the next week or so.
Read More
Ramblings

The end of the Shuttle Program

So with the landing of Atlantis, the end of the Space Shuttle program has finally come. And while it is bittersweet - I remember being able to watch the shuttle go up from my backyard (literally!) when we lived in Florida - you have to excuse me for slaughtering the sacred Huntsville cow. The retirement of the Space Shuttle is long overdue.
Read More
Apple

Mac Oil Price Widget

Because there doesn’t seem to be a good, simple way to track oil prices on the Mac dashboard anymore since the previous widget I used quit working, I whipped up a quick little widget that allows me to monitor the price of Crude Oil on the New York Mercantile Exchange. You can download it over on its own page.
Read More
PHP

PHP Filtering: Validation, Sanitizing and Flags

PHP’s filter functions are really, really great. I’ve started using them almost any time I need to get input from a user and, personally, I don’t think you should use the old $_GET and $_POST superglobals unless you know what you are doing and there is some specific thing you’re trying to accomplish that you can’t with filter. Filter forces you to think carefully about what inputs your script takes and what format it takes them in. But there are also some behaviors of filter that can bite you in the rear if you aren’t really careful. One of these is knowing the difference between validation and sanitizing, when the right time to use each is, and which flags you need to pass. I ran into a good example of this today where I messed it up. I had configured filter_input_array to pull in a variable as FILTER_VALIDATE_FLOAT, probably because I wasn’t thinking like a user and instead was thinking like a developer. I’m the type of person who, when a form asks for my phone number, enters only the 10 digits without parentheses or dashes. But users are different. They like friendly things. In this case, the user was entering “16,473.54” and the like into that box. Now, I can look at that and say, “yeah, that’s a float” (actually, it’s currency). It should be considered a valid value. But FILTER_VALIDATE_FLOAT will throw this out because it has a comma in it, unless you pass FILTER_FLAG_ALLOW_THOUSAND. Then, and only then, does it return the above as a valid value (in this case “16473.54”). But I looked at the code again. In this case, the value doesn’t need to be there except in a specific case, which I handled in error checking in the code, so I switched it to a sanitize filter instead. It’s probably a good idea to only use FILTER_VALIDATE_* filters when your user has to give you a valid value and your script would fail if that wasn’t the case. If a validation returns false, you should fail the process and return a (nice) error message to the user. Sanitize filters allow you to accept a little wider range of data and still return a valid value from it. The docs have a great example of this involving email addresses. So if you’re writing PHP these days, definitely use filter. Just be careful and mind the flags and the difference between validation and sanitizing.
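To make the validate-versus-sanitize distinction concrete, here’s a minimal sketch of mine (the variable is hypothetical, but the filter constants are the real ones discussed above):

<?php
$input = "16,473.54"; // what the user actually typed

// Validation is strict by default: the comma makes this return false.
var_dump(filter_var($input, FILTER_VALIDATE_FLOAT));

// With FILTER_FLAG_ALLOW_THOUSAND, the same input validates as float(16473.54).
var_dump(filter_var($input, FILTER_VALIDATE_FLOAT, array('flags' => FILTER_FLAG_ALLOW_THOUSAND)));

// Sanitizing strips anything that can't be part of a float and returns
// what's left ("16473.54") instead of rejecting the value outright.
var_dump(filter_var($input, FILTER_SANITIZE_NUMBER_FLOAT, FILTER_FLAG_ALLOW_FRACTION));
?>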
Read More
dystill

A new home for dystill and a Roundcube plugin!

I’ve finally put together a website for dystill: www.dystill.org I’ll continue to post updates about this project here, though. I’ve also finished working on a plugin for Roundcube (the popular open-source webmail client). It can be found for download at the address above.
Read More
dystill

dystill 0.2 released

Version (do those really matter anymore? :P) 0.2 of dystill has been released. This version brings a significant change to dystill. Namely, it breaks the unofficial association between dystill and Postfix that has existed since I first wrote it last year. I did this for a couple of reasons: To hopefully increase adoption. Dystill now (really!) stands independent of any MTA. Use it with whatever you want (sendmail, Qmail, etc). You actually always could, but you’d have to ape some Postfix tables. You don’t have to do that anymore. To make it easier to write web-based front-ends to dystill’s MySQL database, enabling users to add rules. This was done by adding an “email” column to the filters table, updating that field with the recipient address, and dropping the old user_id field. Also, a “maildir_path” config variable was added to the config, specifying where the maildirs live. There was also a minor bugfix I came across the other day where certain uncommon (but legal) characters could result in unreadable maildirs.
Read More
Linux

Do Version Numbers Matter?

The recent announcement by Linus Torvalds that the next release of Linux will be 3.0 has provoked rather furious discussion around the Internets about whether or not the incrementing of the version number is warranted. Linus himself has said that “absolutely nothing” has changed. “It will get released close enough to the 20-year mark, which is excuse enough for me, although honestly, the real reason is just that I can no longer comfortably count as high as 40.” This got me to thinking about the nature of version numbers. Once upon a time (when versions were driven more by engineers and convention, and less by marketing), a version number meant something. Major, minor, revision. A major new release that modified significant portions of the code from the previous release incremented the major version number. Version numbers less than 1.0 were beta releases. Linux has been at 2.x since 1996, and at 2.6.x since 2003. Mac OS has been at 10.x since 2001 (even though the current version of OS X is significantly different from the original release in 2001). Meanwhile, Google Chrome has blasted through 11 major “versions” in three years. Mozilla is planning to release versions 5, 6, and 7 of Firefox this year. You can’t tell me that they are going to change major parts of Firefox three times this year. In this case, version numbers are purely being driven by marketing. They need to “catch up” to Chrome and Internet Explorer. But we live in a different world now. One where, arguably, version numbers are becoming less and less important. The growth of “app stores,” I think, is desensitizing your average user to a version number. While apps in the app store still have versions, I couldn’t tell you what “version” any of the apps on my iPhone are (other than the OS), and I bet you can’t either. For any of the apps I’ve installed from the Mac App Store, I could not tell you the version either. I just know that, when I see the number on the icon, I know I need to do updates. The updates happen, and I get a new version with whatever new features are there (or, in the case of the Twitter app, whatever features have been removed). Then there are web apps, which are versionless. What version of Gmail do you use? You don’t. You use Gmail. Sure, there’s probably a revision number or something in the background, but the user has no clue what version they’re using. And they don’t need to, because there’s no action they need to take. So versions are numbered in a wide variety of ways depending on the product, and overall they seem to be becoming less important as the growth of broadband, “app stores,” web apps, and automatic updates makes thinking about version numbers less important. So why does it matter if Linus ups Linux to 3.0? Ultimately, it’s just a number.
Read More
MySQL

MySQL mathematical operations and NULL values

So I came across an interesting quirk in MySQL the other day. Let’s say you have a table schema and some values that look like this:

+---------+------------------+------+-----+---------+-------+
| Field   | Type             | Null | Key | Default | Extra |
+---------+------------------+------+-----+---------+-------+
| page_id | varchar(30)      | YES  |     | NULL    |       |
| clicks  | int(10) unsigned | YES  |     | NULL    |       |
+---------+------------------+------+-----+---------+-------+

+---------+--------+
| page_id | clicks |
+---------+--------+
| 1       | NULL   |
+---------+--------+

And then let’s say you pass the following SQL statement to MySQL:

update page_click_count set clicks = clicks + 1 where page_id=1;

If you come from a loosely-typed language such as PHP, you would probably expect clicks for page_id 1 to now be 1. But that’s not the case in MySQL. After the query is run, the table will still look like this:

+---------+--------+
| page_id | clicks |
+---------+--------+
| 1       | NULL   |
+---------+--------+

Not only does the query fail, but it fails with no warnings given. It appears that mathematical operations on NULL values silently fail. There are a couple of ways around this. The first and most obvious is to set NOT NULL and a default value on the column. In the example above, this would work. The NULL value in that field becomes a 0 and you can do normal mathematical operations on it. But what happens if, for whatever reason, you can’t do that? We actually have this situation in a few places at dealnews, where NULL represents a distinct value of that field that is different from 0. In this case, you can use COALESCE() to fill in the appropriate value for the field:

update page_click_count set clicks = coalesce(clicks, 0) + 1 where page_id=1;

Edit: Brian Moon informs me that this is actually part of the SQL specification. So hooray for specifications. Still, it’s kind of arcane; in working with MySQL (and PHP) for a decade now, this is the first time I’ve ever actually encountered this. Hopefully this helps someone who was as confused as I was.
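If you want to see the same thing from the PHP side, here’s a quick sketch using PDO; the DSN and credentials are placeholders, and it assumes the page_click_count table above:

<?php
// Placeholders - point these at your own server and database.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Runs "successfully" but is a silent no-op: NULL + 1 is still NULL.
$pdo->exec("update page_click_count set clicks = clicks + 1 where page_id = 1");

// COALESCE() substitutes 0 for NULL, so this increment actually sticks.
$pdo->exec("update page_click_count set clicks = coalesce(clicks, 0) + 1 where page_id = 1");

$clicks = $pdo->query("select clicks from page_click_count where page_id = 1")->fetchColumn();
var_dump($clicks); // "1" - only the COALESCE() version had any effect
?>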
Read More
Ramblings

April Showers

I’m looking out my window right now at my neighbors. They have their kitchen lights on. Inside my house we have light - plenty enough light to light the whole room. And yet I still find it difficult to wrap my head around what just happened.
Read More
PHP

Interview Questions for Programmers

Over the years, I’ve seen a number of blog posts relating to common questions that should be asked of programmers. Obviously, this is going to depend on exactly what position you are hiring for, but there are some good “gateway” questions that can be used to determine whether or not an applicant you are interviewing can … well … even program at all. If they even have the mindset that makes a good developer. A common one I’ve seen tossed around is Fizz Buzz. The challenge goes something like this: Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. Now, to anyone who even has a basic understanding of programming, this is super simple to solve using a modulus operator. But apparently many people applying for even entry-level development jobs cannot solve this problem. According to the article linked above, even one “senior developer” took 15 minutes to solve this problem. Earlier today, a friend posted something on Facebook that inspired what I think is another good, intermediate-to-difficult-level programming question that also tests pattern recognition. The relevant part of the post began by stating: “This year July has 5 Fridays 5 Saturdays and 5 Sundays.” There is the question! It would go something like this: The month of July 2011 has 5 Fridays, 5 Saturdays and 5 Sundays. Calculate the next 50 times there will be a month that has 5 Fridays, 5 Saturdays and 5 Sundays. Whoa, so how do you go about solving this problem? Well, look at a picture of July 2011. Notice something interesting about this month in relation to the question? This month has 31 days (the most any month can have), begins on a Friday and ends on a Sunday. And that’s the solution! It’s any month with 31 days that begins on a Friday! With this in mind, it’s pretty easy to come up with a PHP solution:

<?php
$count = 0;
$num_found = array();

while(count($num_found) < 50) {
    $count++;
    $ts = strtotime("$count months");
    if(date("t", $ts) == 31 && date("N", strtotime(date("Y-m-01", $ts))) == 5) {
        $num_found[] = date("F Y", $ts);
    }
}

print_r($num_found);
?>

Note that I make use of PHP’s strtotime function, because it is the Swiss Army Knife of date manipulation in PHP. This would need to be adapted for use in another language. So now tell me: what are some other questions you’ve been asked or asked in an interview?
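For completeness - this is my sketch, not something from the original post - the modulus-operator solution to Fizz Buzz really is as short as advertised:

<?php
for ($i = 1; $i <= 100; $i++) {
    if ($i % 15 == 0) {
        // Multiples of both three and five.
        echo "FizzBuzz\n";
    } elseif ($i % 3 == 0) {
        echo "Fizz\n";
    } elseif ($i % 5 == 0) {
        echo "Buzz\n";
    } else {
        echo $i, "\n";
    }
}
?>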
Read More
Apple

Xcode 4

So today, out of nowhere, Xcode 4 finally landed as an official release. After seemingly forever in beta, and me quipping more than once about its similarity to Duke Nukem Forever, Apple finally pulled the trigger and released it. But something changed. Xcode now has a price. And that has left me, as both a Mac user and a Mac developer, with a lot of questions. It’s either $4.99 if you’re not a registered, paid Apple developer, or free if you are a registered, paid Apple developer (with all its $99-per-year price tag glory). Supposedly there’s some crazy accounting reason that they have to charge for it. This, of course, leaves open the possibility that Xcode will soon be free again once OS X 10.7 arrives. But it also leaves open the possibility that Xcode will no longer be distributed with OS X and will always have a price tag. It may not even stay $4.99. It may be $49.99 or $499.99. There are additional questions, too. Does this mean that Apple is still distributing Xcode as a bundle with GNU GCC? Because there are things (such as MacPorts) that rely on the underlying foundation provided by the developer bundle that don’t actually use Xcode. Before, those were completely free. Now, they cost $4.99, unless they have split the underlying compiler from the IDE. And if they are still distributing it with GCC, that leads to all kinds of crazy interesting licensing questions. But I think the worst part is that there is now a barrier to entry, however low, to being a developer on a platform that is already a minority in market share. I can’t understand how Apple potentially believes that it is good and right to trade short term profits for long term growth in the number of potential developers. For the future of the Mac platform, I sure hope this isn’t their line of reasoning. So, let me tell you a little story. My first dabbling in programming came courtesy of QuickBASIC back in the MS-DOS and Windows 3.1 days. This was the late 80s or early 90s, so I would have been 10 or 11 at the time. I stumbled across the QBasic environment included with MS-DOS by accident and found Nibbles. And, after playing it, I discovered that I could change things by making changes to the strange text presented on the screen. I could change colors and speeds. But it would be a couple of years before I really understood what I was doing. When Windows 95 came out (and along with it, Visual Basic 4), I talked my parents into getting me a copy. I don’t remember how much it cost, but it was probably a lot because it was one of the few Christmas presents I got that year. But boy did I run with it. I’ve periodically felt guilty over that expense because I didn’t actually make anything really useful with it, but it was instrumental in furthering my education. Now I could do things on my computer far beyond what poor ol’ QBasic was capable of. So I wrote lots of silly little programs. I put together a “family newsletter” one year that was installed and ran as a piece of software. I was pretty proud of that. I even wrote some software for my high school as part of software development and AP Computer Science courses. Eventually, I would move on to other things. Other versions of Visual Basic, Java, C, a brief foray into LISP and Forth-based languages for programming MUDs, and eventually web programming. First in Perl, then in PHP. I even landed my first paying programming job while still in high school, writing applications for a local transit contractor. At first, these were Visual Basic applications.
But by the time I left (August of 2000), everything was going to the web and so were we. But I can trace everything - my entire career, and my consuming passion for software engineering - back to QBasic and Nibbles. A silly little game about a block snake, and a free development environment included with the operating system. Had I not stumbled on QBasic and Nibbles, there’s a chance I would never have become a developer. This is not about $4.99. I spend more on coffee in a week than that. My worry is about that 11-year-old kid out there somewhere who may never get the opportunity to stumble across Xcode or the sample applications in /Developer and realize the raw power they possess. This is an area where Apple, a company with billions in cash on hand, should be happy to show a loss. It would be to the benefit of their platform, both now and in the future. One of the great benefits of the Mac platform has been its low barriers to entry for developers. Sure, one could argue that the hardware is more expensive (and I could counter-argue that, for the quality of the equipment, you are getting a bargain), but the development tools have always been freely available online and included with the machine. You could dabble in programming to your heart’s content. Sure, if you wanted to put something in the app store(s), you had to pay for admission, but there was nothing stopping you from getting all the way to that point, or even distributing your creations on your own. But this new trend of charging for the development tools - even if it is a paltry sum - sends, to me, a worrying signal about the course Apple intends to tread. They’ve now moved the gate from the last step to the first step. It’s a course that Microsoft once tread. And Microsoft? They now give away a version of Visual Studio for free.
Read More
Linux

BASH Quickie: Backing Up MySQL Databases

In some ways, after years of doing programming and scripting, I’m now sort of rediscovering the power of the shell. Tonight, I was working on my server and remembered that I needed to start backing up my MySQL databases (which you do also … right?). So instead of writing a script to do that, with a little research, I was able to come up with a way to: dump each database to a separate SQL file with a timestamp, bzip the file, and keep 5 days’ worth of backups for each database, rotating the oldest backup off. Here’s what I came up with:

cd /backup/mysql; for i in $(mysql -BNe 'show databases' -u root -p<password>); do mysqldump -u root -p<password> $i | bzip2 > $i-`date +"%Y%m%d"`.sql.bz2; rm -rf $i-`date -d "-5 day" +"%Y%m%d"`.sql.bz2; done > /dev/null 2>&1

Shoved that in my crontab. Works great. Linux rocks.
Read More
Ramblings

The Revolution Will Be Virtualized

I sit here watching Egyptians partying in Tahrir Square on the Internet. Mostly because Al Jazeera is the only group that hasn’t just totally halfassed the coverage of what has unfolded a half a world away. However, I did flip on CNN to watch some coverage on there. “No dictator, no invader can hold an imprisoned population by force of arms forever. There is no greater power in the universe than the need for freedom. Against that power, governments and tyrants and armies cannot stand.” - G’Kar, Babylon 5. They interviewed several of the protesters and organizers. All of them young - even relative to me, and I ain’t exactly an old guy - and all of them taking the time to actually give thanks for making the revolution possible. What were they thanking? Facebook and Twitter. One guy even said he hoped he could meet Mark Zuckerberg and thank him personally. “People with a passion can change the world for the better.” - Steve Jobs. It occurs to me that these are the Digital Natives coming of age and taking power. To these people, the Internet is an integral part of their life, and they have no memory of a time before using the Internet to communicate. They think nothing of talking to people around the world. They’ve been exposed to worldwide ideas. Social and political borders mean nothing to them. Those are all old ideas. The ideas of their parents. We are just now beginning to grasp the social ramifications of a worldwide network that connects all people. The Internet is, for lack of a better analogy, like a virus that infects the world’s population. People can access the world’s repository of knowledge and talk with people around the world with minimal effort. They can organize with minimal effort. This communication infects them with ideas of freedom and a desire to communicate. Now, to be sure, the Internet didn’t get out there and protest. The Internet didn’t physically stand in Tahrir Square and chant protests against Mubarak. The Internet didn’t take gunshots for freedom. But the Internet and social media did provide the tools and the framework in which the revolution could be organized. People will always be the ones taking action. But the ability to communicate - quickly, efficiently, and massively, in a way that was unthinkable twenty years ago - is going to completely reshape the way the world works going forward. Iran was the warmup. Egypt and Tunisia are the warning shots to nations around the world: neglect your people at your own peril. Now, as for Egypt. The optimist in me hopes for a democratic republic. The pessimist in me fears a military dictatorship or, worse, an Islamic dictatorship. I guess we’ll know soon enough.
Read More
Apple

Automatically Setting Adium IM Status with AppleScript

I have more than 20 various IM accounts set up in Adium on my Macintosh. But during the day, the only one I really want to be active is the one I use for work. The remainder I want to leave logged in, but showing as away with a warning not to bother me unless it is important. But half the time I forget to set all those accounts as away, or I forget to set the work one as available, or some other issue arising out of a manual process interferes, and too often it doesn’t get done. Enter AppleScript. I whipped up a surprisingly easy AppleScript to do just this:

tell application "Adium"
    go away with message "Working. Please do not disturb unless necessary."
    go available first account
end tell

Because the work account is the first one, this makes it easy. It just sets all accounts as away and then sets the work one available. I shove this in my MarcoPolo ruleset to fire when I arrive at the office. The script to reverse the change when I leave is even easier. This is fired when I leave the office:

tell application "Adium" to go available
Read More

2010

Ramblings

FlightPrep

Those of you who follow me on Twitter might have noticed me railing against a company called FlightPrep. You may be wondering, what exactly is the big deal? The short of the story is, there were a bunch of websites out there dedicated to flight planning. Some of the best ones (SkyVector, Flyagogo, NACOmatic and, best of all, RunwayFinder) allowed you to plot a course overlaying a VFR chart, the same way you would in Google Maps. You could modify your route simply by dragging it about, and click airports along the route to get current weather reports. It was kinda like Google Maps for preflight intelligence. Well, along comes this company called FlightPrep, who decided they weren’t getting rich enough (just ignore the owner’s $500k boat). So they convinced the USPTO to give them a patent on, bluntly, drawing digital lines on a digitized chart. They filed for the patent in 2005 (after a number of the sites above were already online), but used legal sleight-of-hand to get it backdated to 2001. Eventually, after a number of rejections, they were able to find a friendly clerk and were awarded the patent. They then immediately lawyered up and started going after all of these free flight planning websites, many of which were simply hobbies of some pilots who also happened to know how to program. They requested that these sites “license the technology” (what a ludicrous thing to say, being that the sites pre-dated FlightPrep’s patent) or face lawsuits with damage claims of $149 per unique IP per month. So what happened? SkyVector settled and “licensed.” NACOmatic, Flyagogo and RunwayFinder all shut down under threat of lawsuit. They’ve also gone after FlightAware, Jeppesen and the AOPA with no success, so far. It’s pretty clear that, instead of innovating, they’re litigating. Rather than develop some radical new technology, they’re abusing the patent system in an attempt to corner the market. Bluntly, I’m pissed because they robbed me of a tool (RunwayFinder) that I loved and that was highly useful for a student pilot. But general aviation is a small community, and the backlash against FlightPrep has been a beautiful if small-scale example of what happens when you abuse your target market. Within the course of a week, they’ve become a pariah and the most hated company in general aviation. They had to close off their Facebook page because it was being overrun with people voicing their opinion, and their products are receiving highly negative reviews in all markets. But while this is all great, it doesn’t bring back RunwayFinder. Even though Dave from RunwayFinder has decided to fight back, he faces a long uphill climb to have this asinine patent thrown out. In the end, it’s just sad. As I said, GA is a small community where nobody is getting rich. We’re all supposed to be on the same team.
Read More
Apache

Automatically Provisioning Polycom Phones

The goals of this project were twofold: To completely eliminate the need for me to touch the phone to provision it. I want to be able to create a profile for it in the database, then simply plug the phone in and let it do the rest. And… To eliminate per-phone physical configuration files stored on the server. The configuration files should be generated on the fly when the phone requests them. So the flow of what happens is this: I create a profile for the phone in the database, then plug the phone in. The phone boots initially and receives the server from DHCP option 66. A script on the server hands out the correct provisioning path for that model of phone. It reboots with the new provisioning information. The phone boots with the new provisioning information and begins downloading the updated SIP application and BootROM. Reboots. The phone boots again and connects to Asterisk. At this rate, provisioning a phone for a new employee is simply me entering the new extension and MAC address into an admin screen, and giving them the phone. It’s pretty neat. **Note:** there are some areas where this is intentionally vague, as I’ve tried to avoid revealing too much about our private corporate administrative structure. If something here doesn’t make sense or you’re curious, post a comment. I’ll answer as best I can.

Creating the initial configs

I used the standard download of firmware and configs from Polycom to seed a base directory. This directory, on my server, is /www/asterisk/prov/polycom_ipXXX, where XXX is the phone model. Right now we deploy the IP-330, IP-331 and IP-4000. While right now the IP-330 and IP-331 can use the same firmware and configs, since the IP-330 has been discontinued they will probably diverge sometime in the not-too-distant future. With the base configs in place, this is where mod_rewrite comes into play. I added the following rewrite rules to the Apache configs:

RewriteEngine on
RewriteRule ^/000000000000\.cfg /index.php
RewriteRule /prov/[^/]+/([^/]+)-phone\.cfg /provision.php?mac=$1 [L]
RewriteRule /prov/polycom_[^/]+/[^/]+-directory\.xml /prov/polycom_directory.php
RewriteCond %{THE_REQUEST} ^PUT*
RewriteRule /prov/[^/]+/([^/]+)\.log /prov/polycom_log.php?file=$1

To understand what these do, you will need to take apart the anatomy of a Polycom boot request. It requests the following files in this order: whichever bootrom.ld image it’s using, [mac-address].cfg if it exists or 000000000000.cfg otherwise, the sip.ld image, [mac-address]-phone.cfg, [mac-address]-web.cfg, and [mac-address]-directory.xml. So, we’re going to rewrite some of these requests to our scripts instead.

Generating configs on the fly

We’re going to skip the first rewrite rule (we’ll talk about that one in a little bit since it has to do with plug-in auto provisioning). The one we’re concerned with is the next one, which rewrites [mac-address]-phone.cfg requests to our provisioning script. So each request to that file is actually rewritten to provision.php?mac=[mac-address]. Now, in the database, we’re keeping track of what kind of phone it is (an IP-330, IP-331 or IP-4000), so when a request hits the script, we look up in the database what kind of phone we’re dealing with based on the MAC address, and use the variables from the database to fill in a template file containing exactly what that phone needs to configure itself.
For example, the base template file for the IP-330 looks something like this:

<sip>
  <userinfo>
    <server
      <?php foreach($phone as $key => $p) { ?>
      voIpProt.server.<?php echo $key+1 ?>.address="<?php echo $p["host"] ?>"
      voIpProt.server.<?php echo $key+1 ?>.expires="3600"
      voIpProt.server.<?php echo $key+1 ?>.transport="UDPOnly"
      <?php } ?>
    />
    <reg
      <?php foreach($phone as $key => $p) { ?>
      reg.<?php echo $key+1 ?>.displayName="<?php echo $p["first_name"] ?> <?php echo $p["last_name"] ?>"
      reg.<?php echo $key+1 ?>.address="<?php echo $p["name"] ?>"
      reg.<?php echo $key+1 ?>.type="private"
      reg.<?php echo $key+1 ?>.auth.password="<?php echo $p["secret"] ?>"
      reg.<?php echo $key+1 ?>.auth.userId="<?php echo $p["name"] ?>"
      reg.<?php echo $key+1 ?>.label="<?php echo $p["first_name"] ?> <?php echo $p["last_name"] ?>"
      reg.<?php echo $key+1 ?>.server.1.register="1"
      reg.<?php echo $key+1 ?>.server.1.address="<?php echo $p["host"] ?>"
      reg.<?php echo $key+1 ?>.server.1.port="5060"
      reg.<?php echo $key+1 ?>.server.1.expires="3600"
      reg.<?php echo $key+1 ?>.server.1.transport="UDPOnly"
      <?php } ?>
    />
  </userinfo>
  <tcpIpApp>
    <sntp
      tcpIpApp.sntp.address="pool.ntp.org"
      tcpIpApp.sntp.gmtOffset="<?php echo $tz ?>"
    />
  </tcpIpApp>
</sip>

The script outputs this when the phone requests it. Voila. Magic configuration from the database. There’s a little bit more to it than this. A lot of the settings custom to the company and shared among the various phones are in a master dealnews.cfg file, and included with each phone (it was added to the 000000000000.cfg file). Now, on to the next rule.

Generating the company directory

Polycom phones support directories. There’s a way to get this to work with LDAP, but I haven’t tackled that yet. So, for now, we generate those dynamically as well when the phone requests any of its *-directory.xml files. This one’s pretty easy since 1) we don’t allow the endpoints to customize their directories (yet), and 2) every phone has the same directory. So all of those requests go to a script that outputs the XML structure for the directory:

<directory>
  <item_list>
    <?php if(!empty($extensions)) { foreach($extensions as $key => $ext) { ?>
    <item>
      <fn><?php echo $ext["first_name"]?></fn>
      <ln><?php echo $ext["last_name"]?></ln>
      <ct><?php echo $ext["mailbox"]?></ct>
    </item>
    <?php } } ?>
  </item_list>
</directory>

We do this for both the 000000000000-directory.xml and the [mac-address]-directory.xml file because one is requested at initial boot (the 000000000000-directory.xml file is intended to be a “seed” directory), whereas subsequent requests are for the MAC-address-specific file.

Getting the log files

Polycoms log, and occasionally the logs are useful for debugging purposes. The phones, by default, will try to upload these logs (using PUT requests if you’re provisioning via HTTP like we are). But having the phone fill up a directory full of logs is ungainly. Wouldn’t it be better to parse that into the database, where it can be easily queried? And because the log files have standardized names ([mac-address]-boot/app/flash.log), we know what phone they came from. Well, that’s what the last two rewrite lines do. We rewrite those PUT requests to a PHP script and parse the data off stdin, adding it to the database. A little warning about this: even at low settings, Polycom phones are chatty with their logs. You may want to have some kind of cleaning script to remove log entries over X days old.

Passing the initial config via DHCP

At this point, we have a working magic configuration.
Phones, once configured, fetch dynamically-generated configuration files that are guaranteed to be as up-to-date as possible. Their directories are generated out of the same database, and log files are added back to the same database. It all works well! … except that it still requires me to touch the phone. I’m still required to punch the provisioning directory into the keypad to get it going. That sucks. But there’s a way around that too! By default, Polycom phones out of the box look for a provisioning server on DHCP option 66. If they don’t find this, they will proceed to boot the default profile that ships with the phone. It’s worth noting that, if you don’t pass it in the form of a fully-qualified URL, it will default to TFTP. But you can pass any format you can add to the phone.

if substring(hardware, 1, 3) = 00:04:f2 {
    option tftp-server-name "http://server.com";
}

In this case, what we’ve done is look for a MAC address in Polycom’s space (00:04:f2) and pass it option 66 with our boot server. But we’re passing the same thing no matter what kind of phone it is! How can we tell them apart, especially since, at this point, we don’t know the MAC address? The first rewrite rule handles part of this for us. When the phone receives the server from option 66 and requests 000000000000.cfg from the root directory, we instead forward it on to our index.php file, which handles the initial configuration. Our script looks at the HTTP_USER_AGENT, which tells us what kind of phone we’re dealing with (they’ll contain strings such as “SPIP_330”, “SPIP_331” or “SSIP_4000”). Using that, we selectively give it an initial configuration that tells it the RIGHT place to look.

<?php
ob_start();

if(stristr($_SERVER['HTTP_USER_AGENT'], "SPIP_330")) {
    include "devices/polycom_ip330_initial.php";
}

if(stristr($_SERVER['HTTP_USER_AGENT'], "SPIP_331")) {
    include "devices/polycom_ip331_initial.php";
}

if(stristr($_SERVER['HTTP_USER_AGENT'], "SSIP_4000")) {
    include "devices/polycom_ip4000_initial.php";
}

$contents = ob_get_contents();
ob_end_clean();
echo $contents;
?>

These files all contain a variation of my previous auto-provisioning config, which tells it the proper directory to look in for phone-specific configuration. Now, all you do is plug the phone in, and everything else just happens. A phone admin’s dream.

Keeping things up to date

By default, the phones won’t check to see if there’s a new config or updated firmware until you tell them to. But this also means that some things, especially directory changes, won’t get picked up with any regularity. A quick change to the configs makes it possible to schedule the phones to look for changes at a certain time:

<provisioning
  prov.polling.enabled="1"
  prov.polling.mode="abs"
  prov.polling.period="86400"
  prov.polling.time="01:00"
/>

This causes the phones to look for new configs at 1 AM each morning and do whatever they have to with them.

Conclusions

The reason all this is possible is because Polycom’s files are 1) easily manipulable XML, as opposed to the binary configurations used by other manufacturers, and 2) distributed, so that you only need to actually send what you need set, and the phone can get the rest from the defaults. In practice this all works very well, and it cut the time it used to take me to configure a phone from 5-10 minutes to about 30 seconds. Basically, as long as it takes me to get the phone off the shelf and punch the MAC address into the admin GUI I wrote. I don’t even need to take it out of the box!
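The log-handling script itself isn’t shown above, so here’s a rough sketch of what a polycom_log.php along these lines could look like. To be clear, the phone_logs table, its columns and the PDO credentials are all my inventions for illustration; the only parts taken from the post are the rewrite rule’s file parameter and the fact that the phone PUTs the log as the request body:

<?php
// The rewrite rule passes the log name as ?file=[mac-address]-boot/app/flash.
$file = isset($_GET['file']) ? $_GET['file'] : '';

// Split out the MAC address and the log type from the file name.
if (!preg_match('/^([0-9a-f]{12})-(boot|app|flash)$/i', $file, $m)) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}

// The phone PUTs the log contents as the raw request body.
$body = file_get_contents('php://input');

// Hypothetical table: phone_logs(mac, type, contents, created).
$pdo = new PDO('mysql:host=localhost;dbname=phones', 'user', 'password');
$stmt = $pdo->prepare(
    "insert into phone_logs (mac, type, contents, created) values (?, ?, ?, now())"
);
$stmt->execute(array(strtolower($m[1]), strtolower($m[2]), $body));
?>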
Read More
Apache

Google Chrome, Mac OS X and Self-Signed SSL Certificates

I’ve been using Google Chrome as my primary browser for the last few months. Sorry, Firefox, but with all the stuff I need for work installed, you’re so slow as to be unusable. Up to and including having to force-quit at the end of the day. Chrome starts and stops quickly. But that’s not the purpose of this entry. The purpose is how to live with self-signed SSL certificates and Google Chrome. Let’s say you have a server with a self-signed SSL certificate. Every time you hit a page, you get a nasty error message. You ignore it once and it’s fine for that browsing session. But when you restart, it’s back. Unlike Firefox, there’s no easy way to say “yes, I know what I’m doing, ignore this.” This is an oversight I wish Chromium would correct, but until they do, we have to hack our way around it. Caveat: these instructions are written for Mac OS X. PC instructions will be slightly different, as PCs don’t have a keychain, and Google Chrome (unlike Firefox) uses the system keychain. So here’s how to get Google Chrome to play nicely with your self-signed SSL certificate: On your web server, copy the crt file (in my case, server.crt) over to your Macintosh. I scp'd it to my Desktop for ease of work. **These directions have been updated. Thanks to Josh below for pointing out a slightly easier way.** In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says “Certificate Information.” Click and drag the image to your desktop. It looks like a little certificate. Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it. Be sure you add the certificate to the System keychain, not the login keychain. Click “Always Trust,” even though this doesn’t seem to do anything. After it has been added, double-click it. You may have to authenticate again. Expand the “Trust” section. Set “When using this certificate” to “Always Trust.” That’s it! Close Keychain Access and restart Chrome, and your self-signed certificate should now be recognized by the browser. This is one thing I hope Google/Chromium fixes soon, as it should not be this difficult. Self-signed SSL certificates are used **a lot** in the business world, and there should be an easier way for someone who knows what they are doing to ignore this error than copying certificates around and manually adding them to the system keychain.
Read More
Asterisk

Auto Re-Provisioning Polycom Phones

At dealnews, as I’ve written before, we run Asterisk as our telephone system. I find it to be a pretty good solution to our telecom needs: we have multiple offices and several home-based users. And, for the most part, for hard telephones, we use Polycoms. We run mostly IP-330s, with a couple of IP-4000s and a few new IP-331s. We also have softphones, a couple of PAP2s and a couple of old Grandstreams from our original Asterisk deployment in 2007 that I’m desperately trying to get out of circulation. But it’s mostly Polycoms. Recently, I changed how we were doing provisioning. I’ll write a more in-depth post about this later, but the short of it is that since Polycom phones use XML for their configuration information, we now generate the configs dynamically instead of creating a static configuration file per phone. It’s what I should have done back in 2007 when we bought our first round of Polycoms. But this presented me with a problem: how do I re-provision the older phones - some of which I don’t have easy physical access to (at least not access that doesn’t involve an airplane ride) - to use the new configuration system? In doing some research, I discovered that Polycom allows you to set, via certain commands, the provisioning server from within a config. With this information, I crafted a custom re-provisioning config that looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deviceSettings>
  <device
    device.set="1"
    device.dhcp.bootSrvUseOpt.set="1"
    device.dhcp.bootSrvUseOpt="2"
    device.net.cdpEnabled.set="1"
    device.net.cdpEnabled="0"
    device.prov.serverType.set="1"
    device.prov.serverType="2"
    device.prov.serverName.set="1"
    device.prov.serverName="server"/>
</deviceSettings>

And included it at the top of the 000000000000.cfg file (one of the default files downloaded by each Polycom phone):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<APPLICATION APP_FILE_PATH="sip.ld" CONFIG_FILES="update.cfg, phone1.cfg, sip.cfg" MISC_FILES="" LOG_FILE_DIRECTORY="" OVERRIDES_DIRECTORY="" CONTACTS_DIRECTORY=""/>

Then, using Asterisk, I issue the check-config command:

asterisk*CLI> sip notify polycom-check-cfg peer

The phone should reboot, pick up its new config, then reboot again with the proper new provisioning information from the new provisioning provider. Next post, I’ll show you how to use PHP and mod_rewrite to eliminate the need for per-phone config files.
Read More
Apple

AppleTV and Me

My home entertainment center is probably second only to my computer(s) in “things I interact with every day.” Barely a day goes by when I don’t spend a little relaxing time watching TV or movies. I have a Hitachi 1080p 42-inch television, an Onkyo receiver attached to a 5.1 surround sound system (Polk Audio subwoofer and Energy speakers), a DVD player (that rarely gets any use anymore), a VCR (that gets even less use) and a PlayStation 3. But the star, and my single favorite piece of equipment in my living room, is my AppleTV. Yup. My AppleTV.

You might be asking why I profess love for a device that many people consider to be a failure. After all, the way some people, including some of my coworkers, talk about this device, you’d think it was Battlefield Earth bad. The kind of bad that you ask for your money back after using. The kind of bad that makes you regret waking up that day, and makes you want to drown your sorrows with a pitcher of Natural Ice. And yet I, as an AppleTV owner, am thrilled with it. I love it simply because of its typical Apple simplicity: it’s all the best parts of a HTPC without all the bull** that comes with having a HTPC. Powerful enough to be usable, and yet simple enough that my wife - whom I love, but is most definitely not a computer person - can figure it out. It was simple enough to set up that all I had to do was plug it into my TV and get it on the network. And, it integrates incredibly well with the rest of the Apple products in the house.

And now, Apple has come out with a new AppleTV, and I could not possibly be more thrilled, because it addresses almost all the issues I had with my current AppleTV, and with an upgrade price of $99, it’s a no-brainer. I might buy one for every TV in the house. Let’s go through some of the differences:

- No onboard storage. I have two AppleTVs. One in the living room - a 160gb model - and one in the bedroom, a 40gb model. You know how much storage space I’m using on them? Zero. Nothing. I stream everything off my iMac upstairs. Sync’ing is slow, and I have way more content than could ever fit on the 160gb model. Moreover, streaming from iTunes shares works seamlessly, so there’s really no reason to use the local storage at all. Apple did away with it.
- No composite. Non-issue. I use all HDMI. The new AppleTV has only three plugs on the back: power, HDMI, and ethernet. Perfect.
- Movies from the iTunes store are rental-only. I don’t quite agree with this, but my objection isn’t very strong. I never purchased a movie from the iTunes store, but I did rent on more than one occasion, so I don’t foresee this being an issue, especially because of …
- Netflix support. That’s right. I can stream all the streaming content on Netflix straight to my AppleTV. This in and of itself is enough reason for me to want to upgrade.

In other words, it’s as if Apple fixed the device to exactly reflect how I use my current one. Since Steve Jobs never called me, I can only conclude that there were a lot more people out there using AppleTVs in the way I use mine. Frankly, at this point, the only things that it’s missing that I really wanted were 1080p and Hulu.
Read More
Apple

Ping and Social Overload

Two days ago Apple announced Ping: a social network geared towards music sharing. And a bunch of iPods too. Personally, I was more excited by the new AppleTV (I have two of them and absolutely love them), but more on that later. This is about Ping.

My thoughts on Ping: Apple’s first real attempt at social networking reminds me of Google’s countless attempts to get into the social networking space: they’re like that guy that shows up to the party really late - I mean beyond fashionably late - when the party is already over and everyone else is already drunk and thinking about stumbling across the street to IHOP or Taco Bell. They say they were at the library studying and now they want to go out and drink, but the keg has floated, the bars and liquor stores are already closed and all you want to do is eat a burrito supreme and find some sofa to pass out on.

Ping is a good first start, but it has some problems:

- What is the target here? Am I supposed to follow people or artists or both or what? And what are they supposed to do? All this feels like is Twitter or Facebook + iTunes. The people I’m following can share messages and pictures? Yep. Twitter in iTunes. I can like and share and post comments? Yep. Facebook in iTunes.
- Why not allow independent artists into the fold? Some of my favorite artists (such as Matthew Ebel - check him out if you love piano rock) are independent. Right now there are like 10 artists you can follow, and that Lady Gaga is one of them makes me want to break something. The only ones on there I’m remotely interested in following are Dave Matthews Band and maybe U2.
- I can’t access it in any way other than in iTunes. No web access. While this means I can fire it up myself on my computer and laptop, and (currently) on my iPhone via the iTunes application, I can’t check Ping at a friend’s house. I can’t go to the Apple store and check Ping. Everything has to go through iTunes, and this absolutely cripples it. Think that’s overkill? Go to the Apple store and watch for 15 minutes how many people walk in and use one of the computers to check Facebook.
- I can only “like” and “share” content I purchased from iTunes. I have purchased 58 songs from iTunes over the years, out of 3,621 songs in my library. Less than 2% of my library is available.

If Apple fixes these (and other, more minor) problems, Ping could be really cool. The problem is that these aren’t code fixes. They’re not something they can test and roll out a change for. These are conceptual problems relating to what their idea of Ping is versus what the rest of the world is going to use it as. The question is, will they be Google and throw this out there, not maintain it, and mercifully kill it a year later (a la Google Wave and the impending death of Google Buzz), or will they adapt and change it to better suit the needs of the public? Because that’s the thing about social networking: you have to embrace the users’ thoughts, opinions, and ideas. It’s a lesson digg just learned the hard way, and a lesson that, frankly, given Apple’s reputation for wanting to control everything, I don’t see them embracing.

As a side note, I will, however, salute Apple for not giving in to Facebook, if the rumor is true. Facebook plays fast and loose with people’s information, and I really don’t like how it seems to have become the de facto standard for social network usage (and thus the reason you can comment with your Facebook login). That, and Zuckerberg. I hate that guy. Still, Ping is yet another player in this social networking space.
A space that is becoming increasingly full …

Social Overload

I’m already Facebooked, Myspaced and Twittered. I’m LiveJournaled, Wordpressed, and Youtube’d. I’m Flickr’d, LinkedIn’d, Vimeo’d, Last.fm’d and Gowalla’d. I’m on any number of dozens of message boards and mailing lists that predate “Web 2.0” and the social networking “revolution,” and I follow nearly 100 various blogs and other feeds via RSS. They’re on my desktop, on my laptop, on my tablet and in my phone. And now, apparently, I’m Ping’d as well. Le sigh.

Now, to be fair, I don’t check all these sites. I last logged into Myspace about 9 months ago. I last used Gowalla about a year ago. I usually only look at Youtube, Flickr or Vimeo when I need something, and I haven’t updated a LiveJournal in about 3 years. But at what point does all this interaction - this social networking - become social overload? Are any of these services adding value to my life? And at what point does a social network - Ping, in this case - simply become yet another thing I have to think about and check? Or will it become yet another service I sign up for, try for a while and ignore?
Read More
PHP

Diffing, flattening and expanding multidimensional arrays in PHP

PHP has built-in functions that can compute the difference between two arrays. The comments sections for those functions are filled with people trying to figure out the best way to do the same thing with multidimensional arrays, and almost all of the proposals are recursive diffing functions that try to walk the tree and do a diff at each level. The problems with this approach are that 1) it’s unreliable, as these functions usually don’t account for all data types at each level, and 2) it’s slow, due to multiple calls to array_diff at each level of the tree. A better approach, I think, is to flatten a multidimensional array into a single dimension, make a single call to array_diff, then (if needed) expand it back out if you really need the resulting diff to be multidimensional. Let’s look at some code. The following recursive function flattens a multidimensional array into a single dimension.

    <?php
    function flatten($arr, $base = "", $divider_char = "/") {
        $ret = array();
        if (is_array($arr)) {
            foreach ($arr as $k => $v) {
                if (is_array($v)) {
                    $tmp_array = flatten($v, $base.$k.$divider_char, $divider_char);
                    $ret = array_merge($ret, $tmp_array);
                } else {
                    $ret[$base.$k] = $v;
                }
            }
        }
        return $ret;
    }
    ?>

The following function (based on this function found here) inflates the array back up after it’s been flattened.

    <?php
    function inflate($arr, $divider_char = "/") {
        if (!is_array($arr)) {
            return false;
        }
        $split = '/' . preg_quote($divider_char, '/') . '/';
        $ret = array();
        foreach ($arr as $key => $val) {
            $parts = preg_split($split, $key, -1, PREG_SPLIT_NO_EMPTY);
            $leafpart = array_pop($parts);
            $parent = &$ret;
            foreach ($parts as $part) {
                if (!isset($parent[$part])) {
                    $parent[$part] = array();
                } elseif (!is_array($parent[$part])) {
                    $parent[$part] = array();
                }
                $parent = &$parent[$part];
            }
            if (empty($parent[$leafpart])) {
                $parent[$leafpart] = $val;
            }
        }
        return $ret;
    }
    ?>

Now, with the arrays in flat form, it’s easy to use the built-in functions to diff:

    <?php
    $arr1_flat = flatten($arr1);
    $arr2_flat = flatten($arr2);
    $ret = array_diff_assoc($arr1_flat, $arr2_flat);
    $diff = inflate($ret);
    ?>
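To make the flow concrete, here’s a quick illustration using the functions above on some made-up nested arrays (hypothetical data, obviously):

    <?php
    // Two nested arrays that differ in exactly one leaf value.
    $arr1 = array("db" => array("host" => "localhost", "port" => 3306),
                  "cache" => array("ttl" => 300));
    $arr2 = array("db" => array("host" => "localhost", "port" => 3307),
                  "cache" => array("ttl" => 300));

    // Flatten both, diff once, then expand the result back out.
    $diff = inflate(array_diff_assoc(flatten($arr1), flatten($arr2)));

    print_r($diff);
    // Shows: [db] => Array ( [port] => 3306 )
    ?>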
Read More
Apple

Hard Drive Upgrade

So Sunday night, my iMac died. It had been having strange problems for a few months leading up to it, mostly random freezes. I always notice when they happen because I leave Mail.app running all the time to filter my messages, so when my iPhone would start going crazy, I’d know it had crashed again. It actually happened while I was out of town in Atlanta earlier this year, so all weekend my phone was constantly buzzing. Well, Sunday while we were working in the yard, I had set up a DVD rip job to run - my current project is digitizing all my DVDs for the AppleTV - and while we were working it randomly reset itself and got all sluggish. That night, I tried to boot off the Snow Leopard DVD to run Disk Utility, and it couldn’t even mount the drive and refused to repair it. It couldn’t reboot either. I tried DiskWarrior, and that fixed things up enough to boot it, but it was REALLY SLOW (it took 10 minutes to boot). It was good enough to get the last few remaining files that hadn’t been backed up yet onto the external drive. Then, I tried reinstalling, and it never came back. My conclusion, since I could still boot fine from the DVD, was a dead hard drive. The original hard drive was 500GB, but I figured I’d upgrade while I was at it. I ordered a new 1TB hard drive via a deal at work and had it overnighted. It arrived yesterday. And, after some interesting surgery (who says you can’t work on Macs!), I got it installed, formatted, and Snow Leopard reinstalled. You know, I remember the first computer I owned that crossed the 1GB barrier, back in late 1999. I guess I’ll have to remember this one, too.
Read More
Apple

Scripting iTerm with AppleScript

Every day, when I get to work, there are a number of tasks I do. Among the first things I do is connect to a number of servers via SSH. These servers - our development testing, staging, and code rolling servers - are part of the development infrastructure at dealnews. So every morning, I launch iTerm, make three sessions and log into the various servers. Over time, I’ve written some helper scripts to make this faster. My “go” script contains the SSH commands (using keys) to log into these machines so that all I have to do is type “go rpeck” to log into my development machine. Still, this morning, the lunacy of opening iTerm and executing three commands every morning, every day without fail, struck me. Why not script this so that, when my laptop is plugged into the network at work, it automatically launches iTerm and logs me into the relevant servers? Fortunately, iTerm exposes a pretty complete set of AppleScript commands, so with a little work, I was able to come up with this:

    tell application "System Events"
        set appWasRunning to exists (processes where name is "iTerm")
        tell application "iTerm"
            activate
            if not appWasRunning then
                terminate the first session of the first terminal
            end if
            set myterm to (make new terminal)
            tell myterm
                set dev_session to (make new session at the end of sessions)
                tell dev_session
                    exec command "/Volumes/iDisk/bin/go rpeck"
                end tell
                set staging_session to (make new session at the end of sessions)
                tell staging_session
                    exec command "/Volumes/iDisk/bin/go staging2"
                end tell
                set nfs_session to (make new session at the end of sessions)
                tell nfs_session
                    exec command "/Volumes/iDisk/bin/go nfs"
                end tell
                select dev_session
            end tell
        end tell
    end tell

What this little script does is, when launched, check to see if an instance of iTerm is already running. If it is, it just creates a new window; otherwise it creates the first window. Then it connects to the relevant servers using my “go” script (which is synchronized across all my Macs by MobileMe). Then, with it saved, I wrap it in a shell script:

    #!/bin/bash
    /usr/bin/osascript /Users/peckrob/Scripts/launch-iterm.scpt

And launch it with MarcoPolo using my “Work” rule that is executed when my computer arrives at work. Works great!
Read More
DD-WRT

DD-WRT Hacks, Part 2 - Setting up an OpenVPN Server

In my previous entry, I wrote about how awesome DD-WRT is, and how it had replaced a number of network devices, allowing me to reduce the number of machines at home I had to administer. I finished the article by talking about how I’d set up a VPN tunnel to the office so multiple machines - namely, my Macbook Pro and my iMac - could access company resources at the same time. But at the end, I mentioned that PPTP was not what I was using to connect myself back to my home network when I’m on the road. But why? Two words: broadcast packets.

PPTP, by default, does not support the relaying of broadcast packets across the VPN link. For Mac users, this means Bonjour/Rendezvous based services - such as easily shared computers on a network - are not accessible, as they rely on network broadcasts to advertise their services. PPTP can support broadcast packets with the help of a program called bcrelay. This program is actually installed on DD-WRT routers, but it does not work, even though the DD-WRT web GUI claims it can support relaying broadcast packets. To verify, you can drop to the shell and try yourself:

    root@Eywa:~# bcrelay
    bcrelay: pptpd was compiled without support for bcrelay, exiting.
    run configure --with-bcrelay, make, and install.

The version of pptpd that ships with v24sp2 of DD-WRT lacks bcrelay support. It’s important to note that this doesn’t mean the services are completely inaccessible. You can still reach them if you know the IP addresses. Fine for people with an understanding of networking, but not good for people like my wife, and definitely not the “Mac way.” So, what options are left, if not PPTP?

Enter OpenVPN

OpenVPN is a massively flexible (and therefore massively difficult to configure) open source VPN solution. DD-WRT ships with an OpenVPN server available with support for broadcast packets, so that is what I decided to use. A couple of notes before you begin. There are some tradeoffs to using OpenVPN. Perhaps the biggest is that it’s not natively supported on any operating system (unlike PPTP). That means on Windows or Mac, you’ll need a third-party client. And it’s not compatible at all with iPhones, iPods or iPads (unless they’re jailbroken). It is also much more difficult to configure than the relatively easy and reasonably well documented PPTP server setup. It was a worthwhile tradeoff for me, but it may not be for you. So, before you begin, you’ll need the following:

- You have already configured your router using DD-WRT and have the most recent release (as of this writing, v24-sp2), VPN version, installed. The version number should be in the upper right corner of the web admin. If it says “std” or “vpn,” you’re in good shape. If it says “micro,” you probably don’t have the necessary tools.
- You possess some basic understanding of networking, and have the necessary settings to complete a VPN connection. If you’ve gotten as far as flashing with third-party firmware, you probably do.
- You understand that there is the possibility, albeit remote, that you could brick your router. I am not responsible for that, which is why I suggest you purchase an additional router to get all this set up on first before sacrificing your primary router.
- You’re not scared of the shell.
- You must sacrifice a goat to the networking Gods.

For reference, my network uses 192.168.1.x for addresses. This is incredibly common for LANs, which can cause problems when connecting from other networks, so you may want to change your addresses to something less common. Not that big a deal for me, though.
I also have mine set up in bridged, as opposed to routed, mode. I think this is smarter (and easier), but if you’re curious, the difference is explained here. The first thing you need to do is install OpenVPN on your client machine. Even if you intend to use something different, you still need to install it so that you can generate all the certificates you’ll need. On a Mac, I find the best way to do this is with MacPorts.

    toruk:~ peckrob$ sudo port install openvpn2

It’ll crank for a while compiling and installing what it needs, so go get a snack. Then, once you have it installed, head over to /opt/local/share/doc/openvpn2/easy-rsa/2.0/ and run the following commands:

    source ./vars
    ./clean-all
    ./build-ca
    ./build-key-server server
    ./build-key client1
    ./build-dh

At each stage, it will ask you questions. It is important to provide consistent answers or you will get errors. Importantly, don’t add passwords to your certificates. Once you are finished, you will find all your keys in the keys/ directory.

Now, the fun part. Head over to the keys directory (/opt/local/share/doc/openvpn2/easy-rsa/2.0/keys). There should be a bunch of files in there. In a browser, open up your router’s web admin, and go to Services -> VPN. Under OpenVPN Daemon:

- Next to “Start OpenVPN Daemon,” select “Enable.”
- “Start Type,” set to “WAN Up.”
- “CA Cert”: go back to your shell and “cat ca.crt”. Paste everything between the “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----”, including those two lines. You must include the BEGIN and END lines for this to work on each one! (This was a major trip-up for me.)
- “Public Client Cert”: go back to the shell and “cat server.crt”. Paste everything between the “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” as above.
- “Private Client Key”: go back to the shell and “cat server.key”. You need everything between “-----BEGIN RSA PRIVATE KEY-----” and “-----END RSA PRIVATE KEY-----” as above.
- “DH PEM”: go back to the shell and “cat dh1024.pem”. You need everything between “-----BEGIN DH PARAMETERS-----” and “-----END DH PARAMETERS-----” as above.

The important note above is to include the lines containing “-----whatever-----”. Not doing this cost me about 3 hours of messing around until I figured it out. With that all complete, it’s now time for your server config. Here is my server config:

    mode server
    proto tcp
    port 1194
    dev tap0
    # Gateway (VPN Server) / Subnet Mask / Start IP / End IP
    server-bridge 192.168.1.1 255.255.255.0 192.168.1.201 192.168.1.210
    keepalive 10 120
    daemon
    verb 6
    client-to-client
    tls-server
    dh /tmp/openvpn/dh.pem
    ca /tmp/openvpn/ca.crt
    cert /tmp/openvpn/cert.pem
    key /tmp/openvpn/key.pem

The important things here are “dev tap0”, which creates an ethernet bridge and not a tunnel (as “dev tun0” would do), and the “server-bridge” line, whose arguments are documented in the comment above it. The start IP and end IP specify the range that VPN clients will receive addresses from. With all this complete, press “Save” and “Apply Settings” at the bottom of the screen. Wait patiently. Then, in the web admin, go to Administration -> Commands. If you already have a Startup script, edit it; otherwise, add this to the commands window:

    openvpn --mktun --dev tap0
    brctl addif br0 tap0
    ifconfig tap0 0.0.0.0 promisc up

Press “Save Startup.” Then, if you already have rules in “Firewall,” edit those; otherwise add:

    iptables -I INPUT 2 -p tcp --dport 1194 -j ACCEPT

Press “Save Firewall.” Now, reboot your router. When it comes back up, you should have a running OpenVPN server.
To check, go to Administration -> Commands, and type this into the command window:

    ps | grep openvpn

If you see something that looks like:

    11456 root 2720 S openvpn --config /tmp/openvpn/openvpn.conf --route-up
    17606 root 932 S grep openvpn

then it worked. Congratulations, you have a working OpenVPN instance. But how to connect to it? If you use a Mac, you really have two choices: Tunnelblick or Viscosity. Tunnelblick is a little on the ugly side and difficult to configure, but is free and open source. Viscosity is reasonably pretty to look at and easier to configure, but is a commercial product. I chose Viscosity, so that’s what I’m demonstrating here.

Once you have Viscosity downloaded and installed, go to Preferences and Connections, and add a connection. Enter a name and server address. Set the protocol to TCP and the device to tap. Now, before you continue, go back to your shell. Go back to the /opt/local/share/doc/openvpn2/easy-rsa/2.0/keys directory, and copy those keys someplace in your home (~) folder that you’ll be able to access. Back in Viscosity, go to the “Certificates” tab. You should see three lines labeled “CA,” “Cert,” and “Key.” For “CA,” select the “ca.crt” file you just moved. For “Cert,” select “client1.crt”. And, for “Key,” select “client1.key”. Under the “Options” tab, disable LZO compression. For some reason it was causing a problem for me, so I just disabled it. Click “Save.” If all is right in the Universe and the goat you sacrificed to the Gods (you did do the goat sacrifice step, right?) was pleasing, you should now be able to connect back to your home network. Broadcast packets will work, and everything will be wonderful.
Read More
DD-WRT

DD-WRT Hacks, Part 1 - Setting up a PPTP VPN Endpoint

To celebrate the re-launch of my “blog,” I’m going to do a multi-part entry about DD-WRT. But, first, a little history.

For the first time in 10 years, I have no servers running in my house. At one point, I had three servers running in here doing various things. Then, I moved my public server offsite (it’s in the rack at the office now). That left two more Gentoo boxes running here in the house. Late last year I picked up a 1TB external hard drive, which I attached to my iMac, and deactivated the file server. I will probably eventually replace this with a Drobo FS, but for now it’s fine. That just left a single Gentoo box that was running Asterisk and various network services. But I finally convinced my wife to let me drop the goofy VoIP line that I was paying $30 for and just add more minutes to her cellphone. With Asterisk out of the picture, the only thing left running on that box was network services.

Well, a few weeks ago I ordered a TP-Link TL-WR1043ND router, intending to use it as a testbed for DD-WRT. My experiments worked so well that I pulled my old router out and replaced it with the DD-WRT one. The faster processor also afforded a nice speed bump of about 7 Mb/s. With it handling all the services, I pulled out the final server and deactivated it. And my office is blissfully quiet now. DD-WRT is now handling all the minor network services (DHCP, NTP, etc). But what is it about DD-WRT that makes it so awesome - awesome enough to rip out some of my network infrastructure to make way for it? A few things that I will cover in this post.

1. DHCP static address assignments

Believe it or not, the built-in firmware of the WRT-54G did not give you the ability to define a static address to be assigned by DHCP based on MAC address. This seems like a glaring oversight to me, and it was the reason I ran my own DHCP server rather than use the built-in one. In DD-WRT (v24-sp2) you can go to the Services tab and set as many as you’d like. In my case, these are a couple of devices (like printers) that are addressed by IP address by the various machines, as well as my laptop and iMac. So that’s one nice thing, but it’s not nearly as cool as …

2. VPN Support

The standard and VPN versions of DD-WRT support both PPTP and OpenVPN varieties of VPN … and I’m actually using both at the same time. My router is both a VPN server and a VPN client. How? Why? Well, as to why: at dealnews, we run a PPTP-based VPN to allow us to work at home as needed. Once connected, we have access to our testing servers and all our development services. It’s like being directly connected to the work network, but I’m sitting at my iMac at home in my pajamas. I had been connecting directly from my Macs to the VPN for some time but, sitting at home the other day, I reflected on how silly it was that I was connecting two machines to the VPN, and only when I needed them, rather than using DD-WRT to keep a single tunnel up all the time that any computer on the home network could use if needed.

Setting up a PPTP VPN Endpoint using DD-WRT

So how did I set it up? Trial and error, as, frankly, the DD-WRT documentation is a bit lacking. So if you find yourself in my position of wanting a tunnel to your workplace VPN, hopefully this documentation will help you. I’m making a few assumptions before we begin:
- You have already configured your router using DD-WRT and have the most recent release (as of this writing, v24-sp2), VPN version, installed. The version number should be in the upper right corner of the web admin. If it says “std” or “vpn,” you’re in good shape. If it says “micro,” you probably don’t have the necessary tools.
- You possess some basic understanding of networking, and have the necessary settings to complete a VPN connection. If you’ve gotten as far as flashing with third-party firmware, you probably do.
- You understand that there is the possibility, albeit remote, that you could brick your router. I am not responsible for that, which is why I suggest you purchase an additional router to get all this set up on first before sacrificing your primary router.

With that out of the way, let’s begin! Log into your router’s DD-WRT web admin, and go to the Services -> VPN tab. Under PPTPD Client:

- Click the radio button next to Enable.
- In the “Server IP or DNS Name” box, enter your VPN server.
- In the “Remote Subnet” box, enter the network address of the remote network. In my case, this was 10.1.2.0.
- In the “Remote Subnet Mask” box, enter the remote subnet mask. In my case, this was 255.255.255.0.
- In the “MPPE Encryption” box, I have “mppe required,no40,no56,stateless”. This was required to get mine to work, but may not be necessary for you. Try first without it, then try with it if it won’t work.
- Leave the MTU and MRU values alone unless you know what you’re doing.
- Enable NAT.
- Username and password are self-explanatory.

With that done, press “Save” and “Apply Settings” at the bottom of the page. With any luck, you should now have a VPN tunnel up to your remote host. To test it, go to Administration -> Commands, and in the command box, enter the following:

    ping -c 1 <some remote address on VPN>

If you get a response back that looks like:

    PING <remote service IP> (<remote service IP>): 56 data bytes
    64 bytes from <remote service IP>: seq=0 ttl=64 time=281.288 ms

    --- <remote service IP> ping statistics ---
    1 packets transmitted, 1 packets received, 0% packet loss
    round-trip min/avg/max = 281.288/281.288/281.288 ms

then it’s up and working. Now, try from your computer …

Probably didn’t work, did it? This is because your router’s firewall doesn’t yet know about the remote network or how to route packets to it appropriately. For some reason, the current version of DD-WRT does not add the appropriate configuration to the firewall automatically when the PPTP tunnel is established. So, we have to do it manually. Go to Administration -> Commands, and enter the following:

    iptables -I OUTPUT 1 --source 0.0.0.0/0.0.0.0 --destination <remote network address>/16 --jump ACCEPT --out-interface ppp0
    iptables -I INPUT 1 --source <remote network address>/16 --destination 0.0.0.0/0.0.0.0 --jump ACCEPT --in-interface ppp0
    iptables -I FORWARD 1 --source 0.0.0.0/0.0.0.0 --destination <remote network address>/16 --jump ACCEPT --out-interface ppp0
    iptables -I FORWARD 1 --source <remote network address>/16 --destination 0.0.0.0/0.0.0.0 --jump ACCEPT
    iptables --table nat --append POSTROUTING --out-interface ppp0 --jump MASQUERADE
    iptables --append FORWARD --protocol tcp --tcp-flags SYN,RST SYN --jump TCPMSS --clamp-mss-to-pmtu

At the bottom, press “Run Commands” and wait. It shouldn’t take long, and it should produce no output. Then, enter the same commands again, and this time press “Save Firewall” at the bottom. Give your router a few seconds to restart the appropriate services, then try again from your computer. Your machine, and all machines on your network, should now be able to access the VPN.
In this configuration, only traffic matching the remote network will pass over the VPN - the rest of your traffic will be routed to the Internet in normal fashion. Now, in my next entry, I’ll tell you why I’m not using PPTP to connect myself back to my home network when I’m on the road.
Read More
News

Welcome!

Welcome to the new home for the Code Lemur blog … robpeck.com! I’ve sat on this domain for six years - I don’t know why it took me so long to port my blog from wordpress.com over to here. Nonetheless, it is done now. And hopefully I’ll find time to update it more often with musings about my life and my adventures writing code in the dot-com world.
Read More
Conferences

MySQL Conference, Santa Clara, CA

I’ll be attending the MySQL Conference in Santa Clara, California this year. This will actually be my first time attending this conference, so I’m looking forward to it. Also, my coworker Brian Moon will be speaking at the conference on “What is memcached and What Does It Do,” so pop in and see him as well!
Read More
Ramblings

Why The Internet Will Fail

Newsweek, in 1995, published an article by Clifford Stoll titled “Hype alert: Why cyberspace isn’t, and will never be, nirvana.” Well, now it’s 15 years later. A relative blink of an eye. Hell, I can remember what I was doing back in 1995 - a kid playing with this newfangled thing called “the Internet” that very few people understood, but that some visionaries had the foresight to realize was going to completely change the world. Let’s see some of the areas where Stoll got it absolutely wrong:

“The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.”

Pretty much every newspaper has some online presence, from the largest New York Times publisher to the smallest hometown O-A News. Every instrument of government is now connected to the Internet, and contacting my representatives is online, making it easier than ever for them to ignore me. He is correct that no CD-ROM will ever replace a teacher - although we don’t use CD-ROMs anymore. While all this technology is great, instruction will continue to be the domain of humans for the foreseeable future. But technology certainly makes instruction easier and more fun.

“Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet.”

Uh, sure. Amazon.com. Barnes and Noble.com. Kindle. Nook. iPad. I can buy wirelessly, over the air, anywhere I am.

“Then there’s cyberbusiness. We’re promised instant catalog shopping--just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month? Even if there were a trustworthy way to send money over the Internet--which there isn’t--the network is missing a most essential ingredient of capitalism: salespeople.”

Yup. All that has happened. Moreover, I’ve done almost all of it in just the last month! I buy online all the time. I haven’t bought an airline ticket any other way than online in years. Last weekend, when we went out to Melting Pot, I made our reservation online. And while stores are not yet obsolete, there are certain times of the year - Christmas - when I won’t go anywhere near a brick and mortar establishment. The crowds are terrible. Why should I, when I can do it all online and have it delivered to my door? And the best part? I don’t have to deal with pushy salespeople! I’m not a moron - I know what I want and I can use the gasp Internet to research!

“Computers and networks isolate us from one another. A network chat line is a limp substitute for meeting friends over coffee.”

I’ve heard this one for years. I have one word: Facebook. Right now, thanks to the Internet, I am more connected to the lives of those around me than at any point in my life. And while he is correct that it isn’t a substitute for human contact, my social circle is now larger than at any other time ever. It makes it easier to arrange that human contact.

Granted, we have the luxury of 20/20 hindsight, but when someone says something “won’t” happen in the future, you should always think of this. Just because it wasn’t there in February of 1995 doesn’t mean that engineers wouldn’t solve the problems and get there. The surprising thing is that it happened so fast!
Moreover, if the innovators of the 90s had listened to luddites like Stoll (and if you want irony: the book he wrote is, no shit, available at Amazon.com), we might not have had the complete information revolution that we’re still living through. So never let anyone tell you you can’t do something. Stick with it, and look forward to seeing egg on their face in 15 years.
Read More
Apple

Synchronized

When you work across multiple devices and multiple computers on a daily basis, keeping the information you expect to be there the same across all of them used to be a monstrous pain. This is where synchronization comes in. I have 3 “computers” I use every day: my iMac, my Macbook Pro, and my iPhone. On each of those computers, I have several programs that may need to access the same type of data. Bookmarks are synchronized using Xmarks. This allows me to sync them across Safari, Google Chrome and Firefox. And because the bookmarks are sync’d to Safari via a background process, I can use MobileMe to sync them to my iPhone. All this happens in the background, without me having to think about it. I just add a bookmark somewhere, and minutes later it’s reflected everywhere else. Email rules, accounts and signatures are synchronized via MobileMe and appear on all my computers and my iPhone. Contacts are sync’d via MobileMe and appear everywhere. Same with calendars, and calendars are the real win. I can make a calendar entry on my iPhone, and it’s instantly sync’d to my calendars on my laptop and desktop. I have some files and programs that I need access to everywhere; I sync those with MobileMe across all my devices via iDisk. I can access them anywhere, even on my iPhone. I even created a directory in there called “Scripts;” with a change to my bash path on my Macs, any scripts I write are sync’d too. And all this stuff happens more or less instantly and completely transparently to me. Via the Internet, and over the air for the iPhone. I don’t even have to plug anything in. It just happens. I can’t believe computers ever worked any other way, and there is no way I can do without it now. Xmarks is free. MobileMe is $99 a year, but totally worth it simply in the headache I save in not having to deal with disparate data spread over 3 devices.
Read More
Apache

MySQL-based Apache HTTP Authentication for Trac and Subversion

In working on a side project with a few friendly developers, we decided to set up a Subversion repository and a Trac bug and issue tracker. Both of these, in normal setups, rely on HTTP authentication. So, since we already had an authentication database as part of the project, my natural first thought was to find a way to authenticate both of these against our existing MySQL authentication database, rather than relying on Apache passwd files that would have to be updated separately. Surprisingly, this was more difficult than it sounded.

My first thought was to try mod_auth_mysql. However, from the front page, it looks as if this project has not been updated since 2005 and is likely not being actively maintained. Nonetheless, I gave it a shot and, surprisingly, got it mostly working against Apache 2.2.14. Notice I said “mostly.” It would authenticate about 50% of the time, while filling the Apache error logs with fun things like:

    [Sat Feb 13 11:11:27 2010] [error] [client -.-.-.-] MySQL ERROR: Lost connection to MySQL server at 'reading initial communication packet', system error: 0
    [Sat Feb 13 11:11:28 2010] [notice] child pid 19074 exit signal Segmentation fault (11)
    [Sat Feb 13 11:34:14 2010] [error] [client -.-.-.-] MySQL ERROR: Lost connection to MySQL server during query:
    [Sat Feb 13 11:34:15 2010] [error] [client -.-.-.-] MySQL ERROR: MySQL server has gone away:

Rather than tear into this and try to figure out why a 5-year-old auth module isn’t working against far newer code, and with very little to actually go on, I just concluded that it wasn’t compatible and looked for a different solution. That’s when I came across mod_authnz_external. If you’re not familiar with this module, what it allows you to do is authenticate against a program or script running on your system, therefore allowing you to auth against anything you want - a script talking to a database, PAM system logins, LDAP, pretty much anything you have access to. All you have to do is write the glue code. In pipe mode, mod_authnz_external uses the pwauth format: it passes the username and password to stdin, each terminated with a newline, and the script uses exit codes to tell Apache whether or not the login was valid. Knowing that, it’s pretty easy to write a little script to read the username and password, run a query, and return the result.

    #!/usr/bin/php
    <?php
    include "secure_prepend.php";
    include "database.php";

    $fp = fopen("php://stdin", "r");
    $username = stream_get_line($fp, 1024, "\n");
    $password = stream_get_line($fp, 1024, "\n");

    $sql = "select user_id from users where username='%s' and password='%s' and disabled=0";
    $sql = sprintf($sql, $db->escape_string($username), $db->escape_string($password));
    $user = $db->get_row($sql);
    if (!empty($user)) {
        exit(0);
    }
    exit(1);
    ?>

Then, you just hook this into your Apache config for Trac or Subversion:

    AddExternalAuth auth /path/to/authenticator/script
    SetExternalAuthMethod auth pipe

    <Location />
        DAV svn
        SVNPath /path/to/svn
        AuthName "SVN"
        AuthType Basic
        AuthBasicProvider external
        AuthExternal auth
        require valid-user
    </Location>

Restart, and it should all be working. Some may argue that the true “right” way to do this is LDAP. But with just three of us, LDAP is overkill, especially when we already have the rest of the database stuff in place. The big advantage to this, even over mod_auth_mysql, is the amount of processing you can do on login. You can run any number of queries in your authenticator script - rather than just one.
You can update a last-login or last-commit date, for instance. Or you can join tables for group checking; say you want someone to have access to Trac, but not Subversion. You can do that with this.
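If you want to sanity-check the authenticator without involving Apache at all, something like this works - a throwaway sketch that pipes credentials to the script pwauth-style and reads the exit code (the path and test credentials are placeholders):

    <?php
    // Feed "username\npassword\n" to the authenticator on stdin and
    // check its exit status: 0 means Apache would accept the login.
    $descriptors = array(0 => array("pipe", "r"));
    $proc = proc_open("/path/to/authenticator/script", $descriptors, $pipes);
    fwrite($pipes[0], "testuser\ntestpass\n");
    fclose($pipes[0]);
    $status = proc_close($proc);
    echo ($status === 0) ? "auth OK\n" : "auth failed\n";
    ?>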
Read More

2009

Conferences

OSCON 2009 Summary

I have to say that everything that didn’t involve air travel (I’ll go ALL into that later) was awesome about this trip. I had a good time and learned some useful things at OSCON, enjoyed good company, and had a good time exploring San Jose and the Bay Area in general. OSCON was good this year, but not as good as in years past. This may be due to the new location, which doesn’t seem as conducive to a conference like this as the Oregon Convention Center was. The OCC was round, and all the meeting rooms were clustered in a central area - there was never more than a short walk between panels. But the San Jose Convention Center is more of a traditional box design, with a single LONG hallway. This means that if you’re in J3 and have to go to B2, good luck, because it’s a 15 minute walk. For a conference like OSCON, this kind of sucks and absolutely kills the “community” feel of it. Also, like many things, it suffers from diminishing returns. Because a lot of this is stuff I’ve seen before, every year that I come, I have to work harder and harder to find something new. Three years ago, I was doing well to decide what not to learn about. So this may be my last OSCON for a few years, though I’m thinking of attending Velocity (held down the road at the Fairmont) next year. I did attend some interesting side panels, including one on home automation. I have some ideas that I’m sure will drive Sarah crazy.
Read More
Ramblings

We Live In The Future

The computer on my desk has one TERABYTE of space, and it’s almost half full - ten years ago I didn’t even have a gigabyte of space in my main machine. I carry a computer … in my pocket … that I can use to surf the web anywhere. No wires. And I can use my pocket computer to show pictures of our vacation half a world away to a friend over a dinner of seafood. We’re hundreds of miles from the coast. And I can use this same device to call around the world at any time. In a few months, I’m going to get on an airplane and fly from my home in Alabama to London. I’m going to FLY. Through the air. And I’m going to be there in a little over 10 hours. A hundred years ago, getting from London to New York took like two weeks via steamship, and then you had to travel by train and horse carriage. It could take a month or more to travel that distance, but I’m gonna do it in 10 hours. When I was a kid, our TV got 3 channels on an old 19” set that took minutes to warm up. My TV today has close to 500 channels, all of them perfectly clear and some of them in beautiful high-definition. Oh yeah, and it’s 42 inches across, thinner than a ream of paper is wide, and turns on almost instantly. I never go to a bank anymore. My paycheck is electronically deposited to a bank that has one physical location … in Texas, more than five hundred miles away. And if for some reason I do have to deposit a check, I can scan it in at my house and send it by computer to be instantly deposited. People, welcome to the future. We’re here. And it’s just going to get cooler.
Read More
Ramblings

RIP Michael Jackson

I grew up in the 80s, against the backdrop of Michael Jackson’s music. I remember my parents listening to Thriller, Billie Jean, and Beat It. In many ways, Michael Jackson defined music in the 80s.
Read More
Microsoft

Why Bing Sucks

So I see Microsoft is attempting to rebrand the old Windows Live Search as bing.com. The commercials on TV are advertising it as a different type of search engine - a “decision engine.” Yeah, when I heard that, I, too, wondered exactly what a “decision engine” was. But the commercials are clever and somewhat funny to anyone who has ever spent time searching through hundreds of results for a single missing piece. But where’s the meat?

My coworker Brian, a few weeks ago, provided a great example of how this claim of being a “decision engine” is kind of a joke. And it can be summed up in a single sentence: “How big is the sun?” Maybe now you’re confused about what I’m talking about. What does the sun have to do with search engines? Well, try plugging that sentence, word for word, into your favorite search engine. Out of curiosity, I ran this search on a number of top and up-and-coming engines to see what they returned.

Google is obviously the 900-pound gorilla in this space, so they’re a logical place to start. When you ask Google “How big is the Sun?”, Big Brother Google replies, right at the top, “Mass: 1.9891 x 10^30 kg (332,946 Earths),” with most of the results relevant to the question at hand. In fact, all but two of the results were directly relevant to the question asked.

Yahoo didn’t return a nice little piece of math like Google did, but all but one of the search results is directly relevant to the question asked. The only result that wasn’t relevant was that VH1 has some videos by a band called Big Sun, and that was towards the bottom of the SERP.

The newcomer Wolfram Alpha, which bills itself as a “knowledge engine,” gives you a simple result, 432,200 miles, along with a handy formula for conversion. Not a traditional search engine, but closer to a “decision engine” than Bing …

And finally, the “decision engine” Bing. So how does the vaunted “decision engine” handle knowing how big the sun is? It doesn’t. The first result is a garden furniture store in Austin, Texas. The second result is an equine product store in Florida. The third was pictures of the sun from the Boston Globe - okay, that one was close. The next results are a realty company in Florida and an athletic conference. Only then, six results down, do we get into the meat of the question.

Look, it’s easy to hate on Microsoft. It’s no challenge anymore. I, personally, am not exactly a fan of Microsoft, but I’m hardly an enemy either. At worst, I’m indifferent. And, as an aside, I really feel sorry for the poor guy they send to the OSCON keynote every year, who literally gets hammered for no good reason by what can only be described as nerd rage from the questioners. And yet every year, they come back with more money and more people. I almost posted an entry about it last year. It was really kind of sad to watch. Anyway, the point is, there are some things that Microsoft has done well. Office? Great productivity suite. Windows 7? From what I’ve seen, it looks pretty good. The XBOX and gaming units at Microsoft do gangbusters. But it just seems like they’re irrationally pursuing this search thing out of spite at this point, to the detriment of the rest of their business. Considering that bing doesn’t appear, on the surface, to be any different from Windows Live Search in terms of its usefulness (that is to say, not useful), Microsoft is throwing tons of money, in the form of development and marketing, at something that just isn’t very good, when they could be focusing on the core parts of their business.
But, then again, I’m not Ballmer.
Read More
Ramblings

Iran Elections ... or ... will the revolution be Twittered?

I’m sure many of you have been following what’s been happening in Iran, right? Or maybe you haven’t, because, as often happens with international events, the American media has dropped the ball in the most epic of fashions. And I’m talking Ed Scissum (God bless him) fumbling deep in Bama territory to give Auburn the win in the ’97 Iron Bowl levels of dropping the ball. It’s been that bad.
Read More
PHP

Drama? In My Developer Community?

… it’s more likely than you think! And here I thought drama was isolated to fandom mailing lists and MySpace!

I was not at php tek this year. I keep meaning to make it to that conference, but, let’s face it, the week before Memorial Day is a really lousy time to have a conference. I usually like to take that Friday off to make it a long weekend. I may finally make tek next year, though. But, even if I went, I don’t usually get invited to the cool parties. It’s really for the best, though. I usually end up drunk in a bar listening to good music, rather than trying to discuss functions and benchmarking after having imbibed a large quantity of booze, or making an ass out of myself by diving into bushes. Ask me about that some other time.

Apparently, at php tek, at one of these “cool-people-only” parties (okay, it was apparently an after-hours panel), a bunch of people cooked up this idea of having uniform PHP coding standards among their own projects, with the goal of having them adopted as some type of official standard. Now, in and of itself, this sounds like a good idea. Most other languages have at least suggested best practices (Sun’s coding conventions for Java or Apple’s for Cocoa come to mind), even if you don’t use them. Every job I’ve worked has had some standard, even if I had to write it. Most of them were derived from the PEAR standard, including what we do at dealnews. But hey, variety is the spice of life, right? What’s the harm in another choice? Nothing. So we’ve established that the idea of having a[nother] PHP coding standard is not necessarily bad. The problem, as with all things, is what happened next …

Somehow, they managed to get a closed mailing list on php.net. Think about that for just a second. This group, composed of some guys from some projects with no official relation to PHP other than being users of it, somehow ended up with [email protected]. WTF? I would love to know how that happened. More to the point, this will cause conceptual confusion among new, and even existing, users. When I first heard about this, my first thought was: hey, this is on PHP.net, right? It must have some kind of official recognition, right? Well, as far as I can tell, it doesn’t. It’s just … some guys. Put yourself in the shoes of a new PHP user, visiting PHP.net for all your manual needs. Oh, what’s this? Standards? Well, I better use those!

It was a suspiciously closed action for such an open-source project. The original mailing list was a closed list until Rasmus himself opened it, and the members don’t exactly seem keen on welcoming any input from anyone outside their little clique. Some of the things being said by the “PHP Standards Group,” quite frankly, make me very suspicious of their motives. Things like “All of us are too busy, both with real jobs and our various projects, to fight the battles that come of trying to make this a completely open process where anyone with an email address can contribute” reek of self-aggrandizing nonsense. I’m sorry, but that’s bullshit. Plain and simple. And the fact that no one else in the group has stood up to say otherwise speaks volumes. There’s a phenomenon I have seen occur on mailing lists called implicit acceptance: if you don’t stand up and say otherwise, you are implicitly agreeing with the stated course of action. So, if anyone in this group disagrees with the stated opinions, guys, now’s the time to man up.
If you’re going to have a mailing list on php.net, and call yourselves the “PHP Standards Group,” you need to welcome input from the PHP community - all of us - not just your group. Otherwise, you don’t need to be on php.net, and you don’t need to be calling yourselves the “PHP Standards Group.”

It is overly focused on OO. I know a lot of people think that objects are the answer to everything. I have strong disagreements, but I will save those for a later post. But (kind of tying into my previous point) there are a lot of people using PHP in a strictly functional way, or in a way that sanely mixes functional and object oriented programming. Any standard - if it’s going to be called a PHP standard - needs to take all widespread uses of PHP into account, and not just OO.

Now, as I said before, I’m not a “cool person.” I don’t have CVS commit access. I don’t have thousands of followers on Twitter or a cool blog (no offense to my five regular readers - you guys rule and I’ll buy you a round sometime!). I’m just some guy who’s been writing PHP for the last nine years or so. So, while it appears this “group” probably won’t care what I have to say anyways, here is my humble suggestion for a path forward: figure out the semantics. Notice that all this stuff we’re talking about is appearances and semantics. Nobody is discussing the actual proposals (as they have been made) so far, just the actions of the people involved. What exactly is this project trying to accomplish? Are you trying to write a standard for your project(s), or are you trying to produce something useful for the community? If this is just for your project(s), move it off php.net, call it something else (“The Shared Standards Working Group” or some other such nonsense), and do whatever the hell you want. But if you’re going to call yourselves the “PHP Standards Group,” and have your project on PHP.net, you have to welcome input from the community, even if you ultimately discard it.

The thing I don’t understand is why this group appears so afraid of public input. Okay, the noise can drown out the signal sometimes, sure. But for every ten, hundred or five hundred bogus suggestions you get, you may get one really good one. One you might not have thought of yourself, and one no one in your tight little circle might have seen. And this is the true power of any open-source project. I would urge the “PHP Standards Group” to overcome their fear of public input and let us - the users - have a voice in the community process. As always, this represents my own views only, and not those of my employer, the beer I’m drinking (Fat Tire Amber) or my cat.
Read More
Apache

PECL memcache and PHP on Mac OS X Leopard

Wow, has it really been that long since I’ve written here? I really need to do better. So tonight I ran into an interesting issue in configuring PECL memcache to run on my Macintosh. To give you a bit of background, I use the built-in copy of Apache, but with PHP (currently 5.2.8) compiled from source, since the version in Leopard is old and I needed some things that it didn’t provide. After that was installed with no problems, I went to the ext/memcache-3.0.4 directory to compile memcache like so:

    phpize
    ./configure
    make
    make install

Then I added it to php.ini as an extension and restarted Apache. But it didn’t work. The information returned from phpinfo() still indicated it had not been installed. So I checked the logs and found this little gem:

    PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php/extensions/no-debug-non-zts-20060613/memcache.so' - (null) in Unknown on line 0

Okay. WTF does that mean? While Googling around for an answer, I came across this page. According to it, this is an indication that the shared extension is causing a segmentation fault - a strong sign that you’ve likely compiled against the wrong architecture! Fortunately, there is a solution: force configure to use the right architecture.

    make clean
    MACOSX_DEPLOYMENT_TARGET=10.5 CFLAGS="-arch x86_64 -g -Os -pipe -no-cpp-precomp" CCFLAGS="-arch x86_64 -g -Os -pipe" CXXFLAGS="-arch x86_64 -g -Os -pipe" LDFLAGS="-arch x86_64 -bind_at_load" ./configure
    make
    make install

Now restart Apache. You should have working memcache!
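As a quick sanity check beyond phpinfo(), you can have PHP itself confirm the extension loads and can talk to a running memcached instance. This little sketch assumes a default local install on port 11211; adjust the host and port to your setup:

    <?php
    // Bail out early if the extension still isn't loading.
    if (!extension_loaded("memcache")) {
        die("memcache extension not loaded\n");
    }

    // Round-trip a value through a local memcached instance.
    $mc = new Memcache();
    $mc->connect("localhost", 11211);
    $mc->set("test_key", "hello", 0, 30); // no flags, 30 second TTL
    echo $mc->get("test_key"), "\n";      // should print "hello"
    ?>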
Read More

2008

Security

Automatically Expiring Passwords

I do a little bit of work for a friend on the side every now and then. He has a small online store set up with a credit card processor to handle payment processing. Every so often, if he hasn’t gotten orders in a few days, he gets a bit antsy and asks me to log in and check to be sure no orders have gotten through without him getting an alert. Dutifully, I do this, as it usually only takes me about 30 seconds to make sure everything is working - as it always is.

However, a few months ago when I tried to log into his virtual terminal account, I was treated to an ominous warning, informing me that my password had “expired” and asking me to enter my password again, as well as select a new password. I had never seen that before, so I checked to make sure I was logging into the right site and had not somehow managed to fall for a phishing attack. Sure enough, my password had “expired.” Hmm. This is lame. Maybe I’ll try to be smart with it and reenter the same password … nope. It’s smart. Can’t get around it. Because I had other things to do and this was already wasting my time, I conceded defeat and created a new password for logging onto this site. Then, I did the unthinkable. Something that would make any security researcher, and probably the “designer” of this system, cringe in horror: I wrote the password down in a text file. Now anyone who manages to steal my laptop could potentially have access to this (of course, the file is encrypted with the original password, so there is that).

Fast forward a few months. Another e-mail, another log into the virtual merchant terminal to check its status, another “password expired” message. Ah hah! Maybe I can set it back to what it used to be. No dice. It remembers all my old passwords. Every 45 days, I have to make and learn a new password for this website, which is a monumental pain since I usually only look at it about that often. I make another new password, and update my file. More of my time wasted. After 90 days with this processor, I have now had three passwords.

Now, I know how to create an encrypted file. But think about the users. The people using this are not computer experts. They are small businesses. Let’s say Bob at Bob’s Sunglasses has this account. But Bob doesn’t want to spend all day logged into his merchant processor account. Bob has sunglasses to look at! So, he gives the login information to his secretary Susan and tells her to process and fill orders as they come in. After 45 days, Susan gets a warning message one morning about changing her password. After spending an hour on the phone with tech support, she is able to figure out how to change the password. Then, she does exactly what I did: she writes it down. Only she writes it down on a yellow post-it note, along with the user name and account number (“just in case,” she says to herself), and sticks it right on the side of the monitor for everyone to see.

Automatically expiring passwords, from a security perspective, is an extremely bad idea because it encourages unsafe behavior with passwords. While theoretically it sounds like a great idea, it perversely encourages users to write passwords down - the last thing you want them to be doing - and just makes it all the more difficult for them to use your product. A better approach is to encourage or require users to have secure passwords in the first place, and to foster proper care for passwords.
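For what it’s worth, here’s a minimal sketch of what “require secure passwords in the first place” might look like in PHP. The length and character rules here are arbitrary examples I made up for illustration, not a recommendation:

    <?php
    // Hypothetical strength check run once, at password creation time,
    // instead of expiring passwords on a timer. Rules are illustrative.
    function is_strong_password($password) {
        return strlen($password) >= 10           // long enough
            && preg_match('/[A-Z]/', $password)  // an uppercase letter
            && preg_match('/[a-z]/', $password)  // a lowercase letter
            && preg_match('/[0-9]/', $password); // a digit
    }

    var_dump(is_strong_password("password1"));     // bool(false)
    var_dump(is_strong_password("Tr1ckier-Pass")); // bool(true)
    ?>

Check it once, let the user keep the password they chose, and they have a fighting chance of actually remembering it.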
Read More
PHP

PHP 4 End of Life

For many, today marks the beginning of the Olympics. All eyes are on Beijing. But, 08/08/2008 also has some significance that many, unless they are web application developers, may miss. Today is the official End Of Life for PHP 4.
Read More
Ramblings

Digital Scrolls

I’ve written before about my curiosity as to whether or not the period we are living in will be well documented. So much of our lives are digital these days, and so much information has already been lost. I can look back at my own digital history and see how much of my own information has disappeared. I have this big box of floppies - there are like 200 floppies in this box. Yesterday, on a whim, I picked up a USB 3.5” floppy drive and started going through some of the old disks in the box. I want to get rid of them, because I probably haven’t looked at them in 8 or 9 years. A lot of the data had already degraded to the point of being unreadable - these disks have moved with me many times and have not lived in an environment conducive to data preservation. Many simply refused to mount, and much of what did mount was riddled with data errors.
Read More
Conferences

Usability in Everyday Life

As software engineers (especially ones who work on user-facing interfaces), we are taught to think about usability. Many of us are not good at it - including me (though I’m making a conscious effort to get better and “think more like a user”). Large companies, on the whole, have mastered this because they can spend huge amounts of money on research and focus groups to study what people want and how they interact with their software. Apple is a master at this. And this is why the GIMP is terrible to use when compared to Adobe Photoshop. Oh, sure, the program itself is perfectly capable, but the interface was clearly designed by an engineer and not a graphic designer. The other approach is, of course, to separate the engineers from the UI design people. In a company the size of Apple or Adobe, I’m sure this is probably what they do. But small to midsize companies simply can’t afford to, and even if they could, somewhere along the line some engineer has to interface with the front end code. But thinking about the “user experience” is not just a programming concern - any industry that deals with people who are not fluent in that industry can benefit from trying to “think more like them.” The hotel I’m staying in for OSCON here in Portland, the Doubletree, is a good example of this. When you exit the elevator on the fifth floor, there is the standard sign saying that rooms 500-520 are to the right and 521-541 are to the left. The room numbers are not on the doors - they are on small plaques next to each door. But the plaques don’t face a uniform direction: some face the way you walk from the elevator, and some, strangely, face the opposite direction, so they will never be seen unless someone is walking opposite the way they would normally walk when looking for a room. Think about this for just a second. The time when those plaques are needed most is when someone is first finding their room, and they will almost always be coming from the elevator. After that, you usually remember, generally, where it is. In order to see half of the signs on the floor, you have to turn around and look behind you as you are walking. To add to this, think about how you would normally look for a room in a hotel. Do you go all the way to the end of the hallway? No - you probably stop about 10-15 feet from the end if you determine that your room is not one of the remaining ones. So unless you are paying careful attention to the plaques on the wall, there is a chance that you will never see your room. This is the reason I spent ten minutes walking up and down the hall trying to find my room: it was at the very end of the hall, with a plaque that was only visible if you were walking in the opposite direction. Now, it’s not like this breaks my entire world. I found my room, put my stuff down, and went out for a beer. But when looked at through the lens of usability, which software engineers are very familiar with, it could certainly use improvement. I’m sure the design makes perfect sense to the building architect and to all the people who work in the hotel. But to a guest, it makes little sense and means extra time spent looking for their room.
Read More
Apache

Search Engine Friendly URLs with mod_rewrite

By now, I’m sure we all know about search engine friendly (SEF) URLs - that is, URLs that are able to be traversed by a search spider. Spiders don’t like to see a bunch of stuff on the query string (file.html?blah=foo), but do like standard URL patterns like /file/foo.html. Not to mention that it’s a lot easier to read. But what happens when you need to do something more complicated - say, rewrite using different types of conditions with optional arguments? Say, for instance, I have a script that takes arguments like this:

```
/file.php?id=1[&view=1]
```

And I want to rewrite it to look like this:

```
/file/(id).html[?view=1]
```

In this case, the view argument is optional and could relate to any number of unique cases, such as internal viewing or refcode tracking, for instance. Well, your first thought might be something like this:

```
RewriteCond %{REQUEST_URI} ^/file/\d+\.html [OR]
RewriteCond %{REQUEST_URI} ^/file/\d+\.html(.*)
RewriteRule ^/file/(\d+)\.html(.*) /file.php?id=$1&$2 [L]
```

But it doesn’t work. This is because the query string isn’t part of the URI available for the rule to match. But mod_rewrite, being the cool Swiss Army knife it is, lets you get around this by back-referencing the condition. Using the % operator instead of the $ allows you to reference parenthesized expressions in the condition, like so:

```
RewriteCond %{REQUEST_URI} ^/file/\d+\.html
RewriteCond %{QUERY_STRING} (.+)
RewriteRule ^/file/(\d+)\.html(.*) /file/file.php?id=$1&%1 [L]

RewriteCond %{REQUEST_URI} ^/file/\d+\.html
RewriteRule ^/file/(\d+)\.html /file/file.php?id=$1 [L]
```

It’s described here in the docs. I thought this was a pretty cool solution to a problem that had been vexing me.
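One thing that makes this kind of rule-juggling much less painful is mod_rewrite’s own logging. A quick debugging sketch (these are the Apache 2.2-era directives; the log path is just an example, and you’ll want this turned off in production):

```
RewriteEngine On
RewriteLog "/var/log/apache2/rewrite.log"
RewriteLogLevel 3
```

With the level turned up, the log shows each condition and rule as it is tested, which makes it obvious when the query string isn’t matching where you think it is.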
Read More
Linux

Diffing files via FTP

I ran into a situation today where I needed to diff files on a remote server against the ones on a local server, and the only way I had to connect to the remote server was FTP. So I wrote a quick and dirty little script to diff files over FTP. It’s stupid simple - it downloads the remote file and runs diff on it against a local file, outputting the result. It’s great for finding changes on a webhost that cripples real developers by only offering FTP. It’s also a great companion to ftpsync, which apes some of the functionality of rsync, again on crippled webhosts. The command format is:

```
ftpdiff <local file> <username:password@host:/path/to/file>
```
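For the curious, here is a minimal sketch of how a tool like this can work (an illustration of the idea, not the original script): fetch the remote file over FTP, then shell out to diff against the local copy.

```php
#!/usr/bin/php
<?php
// Sketch of an ftpdiff-style tool (assumed implementation).
// Usage: ftpdiff.php <local file> <username:password@host:/path/to/file>

if ($argc != 3) {
    fwrite(STDERR, "Usage: ftpdiff.php <local> <user:pass@host:/path>\n");
    exit(1);
}

$local  = $argv[1];
$remote = $argv[2];

// Convert user:pass@host:/path into an ftp:// URL for PHP's FTP stream
// wrapper (the last ':' separates the host from the path).
$pos = strrpos($remote, ':');
$url = "ftp://" . substr($remote, 0, $pos) . substr($remote, $pos + 1);

$contents = file_get_contents($url);
if ($contents === false) {
    fwrite(STDERR, "Could not fetch $url\n");
    exit(1);
}

// Stash the remote copy in a temp file and let diff do the real work.
$tmp = tempnam(sys_get_temp_dir(), "ftpdiff");
file_put_contents($tmp, $contents);
passthru("diff " . escapeshellarg($local) . " " . escapeshellarg($tmp));
unlink($tmp);
?>
```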
Read More
PHP

Security on Shared Hosts

Shared hosts are a reality for many small businesses, or businesses that aren’t oriented around moving massive amounts of data. This is a given - we can’t all afford racks full of dedicated servers. With that in mind, I would urge people to be more careful about what they do on shared hosting accounts. You should assume that anything you do is being watched. Take, for example, the /tmp directory. I was doing some work for a friend this weekend whose account is housed on the servers of a certain very large hosting company. While tweaking some of his scripts, I noticed via phpinfo() that sessions were file-based and were being stored in /tmp. This made me curious as to whether any of that session data could possibly be available for public viewing. My first move was to simply try FTP’ing up and CD’ing to the /tmp directory. No go - they have the FTP accounts chrooted into a jail, so the obvious door is closed. However, the accounts have PHP installed, so I can do something like this in a PHP script:

```php
<?php system("ls -al /tmp"); ?>
```

With this little bit of code, I can look into the /tmp directory even though my FTP login is chrooted. Fortunately, sessions on this host are 600, so they’re not publicly readable - this was my primary concern and the reason I took some time to check this out. But people are putting lots of things into the /tmp directory with the misguided idea that it is their private temporary file dump, including one idiot who put a month’s worth of PayPal transaction data into /tmp and left it 644, publicly viewable. Now, I’m a nice guy, and the only thing I’m going to do with this information is laugh at it. But keeping in mind how dirt cheap hosting accounts are, there’s not a high entry barrier for someone with fewer scruples. The key thing to remember is that, if you need temporary file storage on a shared host, do it someplace less obvious, set the permissions so that only you can read/write it (600), and clean up by deleting files as soon as you possibly can.
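To make that advice concrete, here is a minimal sketch of the pattern (the directory path is just an example; anything inside your own account and unreadable by other users will do):

```php
<?php
// A sketch: use a private, user-only scratch directory instead of /tmp.
// The path below is an example; put it somewhere inside your own account.
$dir = "/home/myaccount/.scratch";

if (!is_dir($dir)) {
    mkdir($dir, 0700);          // only our user can enter or list it
}

$sensitiveData = "example payload";

$file = $dir . "/export-" . uniqid() . ".dat";
file_put_contents($file, $sensitiveData);
chmod($file, 0600);             // readable and writable only by us

// ... work with the file ...

unlink($file);                  // clean up as soon as you're done
?>
```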
Read More
Facebook

Facebook Errors

My Facebook news feed hasn’t updated since May 15th - a span of four days in which I know many of my friends have posted or at the very least updated their status. With 50-something friends, I know for sure that some of them are updating - my feed just isn’t reflecting it. So, after Googling about (Facebook’s site, for the record, is extremely unclear about contacting the company and/or reporting bugs), I found a bug report form. Great! A place to file a report. So I type in my report and submit … D’oh. Apparently, I’m not the only one having this issue, either. C’mon guys, get it together! At least let us users know what’s going on.
Read More
Apple

Installing PECL PS on Mac OS X

The PHP that comes standard with Mac OS X Leopard doesn’t come with the PECL PS extension. PECL PS requires pslib, and the last version of pslib I verified to work with the PS extension is 0.2.6 (I still have an outstanding bug for that). There’s a minor little bug that prevents it from compiling on OS X, so here are the steps necessary to get PECL PS working on Leopard:

1. Download PSLib 0.2.6 and unpack it somewhere on your filesystem (I use /usr/src).
2. cd pslib-0.2.6/src
3. Apply this patch to pslib.c: patch pslib.c leopard_pslib-0.2.6.patch
4. cd ../
5. ./configure && make && make install

By default this puts it in /usr/local/lib. Now install the PS extension using PECL:

```
pecl install ps
```

When it asks for the path to the pslib installation, answer /usr/local/lib. Once it’s done compiling, add the .so to your php.ini (you may have to move the .so or alter extension_dir in your php.ini), then restart Apache:

```
sudo apachectl restart
```
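Once it loads, a quick way to verify the extension actually works is to generate a trivial document with it. A minimal smoke-test sketch (standard PECL PS calls; the output path, page size, and text are arbitrary):

```php
<?php
// Smoke test for the PS extension: draw one line of text into /tmp/test.ps.
$ps = ps_new();
ps_open_file($ps, "/tmp/test.ps");
ps_begin_page($ps, 612, 792);            // US Letter, in points

$font = ps_findfont($ps, "Helvetica", "", 0);
ps_setfont($ps, $font, 24.0);
ps_set_text_pos($ps, 72, 720);
ps_show($ps, "PECL PS is working!");

ps_end_page($ps);
ps_close($ps);
ps_delete($ps);
?>
```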
Read More
Linux

ngrep and memcache

You can use the Linux command ngrep to “watch” what is going into and coming out of memcache. ngrep is an amazingly useful tool for troubleshooting a wide array of network issues; I previously have used it extensively for troubleshooting SIP errors. In this case, I’m using it to be sure memcache sessions in PHP are actually working.

```
codelemur ~ # ngrep -d lo port 11211
interface: lo (127.0.0.0/255.0.0.0)
filter: (ip) and ( port 11211 )
####
T 127.0.0.1:60912 -> 127.0.0.1:11211 [AP]
  get a804f5517468d4696c60da7eaf8a7179..
##
T 127.0.0.1:11211 -> 127.0.0.1:60912 [AP]
  VALUE a804f5517468d4696c60da7eaf8a7179 0 16..test|s:4:"test";..END..
##
T 127.0.0.1:60912 -> 127.0.0.1:11211 [AP]
  set a804f5517468d4696c60da7eaf8a7179 0 1440 16..test|s:4:"test";..
#
T 127.0.0.1:11211 -> 127.0.0.1:60912 [AP]
  STORED..
```

It doesn’t help too much if you have multiple memcache servers (which is kinda the point of memcache), and since it’s raw data you can’t inspect the packets if they’re compressed, but in a testing environment, it’s a great way to be sure all things are kosher.
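If there’s a lot of traffic on the wire, ngrep’s match expression (the first non-option argument, a regular expression matched against the packet payload) can narrow the output to just the operations or keys you care about. For example (the interface and key here are illustrative):

```
# Only show get/set commands...
ngrep -d lo "^(get|set) " port 11211

# ...or only traffic mentioning one particular session ID.
ngrep -d lo a804f5517468d4696c60da7eaf8a7179 port 11211
```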
Read More
Linux

Ubuntu 8.04: My Thoughts

Every so often I get the urge to check out desktop Linux - just to see how things have progressed and whether or not it is in a usable state yet. For the last few times, my distro of choice has been Ubuntu, as it seems to be the de facto starting point for a desktop distro. Before beginning this review, let me first say that desktop distros have come a long way over the last few years, and Ubuntu is by far the most usable of the ones I’ve seen. Ubuntu itself has come a long way and is quite usable for someone who is willing to compromise on some points and spend some time tweaking things. Having said that, it still has a ways to go before reaching Windows. And it’s not even in the same league as Mac OS X. First, a little about my test rig: an AMD Athlon64 3700+ with 2 gigabytes of memory, two 250GB SATA hard drives (one for Windows, one for whatever OS I’m testing at the time), and dual GeForce 7600 GSs driving three 19” Samsung LCDs. Not your standard setup, mind you, but not ultra advanced and bleeding edge, either. The installation: The installation is much the same as previous releases of Ubuntu: load up the live CD and, from within the live environment, launch the installer. The installer itself asks fewer questions than the Windows XP installer, yet seems to be able to do more. And it doesn’t require endless reboots to get everything working. My installation proceeded mostly okay (Windows resides on sda, so I installed Ubuntu on sdb), except that after I installed and rebooted … nothing. It kept booting into Windows. I reinstalled just to be sure I hadn’t blitzed through the boot record screen, but sure enough, writing to the MBR on sda doesn’t work when you have two SATA drives and you’re installing Ubuntu on sdb. This has been a bug for at least the last two times I’ve tried to install Ubuntu. I could fix it with grub commands and properly write a boot record to sda, but for the purposes of testing (and because I’m lazy and wanted to play with it) I just plugged sdb in directly and removed sda. So I’m up and running. This is something that would befuddle a lot of folks - to be fair, I’ve had problems with Windows in the past too - but it seems like it would be an easy fix. So I have Ubuntu installed now. Yay. Next step is to get my three LCDs working. This is where we run into what I think is the biggest hindrance to desktop Linux: X. If I plug three monitors into two video cards on a Mac, it’s going to turn on all three monitors and allow me to drag things between them all effortlessly (one big desktop). If I plug them into Windows, I’ll need to download the drivers, but after that, no problems. Not so in X, though in fairness this is likely due more to the intransigence of Nvidia when it comes to providing open source support. First, if you want to do anything, you have to download a “Restricted” driver. This is Ubuntu-speak for “we didn’t want to compromise our oh-so-precious ‘free’ principles in the name of usability” (in case you can’t tell, I have very little patience for zealotry). In Ubuntu 8.04, the Restricted Drivers Manager has been poorly renamed to Hardware Drivers. That doesn’t make a lot of sense, since a driver for hardware may or may not be restricted. So, I download and install the Nvidia drivers. Next, fire up the nvidia-settings utility to fix the X config. I was running this from the shell, but I later discovered that it puts a nice menu item in Administration for you.
It sees all my cards and, using this, I am able to configure everything. You have multiple options for driving three monitors, but only one works: Xinerama. You could do three separate X screens, but you can’t move windows between them. You could do Twinview on one screen and a separate X screen but, again, you couldn’t move windows between the dual screen and the third monitor, the windows on the Twinview screen don’t maximize and minimize properly, and the login screen lands right in the middle of the two monitors so that it’s very difficult to see what you’re typing when you log in. Only Xinerama lets you move windows between the three monitors, allows them to maximize properly, and puts the login on a single screen. This was about an hour of changing settings and restarting X before I got it right. The downside? Xinerama still isn’t supported by Compiz, which is a real bummer because a compositing window manager was one of the things I was really looking forward to using. Anybody know if Compiz accepts bounties? Because I really want this feature. So no Compiz. Oh well. Next, get my other hardware working. I have a Logitech MX1000 Laser (greatest mouse ever, by the way), and I like to map the buttons to do various things (most notably, I use the “cruise” buttons to go back and forth on web pages). In order to get this to work:

```
sudo apt-get install xserver-xorg-input-evdev
cat /proc/bus/input/devices   # find "Logitech USB Receiver"
sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
sudo gedit /etc/X11/xorg.conf
```

Changes:

```
Section "InputDevice"
    Identifier "Configured Mouse"
    Driver     "evdev"
    Option     "CorePointer"
    Option     "Name" "Logitech USB Receiver"  # the device name found above
EndSection
```

Then:

```
sudo apt-get install xvkbd xbindkeys
gedit ~/.xbindkeysrc
```

Changes:

```
/usr/bin/xvkbd -xsendevent -text "\[Alt_L]\[Left]"
  m:0x0 + b:12
/usr/bin/xvkbd -xsendevent -text "\[Alt_L]\[Right]"
  m:0x0 + b:11
```

After restarting (yes, again) I have working buttons. Yay. The volume control on my Microsoft Natural Ergonomic 4000 works now; it seems like this required some hacking last time around. Yay. Now to install some developer tools so I can get to work. I love Synaptic; I wish Mac OS X had real package management the way Linux does - it’s one of the things Linux really has going for it, though I generally prefer Gentoo’s Portage. So I install Eclipse. Huge package, and I was getting really crappy download speeds, so I let it run all night and went to bed. The next day found Eclipse installed and ready to go. I installed PHP, SVN, and Apache, so I now have the tools to work. My conclusions: I like Linux. I really do. I want to see Linux succeed on the desktop. And Ubuntu has gone further, faster than any other Linux distro. It is now by far the most fit and ready to use of any desktop Linux distro. I have a usable system now and, theoretically, there is nothing stopping me from using this machine for most of my daily work. Having said that, there is a lot to be said for style. First of all, it’s ugly as sin. The Gnome UI, while much improved, is still terrible compared to Windows and OS X. Also, who thought that brown was a good color for a UI? Second, the names of some of the tools are unintuitive: “Hardware Drivers,” “SCIM Input Method Detection,” “Authorizations,” and others need more intuitive names, and once you open any of them, the layout is not really intuitive either.
The initial screen layout, with a menu at the top and a taskbar at the bottom, is also not all that usable, though it can be corrected by removing the top panel. I’m using it now (typing this in Drivel), so it is usable - but it still can’t displace my Mac for ease of use.
Read More
Conferences

Live from OSCON

I will again be attending OSCON this year in Portland, Oregon on July 21st - 25th. Come and say hello!
Read More
Apple

Automatically Joining a Group Chat with Adium

At dealnews, we have an internal Jabber server that we use for our internal communications. As part of that, we have a number of internal chat rooms for the various areas of the company. I’m a big believer in automation - that is, scripting the various repetitive actions I have to do every so often. One of these little things is joining our developer chat channel each morning when I get to the office. Unfortunately, there’s no built-in way in Adium to do this, nor does Adium expose native AppleScript commands to join a group chat. It does for other functions, but group chat functionality is conspicuously absent, even though there’s a long-standing feature request to implement it. So, we have to hack it. In this case, I used AppleScript to imitate keyboard input:

```
set CR to ASCII character 13
tell application "System Events"
    tell application "Adium" to activate
    keystroke "j" using {command down, shift down}
    keystroke "development"
    keystroke CR
end tell
```

So we have a script, but how do we automate launching it? I mentioned MarcoPolo before. It has quickly become one of my favorite pieces of Mac software. In this case, I use MarcoPolo to launch the AppleScript (with a 10-second delay to allow time for Adium to start and connect to the Jabber service). You can launch AppleScripts using the osascript utility like so:

```
/usr/bin/osascript /Users/codelemur/Scripts/DevChat_AutoJoin.scpt
```

It sucks that it’s like this, and I wish they would expose a more native way to do this, but it does work.
Read More
Linux

Gentoo

I’ve been a happy Gentoo user for the last few years. There’s so much to like about it: built from source with only what you need, and Portage beats the pants off RPM, among many other reasons. But lately, I’ve been getting a little annoyed with it. My annoyance has to do with the releases … or lack thereof. And the communication about said “delays” … or lack thereof. There used to be four Gentoo releases a year. A few years ago, they went to two releases a year. Last year, they completely skipped the 2007.1 release. Now, we’re three months into 2008, and the 2008.0 release, which was supposed to be released to the public as stable on March 17th, hasn’t even been seeded to mirrors for public beta yet. 2007.0 is still the official stable release of Gentoo - a release that is more than a year old at this point. This wouldn’t be a big deal if I didn’t really need an updated live CD to do installs with. I have new machines with an onboard SATA controller that isn’t supported by the kernel in the 2007.0 release, but is supported by the 2.6.23 kernel that was in the Gentoo sources at the time. I was at an impasse, unable to install Gentoo on my equipment, until I got around it by compiling my own updated kernel and rolling my own live CD. But I wouldn’t have had to do that if the Gentoo release team could at least come close to hitting their release schedule. I’m not asking for the universe - just get within the same month as the schedule says and we’ll call it good. There’s also been disturbingly little communication about the reasoning behind these “delays.” There was one post to the site about the 2007.1 release being cancelled. There’s been no communication on the site whatsoever about the delay with 2008.0. The items on the front page right now talk about the monthly newsletter and some new trustees of the Gentoo Foundation. I know it’s free software and I shouldn’t complain, but for those of us who make our living using Gentoo, it’s a bit annoying to say the least. You won’t need trustees of a foundation if there’s no foundation … because everyone has gone somewhere else, since the distro is updated less often than a phone book comes out.
Read More

Cybersquatting Annoyance

I’m getting ready to launch a new open source project, and, as everyone knows, you can’t do that without a cool sounding name. :P I’ve picked out about six cool sounding names, and I’ve been looking them up on GoDaddy to see if I could go ahead and register the domain name. And wouldn’t you know, all of them are already taken. Now, this wouldn’t irritate me so much if there were actual content on the sites. But every single one I looked up is squatted by a link farm. I am literally 0-6 right now.

> Girls are like internet domain names, the ones I like are already taken. well, you can stil get one from a strange country :-P

- [bash.org](http://www.bash.org/?369)
Read More
Music

Matthew Ebel

This weekend in Atlanta, I had the chance to hear an extraordinarily talented musician, and I want to give him major props for one of the best concerts I have seen in a long time. Matthew Ebel (you can buy/listen to his stuff on iTunes too) has a sound somewhere between Ben Folds and Billy Joel. If you like piano rock, or are just looking for something good to listen to, I highly suggest you check him out. I already bought all three of his albums.
Read More
Linux

PHP, PostScript and ATM Fonts

Recently, I’ve been experimenting with PHP’s PS functions - the PECL extension that allows you to output PostScript directly from your scripts. There are other projects that come to mind (html2ps is another one that will render to PostScript), but I wanted something more tightly integrated into my script. Mysteriously, when I went to install my scripts on the new Poweredge I bought, I began to get these strange errors:

```
ps_findfont() [function.ps-findfont]: PSlib warning: Trying to insert the glyph '.notdef' which already exists. Please check your afm file for duplicate glyph names.
```

I couldn’t understand what was going on - it was working fine on the previous server. After Googling about the web and wracking my brain for about two hours, I checked the versions of PSlib installed on the two servers. Both were masked by Gentoo’s Portage system, but the unmasked version on the previous server was 0.2.6, whereas the one on the new server was 0.4.1. After I masked out 0.4.1 (thanks to Gentoo’s awesome package.mask) and downgraded back to 0.2.6, everything began working again. So there you have it. Apparently the PECL PS extension is not completely compatible with the most recent version of PSlib, and downgrading seems to work. Hope this helps somebody!
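For the curious, the masking itself is a one-liner. A sketch (the dev-libs/pslib atom is my assumption for the package’s category; check with emerge --search pslib first):

```
# Mask PSlib 0.4.x so Portage falls back to the 0.2.x series.
echo ">=dev-libs/pslib-0.4.0" >> /etc/portage/package.mask
emerge --oneshot pslib
```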
Read More
DIY

DIY 19-inch Rolling Rack

After my debacle with the 1U servers I bought (see my previous post), I went by a local technology recycling center and picked up a couple of off-lease Dell Poweredge 1750s. It’s what I should have done in the first place. Anyways, I decided a few weeks ago that I wanted to mount these servers in a rack. I wanted it to be mobile and easy to move, as moving is something I have been very familiar with over the last few years. After not finding what I wanted anywhere, I was able to find rack rail at zZounds (a music store that I’ve ordered guitar stuff from before). So I decided to do it myself. The first step was to understand the measurements of a 19” rack. Originally designed to hold railroad signal switching relays, 19” rack measurements are specified by EIA-310-D. The strips from RaXXess are standard rack rail at 0.625” in width. They are mounted 19” apart measured from the outside of the rails, giving a distance of 17.75” between the inside edges. The depth isn’t specified, so I decided to make mine 30” deep. After that, it’s just cutting! It took four 10 ft 2x4s and a sheet of plywood. The pictures below will explain the process of building this thing better than I can in words. The first step was measuring and cutting. This was actually the most tedious part of the whole project - getting the measurements right. As Dad always said: measure twice, cut once! I cut four 24.5” pieces, four 22” pieces, and four 36” pieces. Here’s the completed frame, with a Dell Poweredge 1750 in it to test and be sure that I had the measurements right. The rails are mounted on the inside toward the back of the frame to give the server faces some protection. Closeup of the server in the frame. Adding the top and bottom pieces now. Mostly complete - by now you can see what I’m aiming for. And here’s the finished product! I added plywood sides and casters to roll it around. The total cost was about $100. The most expensive items were the rails (which came in at about $50 shipped) and the casters (which were $20 for four, locally from Harbor Freight). After that, it took me about four hours to cut and put everything together. It’s not quite finished yet - I want to add doors to the front and back to ease transport a little, as well as handles on the sides to make it easier to lift in and out of a truck. I haven’t put the servers in it yet - I’m waiting for rails for the servers, since they didn’t have any where I bought them. I’m also thinking about slapping a coat of paint on it to make it look a bit better. Otherwise, it’s pretty sweet!
Read More
Ramblings

Angry Rob is Angry

… or, beware of deals that look too good to be true. In my professional career, I have found only two things that have a 100% failure rate. The first was a batch of Digium TDM-400P FXO/FXS cards. Every single one we deployed from that batch at my previous employer failed. I hear they don’t have those problems anymore - using a different fab shop now, I guess. But I still don’t like that card, for that specific reason. The second 100% failure rate came just this evening. The culprit is this little POS: a Dual Xeon 2.4GHz 2GB ECC 120GB 1U Rack Mount Server sold by Geeks.com. Look, it’s a 1U for $375. I’m not expecting the universe out of these things. With that in mind, let me document the last two days of my life. I ordered two of these little guys about a week ago, and they arrived on Tuesday. I intended to turn one into a general purpose test and development box, and one was going to go to Atlanta to replace the 1U Celeron in my friends’ data center. So I get the machines home, unpack them, and try to boot. The first one won’t POST. No beep, no video, just a bright orange HD light glowing in surrender. Research tells me that the motherboard is fried. The other one booted up fine. I figured I was just unlucky, so I RMA’d the first one today and was going to put the OS on the second. Well, the OS install went fine, but when it came time to reboot … presto. The exact same thing as the first. No video, no beep, orange HD light. Of the two machines ordered, both failed within 48 hours, and both in the exact same way. So now I’m out at least $60 in RMA shipping charges - and I have no servers - just because this company apparently has no QA. So take my experience as an example of what not to do when ordering a server. A good deal can turn into a major headache incredibly fast. Me? I’m ordering Dells from now on.
Read More
Apple

Something In The Air

… or maybe the water. Unless you were living under an Internet rock, you likely know that today was Keynote Tuesday - that is, the day Apple CEO Steve Jobs tells us loyal Apple fanbois what we will be spending our money on this year. The star of this year’s show was the Macbook Air, a thin, light laptop designed to fit somewhere in between the Macbook and the Macbook Pro. At first I was wowed by the Air. Jobs, as always, is the consummate showman, and I will admit that I bought into the reality distortion field for a little bit. Then the “air” cleared and I began to think about what the Macbook Air really is. So let’s take a look at the Macbook Air and where it fits.

- Maximum thickness of 0.76”. The Macbook is a quarter inch or so thicker at 1.08”.
- Weight of 3 lbs. The Macbook is slightly heavier at 5 lbs.
- Battery life is slightly longer at 5 hours; the Macbooks average between 3-4 in my experience. However, the battery is not removable, whereas I could carry several Macbook batteries with me.
- For $1200 more, you can get a solid state drive.
- 2GB of memory - and only 2GB of memory. The Macbook comes in at 1GB standard, but can be upgraded to 4GB.

In my opinion, these are the areas where the Air wins. Now, let’s look at where it loses.

- A 1.6GHz / 1.8GHz Core 2 Duo. The Macbook slides in at between 2.0 and 2.2GHz.
- Storage is an 80GB 4200rpm PATA drive, whereas the Macbook boasts an 80GB 5400rpm SATA drive. Granted, you can get a 64GB SSD with the Air, but at $1200 I can’t believe that anyone other than the biggest fanboi will be buying one at that price.
- The Macbook can be upgraded to as much as 4GB of memory. The Air is stuck at 2GB and, since the memory is soldered onto the board, it’s stuck there forever.
- One USB plug? No onboard Ethernet or FireWire? No mic plug? No optical drive? Granted, you can buy an external drive, and you can use that boot-from-another-computer feature, but that doesn’t help you if you have no other computer.

Now, Brian Moon often tells me that I don’t think from the point of view of an average user because I’m not an average user. While it’s true that I’m not your average user (as a computing professional, I have needs generally beyond most consumer computing gear), I like to think that I can look at all the choices and pick the best one. In this case I just can’t understand who this product is targeted at. I don’t understand how anyone could want to trade away all the features you get with the regular ol’ Macbook for what is essentially a small gain in dimensions and weight, plus the “wow!” factor - especially when the top-end Macbook, with all those added features, comes in at $300 less. At that price, you could upgrade the memory and buy an extra battery and still come in under the base price of the Macbook Air, with the only tradeoff being that it’s 0.32” thicker and 2 lbs heavier. I can’t believe that any informed consumer is going to choose a feature-poor Macbook Air when the standard Macbook, at between $300 and $750 less, is just so obviously a better deal. Brian Tiemann said it best: “a ridiculously overpriced, feature-poor, and generally useless pig of an idea.” Also, I wonder if Steve Jobs knew Randy Newman was going to go all Michael Moore on everyone. Someone please be sure he never sees a microphone again!
Read More
Apple

Four Free Mac Apps I Can't Live Without

I know top X lists are almost passé at this point, but that’s not going to stop me from giving a shout-out to some of the applications that make my life easier every day.

MarcoPolo. MarcoPolo is a neat little application that is capable of executing actions based on a set of rules. That is, if something on the system changes (such as an IP address, power status, USB devices, or even the light level), it can execute a series of commands (such as mounting network drives, setting the screensaver, changing the default printer, etc). It can even run arbitrary shell scripts! **Why this is useful to me:** At dealnews, we (the dev team) all use MacBook Pros for our development work and constantly alternate between home and office. Whenever I arrive at work in the morning, the minute I plug my MacBook into the network, MarcoPolo senses that the IP address has changed from my home address and changes the default printer, mounts some network shares, adjusts the screensaver settings, and runs a few other custom shell scripts I have to set up my environment. All without my having to do a single thing. When I get home, it executes still more commands to change to a remote development environment. Completely effortless.

XMeeting. XMeeting is a SIP softphone (and videoconferencing application, but I’ve never used the video features) that allows you to connect to a SIP server and place calls using your laptop. **Why this is useful to me:** At dealnews, we run Asterisk as our phone system (see my earlier posts on Asterisk). One of the many nice features of Asterisk is its standards compatibility - that is, you can use anything that can talk SIP with Asterisk. Since CounterPath has apparently decided that Leopard compatibility for their free softphone (X-Lite) is not a priority, XMeeting comes to the rescue. As a bonus, it actually acts like a Mac application and doesn’t do the stupid things that X-Lite did (like messing with the system volume).

Quicksilver. Quicksilver is the single application I cannot live without. On a Mac without it, I am almost lost. More than just a launcher, it is a tool to help you work more efficiently: press Ctrl+Space, type what you want, and Quicksilver will launch what you need. And that’s a horribly inadequate description of how cool this app is. **Why this is useful to me:** Quicksilver makes it incredibly fast to move around your Mac without taking your hands off the keyboard. A quick hit of Ctrl+Space gives you the ability to launch programs, open files, navigate contacts and send emails, and make quick notes, among many other things this program can do. It is essential to my everyday life as a Mac user.

DejaMenu. DejaMenu is a neat little program that will display the current application’s main menu as a popup menu wherever the mouse is when a key combination is pressed. **Why this is useful to me:** I use my MacBook Pro with a second monitor when I’m at the office. One of the things that has infuriated me for a while as a Mac user with multiple monitors is the inability to have the top menu bar either on each monitor, representing the application on that monitor, or moving to whatever monitor the mouse is on. It’s irritating to have to go back to the main monitor when the application is running on a different one. DejaMenu allows you to pop the application menu up wherever your mouse is, which makes things a little easier. Additionally, I mapped the key combination to a button on my Logitech MX1000 to make things even easier.
Read More
News

Announcement

While in general I only use this blog for discussing programming, computers and my life as an engineer in dot-com, it’s only natural that, every now and then, a personal post will slip in. And this is as good a reason as any. On December 26th, I asked my girlfriend of almost three years to marry me. She said yes!
Read More

2007

Microsoft

Benchmarking Vista and XP: Apples and Oranges?

This article posted to CNET got me thinking. In the article, they talk about vaguely defined “benchmarks” showing that Windows XP with the beta of Service Pack 3 outperformed Windows Vista with Service Pack 1. I can only say one thing: duh. Quite frankly, I would have been more surprised if Vista had outperformed XP. This really is an apples-and-oranges comparison, because Vista is a newer and more complex operating system. And I’m not exactly a Microsoft fanboy, either - I’m typing this on a Mac, using a Java journal client. Of course Vista is going to run slower on the same equipment than an operating system that was released six years ago. I’m sure Windows 98SE would beat the pants off XP on the same equipment, too. Leopard, released a few months ago, won’t even run on hardware from when OS X first came out, and it will almost certainly run slower than Tiger on machines that were top of the line when Tiger was released. Hey, while we’re at it, we could compare Doom to UT3 to see which runs faster! If they wanted a fairer comparison, they would have compared the operating systems on different machines - each top of the line when its operating system was released - using adjusted benchmarks. Given that machines are much faster now than they were in 2001, I wager the difference between them would be a lot smaller.
Read More
Apple

Set Leopard's Menu Bar Back To White

There’s been a good bit of debate about Leopard’s new translucent menu bar. For me, it doesn’t cause many issues. However, some of my coworkers despise it and, to be fair, I can see the arguments of many of the people who dislike it: it doesn’t add anything to the OS and actually makes the menu text more difficult to read. Well, here’s a little tweak that will set the menu bar back to a white background. In the terminal, use the following command to change the default appearance of the bar:

```
sudo defaults write /System/Library/LaunchDaemons/com.apple.WindowServer 'EnvironmentVariables' -dict 'CI_NO_BACKGROUND_IMAGE' 1
```

Restart your Mac, and voila! White menu bar! Changed your mind? Set it back:

```
sudo defaults delete /System/Library/LaunchDaemons/com.apple.WindowServer 'EnvironmentVariables'
```

Restart your Mac and your menu bar is back to being translucent.
Read More
Blogroll

PHP/MySQL in Huntsville/North Alabama

Just to let everyone know, we’re trying to get a little meetup.com group going for developers interested in PHP and MySQL in the Huntsville and North Alabama regions. I’ll be attending, and I know Brian will be giving mini-talks for the first few meetings. You can visit Brian Moon’s blog for more information.
Read More
Python

Controlling iTunes with Python ... Cross Platform

So it’s been a while since I’ve written. In that time, my girlfriend has moved in with me here in Huntsville and, as always, dealnews has kept me very busy. However, that has not prevented me from occasionally trying my hand at something new. A week or so ago I decided that I was going to learn Python. However, as is my nature, I simply can’t “learn” a language without having a purpose. For instance, I have never been able to simply read a book on programming - I need a reason. So I’ve been giving myself reasons to do little tasks here and there in Python. One of them came to me just today. I recently moved all of my development at dealnews from the PC to a Macbook. I’ve never been an OS bigot - always use the right tool for the job, and the Mac - which in many ways is just Unix with pretty make-up - is the perfect platform. However, I still use many of the peripherals I purchased for my PC, including the Microsoft Natural Ergonomic Keyboard that I adore. At home, I still use a PC (until I can afford a new Mac Pro), albeit with the same keyboard. One of the things I really love about the keyboard is that it has various buttons that are just … buttons. They can be mapped to do anything you want. There are five multi-function buttons at the top that can be mapped to run programs. So I’m sitting here thinking, “self” (because that is what I call myself), “why not write a little program that runs on the click of one of those buttons and goes to the next or previous track in iTunes, so that changing the music doesn’t take any more effort out of my busy programming day than hitting an additional keystroke?” But it must work both at home and at work, meaning it must run on both Windows and Mac. Enter Python. I knew from previous experimenting in .NET that iTunes exposes a COM object on Windows. With that in mind, I quickly found this page, which described almost exactly what I wanted to do in Windows. That left the Macintosh. After an hour or so of digging on Apple’s website, I found this page describing how to do the equivalent on the Mac (via the ScriptingBridge) - and wouldn’t you know, the function names are slightly different. After that, it was pretty easy:

```python
import sys
from optparse import OptionParser

platform = sys.platform

if platform == "win32":
    import win32com.client
    iTunes = win32com.client.gencache.EnsureDispatch("iTunes.Application")

if platform == "darwin":
    from Foundation import *
    from ScriptingBridge import *
    iTunes = SBApplication.applicationWithBundleIdentifier_("com.apple.iTunes")

def previousTrack():
    if platform == "win32":
        iTunes.PreviousTrack()
    if platform == "darwin":
        iTunes.previousTrack()

def nextTrack():
    if platform == "win32":
        iTunes.NextTrack()
    if platform == "darwin":
        iTunes.nextTrack()

def main():
    parser = OptionParser()
    parser.add_option("-n", "--next-track", action="store_true", dest="next")
    parser.add_option("-p", "--prev-track", action="store_true", dest="prev")
    (options, args) = parser.parse_args()

    if options.next == True:
        nextTrack()
    if options.prev == True:
        previousTrack()

if __name__ == "__main__":
    main()
```

So yeah. It’s kind of code-monkeyed together, but not bad for someone who’s only been doing Python for a week in the evenings. Passing either -n or -p to the script causes it to command iTunes to go forward or back. Of note: to work on Windows, it needs the COM components from the Python for Windows extensions. I’m going to expand this script some more in the future, but for now it does what I need.
Read More
PHP

PHP Templating Celebrity Deathmatch!

Ladies and gentlemen! Welcome to the PHP Templating Celebrity Deathmatch! I actually do like the idea behind templating. I know there are varying arguments about whether or not templating is appropriate for PHP, though those are not the focus of this entry. The big idea behind templating is separation of concerns - that is, breaking a program into parts that are easily manageable and don’t overlap in functionality. In an ideal world, templating would provide the added advantage of allowing a programmer to be a programmer and not a web designer - and allowing a web designer to be a web designer and not a programmer - by keeping the logic underlying the presentation layer to a minimum. However, I have never found this to be true in any project I’ve worked on in my professional career. One of the big benefits, as far as I see, is that it makes code much easier to read. This may not be true for everyone, but I would much rather be confronted with smooth, separated, templated code than a jumbled PHP mess. It’s easier to read and far, far easier to adapt and change. While I was attending OSCON a few weeks ago, I heard mention of a new PHP templating engine that was written in C and compiled into a native PHP extension. This would make it much, much faster than anything written in PHP itself - in theory. This project, called Blitz, was making some pretty grand claims on its website, so I wanted to put them to the test - at least a small timing test. In this test, I am going to be comparing Smarty (the most widely used PHP templating engine and an official PHP project), Blitz (a new templating engine under very active development, native compiled as a PHP extension), and standard PHP includes. For the purposes of this test, I wrote a quick timing function that uses microtime() to record how much time has elapsed between each call of mark_time(). The code is available in the accompanying project (a minimal sketch of such a helper also appears at the end of this post).

A Note About The Tests: These are not meant to be exhaustive tests by any means. They are just designed to give you a 5,000-foot overview of the current state of PHP templating. They only evaluate page generation time, and not other metrics such as CPU load, IO load, or memory usage. Furthermore, I selected three scenarios that I have commonly used in templating; there may be some scenarios I haven’t tested where one method outperforms the others. And, as with any benchmarking, the results are dependent on my system - YMMV.

Test 1: Instantiation. This is a simple test that determines how much time it takes to power up the templating engine and get it loaded into memory for PHP to use. For this test, we will just be comparing Smarty and Blitz, as there is no instantiation step for a standard PHP include. We’ll start with Smarty first.

smarty_instantiation.php:

```php
<?php
echo mark_time()."<br>";
include "Smarty.class.php";
$smarty = new Smarty;
echo mark_time()."<br>";
?>
```

Smarty’s instantiation time was 0.0058109760284424, or 0.005 seconds in human terms.

blitz_instantiation.php:

```php
<?php
echo mark_time()."<br>";
$blitz = new Blitz;
echo mark_time()."<br>";
?>
```

Blitz’s instantiation time was 3.0994415283203E-5, or 0.00003 seconds in human terms. It may not seem like a big difference, but this is one area where having Blitz as a PHP extension makes a huge difference over Smarty being written in PHP and included. Because PHP must traverse the include_path to find Smarty.class.php before including it, PHP is slowed down before it can even instantiate the Smarty object.
To be fair, I decided to run a second test with the include outside the timing marks.

smarty_instantiation2.php:

```php
<?php
echo mark_time()."<br>";
$smarty = new Smarty;
echo mark_time()."<br>";
?>
```

Even without having to search the include_path for Smarty, it still took 6.5088272094727E-5, or 0.00007 seconds, to instantiate the Smarty object - almost twice as long as it took to instantiate the Blitz object. And this is not a realistic scenario in any way: there is no way PHP can have skipped the include and still have access to the Smarty class. Winner: Blitz.

Test 2: Simple Template Rendering. In this test, we will be comparing simple template rendering in Blitz, Smarty, and PHP includes. We will create a simple HTML template with two variables that need to be replaced, then render and display it using each engine or, in the case of PHP, straight PHP. So, let’s get started! We’ll run Blitz first, since it won the previous test.

blitz_simple_render.php:

```php
<?php
echo mark_time()."<br>";
$blitz = new Blitz('blitz_simple_render.tpl');
echo $blitz->parse(array(
    'title' => "Blitz Test!",
    'body'  => "Blah foo! I'm a body!"
));
echo mark_time()."<br>";
?>
```

Blitz took an impressive 0.00011801719665527, or 0.0001 seconds, to render a simple HTML document with two replaces. Smarty’s next:

smarty_simple_render.php:

```php
<?php
echo mark_time()."<br>";
include "Smarty.class.php";
$smarty = new Smarty;
$smarty->assign('title', "Smarty Test!");
$smarty->assign('body', "Blah foo! I'm a body!");
$smarty->display('smarty_simple_render.tpl');
echo mark_time()."<br>";
?>
```

Because Smarty is a compiling engine (it compiles the templates to PHP and caches them), the first run is always the most costly - in this case, an atrocious 0.058284997940063, or 0.06 seconds. Even on subsequent runs it took 0.0065691471099854, or 0.007 seconds - again much slower than Blitz. Finally, standard PHP includes:

php_simple_render.php:

```php
<?php
echo mark_time()."<br>";
$title = "PHP Test!";
$body  = "Blah foo! I'm a body!";
include "php_simple_render.tpl.php";
echo mark_time()."<br>";
?>
```

Surprisingly, standard PHP includes took 0.00030016899108887, or 0.0003 seconds - much faster than Smarty, but three times as slow as Blitz. Once again, this likely has to do with PHP having to traverse the include_path before finding the appropriate file. If you specify the *absolute path on the filesystem* to the file above, the time taken becomes 0.00010490417480469, or 0.0001 seconds, roughly equal to Blitz on any given run. However, because Blitz parses the template with a minimum of fuss, whereas I have to explicitly specify the filesystem path for PHP to get equal performance, this round also goes to Blitz. Winner: Blitz.

Test 3: Complex Templating. This test includes three template-based includes, one foreach loop over an array, and a large array of generated data. For the curious, the generation of the data is not counted in the timing. We have generated a 10,000-item array and are going to have each engine iterate over it.

blitz_complex_render.php:

```php
<?php
echo mark_time()."<br>";
$blitz = new Blitz('blitz_complex_render.tpl');
foreach ($arr as $array) {
    $blitz->block('master_loop', array(
        'id'  => $array['id'],
        'id1' => $array['id+1']
    ));
}
echo $blitz->parse(array(
    'title' => "Blitz Complex Render text"
));
echo mark_time()."<br>";
?>
```

Blitz ran the test in 0.072134971618652, or 0.07 seconds - not too shabby, considering it had to iterate over a 10,000-item multidimensional array.
smarty_complex_render.php:

```php
<?php
echo mark_time()."<br>";
include "Smarty.class.php";
$smarty = new Smarty();
$smarty->assign('title', "Smarty Complex Render test");
$smarty->assign('arr', $arr);
$smarty->display('smarty_complex_render.tpl');
echo mark_time()."<br>";
?>
```

Again, because Smarty is a compiling engine, the first run is always the most expensive - in this case, a whopping 0.31642484664917, or 0.3 seconds. Subsequent runs fell in the range of 0.099456838607788, or 0.1 seconds - three times as fast as the first run, but still slower than Blitz. Finally, standard PHP includes:

php_complex_render.php:

```php
<?php
echo mark_time()."<br>";
include "php_complex_render.tpl.php";
echo mark_time()."<br>";
?>
```

In this test, raw PHP includes came in at 0.055343866348267, or 0.06 seconds - the fastest of all, though just a small bit faster than Blitz. Winner: PHP.

Conclusion: Blitz won two of the three tests and came in a close second in the last. Of course, one could argue that PHP “won” the first test, since it had no instantiation to be tested. Considering the short amount of time Blitz has been under active development, its sheer speed is rather amazing. From a templating standpoint, Blitz is the fastest unless you are willing to jump through lots of little hoops to make standard PHP includes work for you, and even then, total page generation time is roughly equal, with native PHP having perhaps a slight advantage. Unfortunately, the very strength of Blitz (being written in C and compiled into a PHP extension) is also its greatest weakness. Because so many websites are served off shared hosts, where users cannot install extensions, most of the community will never be able to take advantage of Blitz. Only those with access to the machine - or more specifically, the php.ini file - can use it, unless it were to be merged into the PHP tree. Even in the best case, considering how many shared hosts are still running PHP 4, I wouldn’t expect to see anything like that soon, if ever. Perversely, the very weakness of Smarty (that it is written in PHP and included) is its strength, for the same reasons. Smarty is the slowest templating engine tested, but because it is just PHP, it can be included and run like any other PHP script - meaning all the people on shared hosting can use it with a minimum of fuss. And in Smarty’s defense, it has many features (such as template variable modifiers) that are simply not available in Blitz. Those features come with the tradeoff of a massive loss in speed - it was honestly surprising to me how slow it was. Ultimately, it is the programmer’s decision. If you want the advantages of templating - separation of concerns and ease of maintenance - and you have the ability to install extensions, Blitz is probably a good choice. If you still want those advantages and are willing to trade away a lot of speed, Smarty is a possibility too. If sheer speed is your primary concern and you’re not willing to make any tradeoffs, raw PHP is probably your best option, provided you fine-tune it a bit to get the absolute best performance out of it.
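As promised above, here is a minimal sketch of what a mark_time()-style helper might look like. This is my own reconstruction for illustration - the original lives in the accompanying project:

```php
<?php
// A sketch of a mark_time()-style timer (assumed implementation, not the
// original). Each call returns the seconds elapsed since the previous call.
function mark_time() {
    static $last = null;
    $now = microtime(true);              // float seconds since the epoch
    $elapsed = ($last === null) ? 0.0 : $now - $last;
    $last = $now;
    return $elapsed;
}
?>
```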
Read More
Asterisk

AGI + PHP: Using PHP to route phone calls!

Hello there! I figure that if I’m going to start using this blog to post the wanderings and wonderings of a mid-level engineer at a dot-com company (I work at dealnews to be specific, and I guess I should include the standard disclosure that my employer does not endose or support anything that I say/do here), perhaps I should give some substance to my first post. So, I figure I would write a post on something I have plenty of experience with: PHP. But what to write about? Surely, there must be ten million PHP tutorials on the ‘net and I don’t need to add to the noise already out there as to what are/aren’t the best practices using PHP, so I thought about using PHP in some lesser known areas. And here is one lesser known, but very cool area: you can use PHP to route phone calls! At a previous employer, I worked with Asterisk as a software development consultant. My primary role was to build web interfaces to Asterisk (and other telecom hardware) backends, though while working as a consultant I learned quite a bit about extending Asterisk to do crazy cool things. “It’s Just Software!” Asterisk is an open-source software PBX that was created by Mark Spencer (an Auburn grad and now CEO at digium). It is quickly becoming a challenger in the PBX market (fact: we use it at dealnews), and an entire industry has sprung up around Asterisk and open-source IP telephony. For the purposes of this tutorial, I’m going to assume that you already have Asterisk installed and configured to your liking, and are now wishing to extend it beyond what it is capable of doing with the builtin dialplan applications. If this is not a good assumption in your case, may I highly suggest the Asterisk Tutorial at voip-info.org, or even better, the O’Reilly Asterisk book, which is a little dated but still quite relevant to most beginner-level stuff. Meet AGI, CGI’s hard-working cousin: AGI, or the Asterisk Gateway Interface, is the key to extending Asterisk beyond what it is capable of doing on its own. AGI gives Asterisk the ability to run and interact with scripts and programs outside of Asterisk. AGIs can be written in any language that can be executed on a Linux system (and there have been AGIs written in PHP, Python, Perl, C, Bash and just about every other language out there). Since PHP is my language of choice, that is what I’m going to concentrate on in this tutorial. Asterisk AGIs are actually incredibly simple creatures. When run from within the Asterisk dialplan, they simply send commands to Asterisk using standard output and read the results on standard input. Its what happens between those that is really, really cool. Enough Talk! Code or GTFO! So, let’s get started! First, you need to set up your script environment. I recommend doing this in an include-able file so that you can reuse it in future AGIs. There are a few commands you need to know about: <?php // This turns on implicit flushing, meaning PHP will flush the buffer after // every output call. This is necessary to make sure that AGI scripts get their // instructions to Asterisk as soon as possible, rather than buffering until // script termination. ob_implicit_flush(true); // This sets the maximum execution time for the AGI script. I usually like to // keep this set low (6 seconds), because the script should complete pretty // quickly and the last thing we want it to do is hang a call because the script // is churning. set_time_limit(6); //This sets a custom error handler function. We'll get back to this later. 
set_error_handler("error"); //This creates a standard in that can be used by our script. $in = fopen("php://stdin","r"); //This creates an access to standard error, for debugging. $stdlog = fopen("php://stderr", "w"); ?> Okay, that’s not too bad! Now, we’re going to do a little more advanced stuff. Every time an AGI script executes, Asterisk passes a number (about 20) values to the script. These AGI headers take the form of “key: value”, one per line separated with a line feed (\n), concluding with a blank line. Before we can do this, we need to write a few functions to read from AGI input, write to Asterisk, Execute commands, and write to the Asterisk CLI. These are the functions I use: <?php function read() { global $in, $debug, $stdlog; $input = str_replace("\n", "", fgets($in, 4096)); if ($debug){ fputs($stdlog, "read: $input\n"); } return $input; } ?> So what are we doing here? Well, the first line, we strip out the line feed in each chunk we get from stdin. Then, we check to see if $debug is set and, if so, echo what we read to standard error. Finally, we return the line we just read. Pretty simple, right? Well, this little funtion will save you lots of time. Next, we need a way to write data: <?php function write($line) { global $debug, $stdlog; if ($debug) { fputs($stdlog, "write: $line\n"); } echo $line."\n"; } ?> This function is even more simple: it just writes out to standard error if $debug is on, and outputs whatever was sent to it with an additional new line. This next function, however, is more complex. <?php function execute($command) { global $in, $out, $debug, $stdlog; write($command); $data = fgets($in, 4096); if (preg_match("/^([0-9]{1,3}) (.*)/", $data, $matches)) { if (preg_match('/^result=([0-9a-zA-Z]*)( ?\((.*)\))?$/', $matches[2], $match)) { $arr['code'] = $matches[1]; $arr['result'] = $match[1]; if (isset($match[3]) && $match[3]) { $arr['data'] = $match[3]; } if($debug) { fputs($stdlog, "CODE: " . $arr['code'] . " \n"); fputs($stdlog, "result: " . $arr['result'] . " \n"); fputs($stdlog, "result: " . $arr['data'] . " \n"); fflush($stdlog); } return $arr; } else return 0; } else return -1; } ?> Woah, complex! Well, not really. execute() is the swiss army knife of AGI programming: it allows you to do interactive stuff inside this AGI script. First, as you can see, it calls the write() function we just wrote, writing an AGI command to Asterisk. Then it looks for a response on standard in. A response from Asterisk takes the form of “result=<result> <data>”. So, we use preg_match to get this out for us and put it into something usable. We do the debug output again, then return the array or 0 or -1 in the event of failures. Just two more functions to go: <?php function verbose($str,$level=0) { $str=addslashes($str); execute("VERBOSE \"$str\" $level"); } function error($errno,$errst,$errfile,$errline) { verbose("AGI ERROR: $errfile, on line $errline: $errst"); } ?> As you can see, these two functions are very simple. One gives verbose output to the Asterisk CLI, and the other is the error function we declared using set_error_handler above. Back to reading in variables. Now that we have the ability to read in, let’s read in the default variables that are passed to the script by Asterisk. 
Back to reading in variables. Now that we have the ability to read, let’s read in the default variables that Asterisk passes to the script. We do this using the following code chunk:

<?php

while ($env = read()) {
    $s = split(": ", $env);
    $key = str_replace("agi_", "", $s[0]);
    $value = trim($s[1]);
    $_AGI[$key] = $value;

    if ($debug) {
        fputs($stdlog, "Registered AGI variable $key as $value.\n");
    }

    if (($env == "") || ($env == "\n")) {
        break;
    }
}

?>

This creates an $_AGI associative array (in the spirit of $_POST, $_GET, etc.) containing all the items Asterisk passed in. For each read() line, we first split it to get the key and the value (this could probably be done better with a regular expression, but I got a copy of some AGI code from a friend and modified it many moons ago, before I began using regular expressions). Then we strip out the “agi_” prefix that Asterisk adds to the key, because it is superfluous, and trim the spaces and other garbage from the value before adding the pair to the array. Note that debug output inside this loop goes straight to standard error rather than through verbose(): sending an AGI command before the headers have been fully consumed would put Asterisk’s response on standard input, where this loop would mistake it for another header.

Putting It All Together

Congratulations! You now have all the tools necessary to write an AGI! I suggest (as above) putting those functions in an include so you can reuse them as necessary. So what next? Now you write an AGI script! Let’s start with a simple example:

#!/usr/bin/php
<?php

include "agi.php";
execute("SAY DATETIME #");

?>

That simple! Of course, all this AGI does is read the date and time to the caller and exit, but it shows that AGIs can do really powerful things really simply.

“Calling” Your AGI

So now you have this AGI written and you want to use it, but you don’t know how. Well, this is pretty easy too! AGIs should be placed in whatever directory you define for “astagidir” in your asterisk.conf file. Unless you changed it, this will be /var/lib/asterisk/agi-bin. Next, be sure that the file can be executed by setting the executable bit (“chmod +x <filename>”). You may also have to fiddle with the permissions: the asterisk user or group needs the ability to read and execute the script. Then you just call it from your dialplan, like so:

exten => 1000,1,AGI(<filename>)

Now, after an “extensions reload” of course, you should be able to dial 1000 and watch your AGI spring into action!

A More Complex Example

This is an AGI I wrote at dealnews when someone in the office requested the ability to assign custom names to caller IDs and have it work on all phones. Keep in mind that this is only half of the solution (the other half is a web interface).

#!/usr/bin/php
<?php

include "agi.php";

$db = mysql_connect('redacted', 'redacted', 'redacted');
if (!$db) {
    verbose("Could not connect to DB!");
}

if (!mysql_select_db('redacted', $db)) {
    verbose("Could not use DB!");
}

$res = mysql_query(sprintf(
    "select substitution_name from cid_substitution where from_number = '%s'",
    mysql_real_escape_string($_AGI['callerid'])
));
$result = mysql_fetch_row($res);

if ($result) {
    execute(sprintf("SET CALLERID \"%s <%s>\"", $result[0], $_AGI['callerid']));
}

mysql_close($db);

?>

This demonstrates one of the main advantages of using AGIs, and PHP in particular: the ability to easily interact with databases. In this program, I’m using the caller ID supplied by the carrier to fetch a corresponding name from a database and send it back along with the call. Routing calls is accomplished by calling the EXEC function with DIAL, giving you the ability, with a little work, to route calls based on the database. Pretty neat for a language thought of as only for web coding. Indeed, there is a large list of commands that AGIs can use, and variables passed into them, available here.
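As a sketch of how that database-driven routing might look — this is hypothetical code with made-up table and column names, not the actual dealnews dialplan, and note that Dial’s argument separator is “|” in Asterisk 1.x but “,” in later versions:

#!/usr/bin/php
<?php

include "agi.php";

// Hypothetical lookup: map the dialed extension to an outbound channel,
// e.g. "SIP/myprovider/2565551212".
$db = mysql_connect('redacted', 'redacted', 'redacted');
mysql_select_db('redacted', $db);

$res = mysql_query(sprintf(
    "select channel from routes where extension = '%s'",
    mysql_real_escape_string($_AGI['extension'])
));
$route = mysql_fetch_row($res);
mysql_close($db);

if ($route) {
    // EXEC runs a dialplan application from inside the AGI;
    // "|30" gives Dial a 30-second timeout.
    execute(sprintf("EXEC Dial \"%s|30\"", $route[0]));
} else {
    verbose("No route found for extension " . $_AGI['extension']);
}

?>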
Help! It Doesn’t Work!

Relax! Problems happen from time to time. One of the most common faults is forgetting to set the +x bit on the file to make it executable. Permissions problems are also relatively common.

For More Information

voip-info.org, a.k.a. “the wiki,” is the major information repository for Asterisk knowledge specifically, and IP telephony in general.
Read More
Ramblings

Virginia Tech

You know, I’m only a little over two years removed from college. I still remember what living in and around a college campus, in a college town, is like. Hell, I really miss it - I miss the hell out of Auburn. I miss the community feel: biking to campus, taking classes, hanging out with friends, going to bars, and just the general feel of the area.
Read More

2006

Ramblings

Swept Away

It seems so fitting, and yet I didn’t even realize I had done it. And now that I realize it, I’m a bit sad. As of yesterday, I’ve been out of college for two years. It seems fitting, then, that yesterday I finally cut the last remaining tie I had to Auburn and gave up the 334 cellphone number I’ve had for six years in favor of a more functional 256 Huntsville number. Yeah, it’s just a number, but it’s still a little sad to me. Hell, I don’t even pay bills to the University anymore, most of my friends have moved on or graduated, and I’m going on two years in Huntsville, but that number was the last reminder of college and of not having responsibilities. On the plus side, I did get a slick new Motorola Razr, though I feel like I’m Will Smith in Men In Black and I’m gonna break this thing. I hope it’s better than the LG it is replacing.
Read More
Ramblings

James Kim

So, for those who have either been following the James Kim saga or have been forced to because it was every other item on digg, you likely know he has been found dead. We won’t know a final cause of death until an autopsy is performed, but I have no doubt that it will show he died of hypothermia and exposure to freezing temperatures. This is going to sound insensitive, but I’m going to say it because it needs to be said: if there is a poster child for having done every possible thing wrong in trying to survive in an emergency situation, James Kim is it. I’m sorry that he died, but he went into a situation unprepared and once there made the absolute wrong choices. I just hope everyone else learns from his mistakes.
Read More
Auburn

Going Back to Auburn

I went down to Auburn for the game yesterday. Had a great time; Auburn let the game be more interesting than it should have been, but took care of things in the second half to cruise to a 38-7 win over Buffalo. Except for a brief visit in 2005, this is the first time I’d spent any significant amount of time in Auburn since I moved to Huntsville. It’s only been two years. It might as well have been twenty, because I hardly recognized the place. Roads are closed, new buildings are being constructed, and lots of activity is taking place. There are two giant buildings downtown that weren’t even in sight when I was there. Everything has changed so much.

It felt strange, walking around Auburn. I saw four wonderful years of my life staring back at me as though I had walked away from something unfinished. Almost like there’s some studying that needs to be done or a party to go to. As I walked around campus, in spite of how much had changed, I noticed how much had stayed the same. I saw a black bike parked outside Cary Hall and a freshman cursing because he has an 8 PM biology lab and is missing Babylon 5. As I walked down towards the Extension - my dorm complex for my first two years at Auburn - I walked past a very familiar parking space and made note of all the changes. On one side of the complex is a brand new building that wasn’t even there when I lived there - it was a parking lot. The Village Kitchen - the place I ate so many meals - is now gone as well. But I only saw that for a second. Then I looked closer and saw a sophomore struggling to carry his laundry and books to the laundry room so that he could study while he waited for the dryers that never seemed to work quite right. As I stood in the stadium, I could almost feel the junior within me, two of his fraternity brothers beside him, drinking smuggled-in alcohol and talking at length about what Coach Tuberville was doing wrong at the half.

During my time at Auburn, I was a frequent poster on the computer message boards of the school newspaper, the Auburn Plainsman. I remember one particular thread in which we were discussing, as we so often did, the endless administrative corruption we were so fond of finding. We all saw the ghosts in the cupboard and then congratulated ourselves on being smart enough to see them. The topic turned to the perceived lack of alumni involvement in anything other than athletics, and I remember saying then that “having a piece of paper entitles you to only care about football.” And as I walked around Auburn yesterday, I came to understand how completely wrong I was. It’s not that we as alumni don’t care about our alma mater. It’s not a lack of caring, but a different perception. We don’t see the problems that students see because we don’t see Auburn as the current students see it. We see Auburn as it was for us. We see Auburn through the wide eyes of a freshman trying to find a room in Haley Center with only five minutes until class. We see Auburn as hanging out with friends in Foy, or band parties at fraternity houses, or late-night study sessions and trips to coffee shops. We know the bars as they were for us (The Blue Room, Finks (before it was whatever it is now, and before it was Tigris), etc.). Our memories have glossed over any problems we faced to leave only the perfect image of four wonderful years. We see Auburn as a football game with friends on a warm autumn eve under a sky of orange and blue.

I still miss college, and I miss Auburn. But more, I miss the Auburn that was for me. Maybe that’s why it hurts when I go back and see how much has changed. I have this image in my heart of Auburn as it was when I drove down in August of 2000, and yesterday I saw how far it has moved past that image. Auburn is moving on without me. And it hurts that, no matter how much I want to, I can never go back. “The arrow of time points in one direction only.”
Read More
Ramblings

Submitted For Your Consideration...

Looking back over the historical record, one thing becomes clear: it is what is tangible that has defined our view of history. We can go back all the way to the pyramids and tombs of Egypt and read the hieroglyphs on the walls. The edifices themselves tell us stories of their builders. The Greeks and Romans produced copious amounts of literature for us to consume, and their structures still stand as a testament to the collective genius of their civilizations. Is it possible, then, that we could be living in one of the worst-documented times in human history? A time that future historians, thousands of years from now, will regard as a “dark period” because of the lack of any real record of the era? Let it be said that more literature is being produced than ever before. Mass printing has completely changed the dynamics; now almost anyone can publish almost anything with ease. Modern construction methods have rendered the craft of the ancient stonemasons trivial: what once took years to build can now literally be built in a matter of weeks. Is any of this durable, though? Will it last? So much of what we do now is on computers - the irony of writing this warning in a digital journal does not escape me, by the way - and once something is wiped from the magnetic memory of a hard disk, it is gone forever. There is no storing a hard disk in clay jars. Lots of things are being produced these days, but will any of it last? What will historians two millennia from now have to say about us as a civilization - assuming, of course, that humanity is around at all, and that we haven’t destroyed ourselves in nuclear war or massive climate change, or been wiped out by Thor’s Hammer? Always best to end on a high note.
Read More
Linux

Gentoo Gripes

One of my big complaints about Gentoo is how they can’t seem to do the same thing the same way on two different days. Portage is easy until they mess with it. Take, for instance, MySQL. I was upgrading PHP on my test box to 5.1, and I figured I would go ahead and upgrade to MySQL 5 to take advantage of all the new features in some of the apps I’m working on. Unfortunately, someone at Gentoo who builds the MySQL ebuilds decided to do some weird “slotting” thing wherein they allow you to have multiple MySQL installations on the same box. So Portage was installing everything as “mysql-500” instead of “mysql” like it should. It also didn’t install a corresponding init script, making it essentially useless unless I wrote my own. In Googling around for a solution, I found that “Due to the negative response from our user base, the MySQL team has decided to go back to unslotted MySQL.” They simply hadn’t propagated the updated packages to all the mirrors yet (I synced before attempting) and still had the packages masked. So I had to unmerge the MySQL package I had installed, unmask the working unslotted packages, and re-emerge the newer “unslotted” version. This really sucks, because this situation should never have happened. A change like this should never have been merged into the main tree without first being tested among a group of users for their input. Instead, this package was put into the main tree to wait for the general userbase’s comments. It’s what I call the “Microsoft Method” of software development: why bother with testing when you can have your users test it for you?
Read More

2005

Ramblings

Supreme Court Ruling on Eminent Domain

Today, the Supreme Court ruled that governments can seize private property for private development. What the Supreme Court has effectively done is give wealthy corporations and businesses the power to stomp all over individual families and competition. Let’s try a little exercise, shall we? Let’s say you own a farm, farmland that’s been in your family since the 1800s. Your great-great-grandfather worked this land, and everyone in your family has worked it since. At one time it was way, way out from the city, but over the years urban growth has sprawled closer and closer to your land. Along with the sprawl have come subdivisions and shopping centers. Now Wal-Mart has seen fit to build a super-center in town, and has chosen your land. Under this new ruling, Wal-Mart can tap-tap-tap on the shoulder of the city council and say, “Hey, we want that land over there; look at all the tax money we’re going to bring in,” and the city council can kick you off your land using eminent domain provisions that were once reserved for building roads and schools, giving you whatever arbitrary amount they decide is “market value” for your property. Instead of being forced to compete and pay true value for the land, Wal-Mart can now leverage the city government against you and get the land for a fraction of what it is worth. This is one of the worst rulings I’ve ever seen come out of the Supreme Court and a complete kick in the balls to individual liberty in this country.
Read More
Ramblings

The Terrorists Won

Ladies and gentlemen, I humbly submit to you that the terrorists have won. We need to just go ahead and hand our country over to Osama bin Laden, because we’ve already given up and given the terrorists exactly what they want. And if you don’t know, I’ll spell it out for you: WE’RE AFRAID! You can see it in how we talk and in how we act. NASCAR dads and soccer moms all over the country get their knickers in a twist every time the terror alert level goes from chartreuse to polka dot, because WE’RE AFRAID the terrorists might attack suburbia. People are buying plastic bags and packing survival kits like we’re at the height of the Cold War again. For God’s sake, people! I don’t think I realized how bad it had gotten until this week.

See, when I was a kid and I first came to Huntsville and visited the U.S. Space and Rocket Center, there was a lot more to it than there is now. In addition to the museum (which is still hella cool) and all the rockets outside, they used to have a tour of the Marshall Space Flight Center. They put you on a bus and drove you around to see the nuts and bolts of how the real NASA functions. You got to see the old rocket test stands where Saturn V engines were tested. You got to see the “world’s flattest floor” in a clean room, which they used to train astronauts on maneuvering objects in space and on the lunar surface. You got to see real astronauts training in the real neutral buoyancy simulator (the big pool), learning how to work in space. You got to see pieces of the International Space Station being assembled.

I remember all this because it made a HUGE impression on me as a kid. Seeing all the grand history of NASA and the bold, forward-looking vision of all these geeky engineers convinced me that it was okay to accept my geekiness and be a nerd. After all, these people put a man on the moon. What had jocks and cool people accomplished that could come even close to that? For all their nerdiness, surrounded by the best technology America could come up with and given a near-impossible mission and an unforgiving timetable, these people accomplished the single greatest feat in human history. Even today they were working in the background on amazing things, and I was seeing it all happen right before my eyes!

I weep now. I truly do. I weep for the nerdy kid whose parents bring him to the U.S. Space and Rocket Center today. Why? Because when Sarah and I went up there on Tuesday, we discovered the awful truth: they no longer conduct tours of the Marshall Space Flight Center! Hanging on the front of each computer monitor at the admission area were the little white surrender flags that you see seemingly everywhere now: “Due to the events of September 11, 2001, tours of the Marshall Space Flight Center are no longer conducted.” It just saddens me that this country is so utterly terrified of terrorists that we have to protect the world’s flattest floor and some old rocket test stands from that evil terrorist, little Johnny the geek.
Read More
Ramblings

My boy, we are pilgrims in an unholy land...

Well, I’m here. My cable doesn’t work, but fortunately, some kind soul has left their access point open for me to use! :P There’s a lot of work that’s going to need to be done in here to bring this place up to my standards, but it’s not bad. The drive up was mostly uneventful, except in Oak Grove, where a semi tried to play chicken with me, and in Birmingham, where traffic was pretty bad. Dad and Granddad got me unloaded in record time, then we went out and had pizza and beer before returning the truck a whole day early. Now. Where am I going to put all this stuff?
Read More