2010 Posts

Randomness
Those of you who follow me on Twitter might have noticed me railing against a company called FlightPrep. You may be wondering: what exactly is the big deal? The short of the story is, there were a bunch of websites out there dedicated to flight planning. Some of the best ones (SkyVector, Flyagogo, NACOmatic and, best of all, RunwayFinder) allowed you to plot a course overlaying a VFR chart the same way you would in Google Maps. You could modify your route simply by dragging it about, and click airports along the route to get current weather reports. It was kinda like Google Maps for preflight intelligence. Well, along comes this company called FlightPrep, who decided they weren’t getting rich enough (just ignore the owner’s $500k boat). So they convinced the USPTO to give them a patent on, bluntly, drawing digital lines on a digitized chart. They filed for the patent in 2005 (after a number of the sites above were already online), but used legal sleight-of-hand to get it backdated to 2001. Eventually, after a number of rejections, they were able to find a friendly clerk and were awarded the patent. They then immediately lawyered up and started going after all of these free flight planning websites, many of which were simply hobbies of some pilots who also happened to know how to program. They requested that these sites “license the technology” (a ludicrous thing to say, given that the sites pre-dated FlightPrep’s patent) or face lawsuits with damage claims of $149 per unique IP per month. So what happened? SkyVector settled and “licensed.” NACOmatic, Flyagogo and RunwayFinder all shut down under threat of lawsuit. They’ve also gone after FlightAware, Jeppesen and the AOPA with no success, so far. It’s pretty clear that, instead of innovating, they’re litigating. Rather than develop some radical new technology, they’re abusing the patent system in an attempt to corner the market.
Bluntly, I’m pissed because they robbed me of a tool (RunwayFinder) that I loved and that was highly useful for a student pilot. But, general aviation is a small community, and the backlash against FlightPrep has been a beautiful if small-scale example of what happens when you abuse your target market. Within the course of a week, they’ve become a pariah and the most hated company in general aviation. They had to close off their Facebook page because it was being overrun with people voicing their opinion, and their products are receiving highly negative reviews in all markets. But, while this is all great, it doesn’t bring back RunwayFinder. Even though Dave from RunwayFinder has decided to fight back, he faces a long uphill climb to have this asinine patent thrown out. In the end, it’s just sad. As I said, GA is a small community where nobody is getting rich. We’re all supposed to be on the same team.
Apache
The goals of this project were twofold:

1. To completely eliminate the need for me to touch the phone to provision it. I want to be able to create a profile for it in the database, then simply plug the phone in and let it do the rest.
2. To eliminate per-phone physical configuration files stored on the server. The configuration files should be generated on the fly when the phone requests them.

So the flow of what happens is this: I create a profile for the phone in the database, then plug the phone in. The phone boots initially and receives the server from DHCP option 66. A script on the server hands out the correct provisioning path for that model of phone, and the phone reboots with the new provisioning information. It then boots with the new provisioning information, begins downloading the updated SIP application and BootROM, and reboots again. Finally, the phone boots and connects to Asterisk. At this rate, provisioning a phone for a new employee is simply me entering the new extension and MAC address into an admin screen and giving them the phone. It’s pretty neat.

**Note:** there are some areas where this is intentionally vague, as I’ve tried to avoid revealing too much about our private corporate administrative structure. If something here doesn’t make sense or you’re curious, post a comment. I’ll answer as best I can.

Creating the initial configs

I used the standard download of firmware and configs from Polycom to seed a base directory. This directory, on my server, is /www/asterisk/prov/polycom_ipXXX, where XXX is the phone model. Right now we deploy the IP-330, IP-331 and IP-4000. While the IP-330 and IP-331 can currently use the same firmware and configs, since the IP-330 has been discontinued they will probably diverge sometime in the not too distant future. With the base configs in place, this is where mod_rewrite comes into play.
I added the following rewrite rules to the Apache configs:

```apache
RewriteEngine on
RewriteRule ^/000000000000\.cfg /index.php
RewriteRule /prov/[^/]+/([^/]+)-phone\.cfg /provision.php?mac=$1 [L]
RewriteRule /prov/polycom_[^/]+/[^/]+-directory\.xml /prov/polycom_directory.php
RewriteCond %{THE_REQUEST} ^PUT
RewriteRule /prov/[^/]+/([^/]+)\.log /prov/polycom_log.php?file=$1
```

To understand what these do, you need to take apart the anatomy of a Polycom boot request. The phone requests the following files, in this order:

1. Whichever bootrom.ld image it’s using
2. [mac-address].cfg if it exists, or 000000000000.cfg otherwise
3. The sip.ld image
4. [mac-address]-phone.cfg
5. [mac-address]-web.cfg
6. [mac-address]-directory.xml

So, we’re going to rewrite some of these requests to our scripts instead.

Generating configs on the fly

We’re going to skip the first rewrite rule (we’ll talk about that one in a little bit, since it has to do with plug-in auto provisioning). The one we’re concerned with is the next one, which rewrites [mac-address]-phone.cfg requests to our provisioning script. Each request for that file is actually rewritten to provision.php?mac=[mac-address]. Now, in the database, we keep track of what kind of phone it is (an IP-330, IP-331 or IP-4000), so when a request hits the script, we look up in the database what kind of phone we’re dealing with based on the MAC address, and use the variables from the database to fill in a template file containing exactly what that phone needs to configure itself.
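As a rough sketch of the dispatch half of that idea: the helpers below normalize the MAC from the query string and map a phone model to a template path. The model strings, template paths, and helper names here are hypothetical illustrations, not the actual dealnews code or schema.

```php
<?php
// Hypothetical sketch of the logic inside a provision.php-style script.
// normalize_mac() cleans the MAC address from the query string;
// template_for() maps a database-stored model name to a template file.
// Model names and template paths are assumptions for illustration.

function normalize_mac($raw) {
    $mac = preg_replace('/[^0-9a-f]/', '', strtolower($raw));
    return strlen($mac) === 12 ? $mac : false;
}

function template_for($model) {
    $map = array(
        "IP-330"  => "templates/ip330-phone.php",
        "IP-331"  => "templates/ip331-phone.php",
        "IP-4000" => "templates/ip4000-phone.php",
    );
    return isset($map[$model]) ? $map[$model] : false;
}

// In the real script you would then do something like:
//   $mac = normalize_mac($_GET['mac']);
//   $phone = lookup_phone($mac);          // hypothetical DB helper
//   include template_for($phone['model']); // renders the config template
```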
For example, the base template file for the IP-330 looks something like this:

```php
<sip>
  <userinfo>
    <server
<?php foreach($phone as $key => $p) { ?>
      voIpProt.server.<?php echo $key+1 ?>.address="<?php echo $p["host"] ?>"
      voIpProt.server.<?php echo $key+1 ?>.expires="3600"
      voIpProt.server.<?php echo $key+1 ?>.transport="UDPOnly"
<?php } ?>
    />
    <reg
<?php foreach($phone as $key => $p) { ?>
      reg.<?php echo $key+1 ?>.displayName="<?php echo $p["first_name"] ?> <?php echo $p["last_name"] ?>"
      reg.<?php echo $key+1 ?>.address="<?php echo $p["name"] ?>"
      reg.<?php echo $key+1 ?>.type="private"
      reg.<?php echo $key+1 ?>.auth.password="<?php echo $p["secret"] ?>"
      reg.<?php echo $key+1 ?>.auth.userId="<?php echo $p["name"] ?>"
      reg.<?php echo $key+1 ?>.label="<?php echo $p["first_name"] ?> <?php echo $p["last_name"] ?>"
      reg.<?php echo $key+1 ?>.server.1.register="1"
      reg.<?php echo $key+1 ?>.server.1.address="<?php echo $p["host"] ?>"
      reg.<?php echo $key+1 ?>.server.1.port="5060"
      reg.<?php echo $key+1 ?>.server.1.expires="3600"
      reg.<?php echo $key+1 ?>.server.1.transport="UDPOnly"
<?php } ?>
    />
  </userinfo>
  <tcpIpApp>
    <sntp
      tcpIpApp.sntp.address="pool.ntp.org"
      tcpIpApp.sntp.gmtOffset="<?php echo $tz ?>"
    />
  </tcpIpApp>
</sip>
```

The script outputs this when the phone requests it. Voila: magic configuration from the database. There’s a little bit more to it than this. A lot of the settings custom to the company and shared among the various phones are in a master dealnews.cfg file, included with each phone (it was added to the 000000000000.cfg file). Now, on to the next rule.

Generating the company directory

Polycom phones support directories. There’s a way to get this to work with LDAP, but I haven’t tackled that yet. So, for now, we generate those dynamically as well when the phone requests any of its *-directory.xml files. This one’s pretty easy, since 1) we don’t allow the endpoints to customize their directories (yet), and 2) every phone has the same directory.
So all of those requests go to a script that outputs the XML structure for the directory:

```php
<directory>
  <item_list>
<?php if(!empty($extensions)) {
    foreach($extensions as $key => $ext) { ?>
    <item>
      <fn><?php echo $ext["first_name"]?></fn>
      <ln><?php echo $ext["last_name"]?></ln>
      <ct><?php echo $ext["mailbox"]?></ct>
    </item>
<?php }
} ?>
  </item_list>
</directory>
```

We do this for both the 000000000000-directory.xml and the [mac-address]-directory.xml file because one is requested at initial boot (the 000000000000-directory.xml file is intended to be a “seed” directory), whereas subsequent requests are for the MAC-address-specific file.

Getting the log files

Polycoms log, and occasionally the logs are useful for debugging purposes. The phones, by default, will try to upload these logs (using PUT requests if you’re provisioning via HTTP like we are). But having the phone fill up a directory full of logs is ungainly. Wouldn’t it be better to parse them into the database, where they can be easily queried? And because the log files have standardized names ([mac-address]-boot.log, [mac-address]-app.log and [mac-address]-flash.log), we know which phone they came from. Well, that’s what the last two rewrite lines do. We rewrite those PUT requests to a PHP script and parse the data off stdin, adding it to the database. A little warning about this: even at low settings, Polycom phones are chatty with their logs. You may want to have some kind of cleaning script to remove log entries over X days old.

Passing the initial config via DHCP

At this point, we have a working magic configuration. Phones, once configured, fetch dynamically generated configuration files that are guaranteed to be as up to date as possible. Their directories are generated out of the same database, and log files are added back to the same database. It all works well! … except that it still requires me to touch the phone. I’m still required to punch the provisioning directory into the keypad to get it going. That sucks. But there’s a way around that too!
By default, Polycom phones out of the box look for a provisioning server via DHCP option 66. If they don’t find one, they will proceed to boot the default profile that ships with the phone. It’s worth noting that, if you don’t pass it in the form of a fully qualified URL, it will default to TFTP. But you can pass any format you can add to the phone.

```
if substring(hardware, 1, 3) = 00:04:f2 {
    option tftp-server-name "http://server.com";
}
```

In this case, what we’ve done is look for a MAC address in Polycom’s space (00:04:f2) and pass it option 66 with our boot server. But we’re passing the same thing no matter what kind of phone it is! How can we tell them apart, especially since, at this point, we don’t know the MAC address? The first rewrite rule handles part of this for us. When the phone receives the server from option 66 and requests 000000000000.cfg from the root directory, we instead forward it on to our index.php file, which handles the initial configuration. Our script looks at HTTP_USER_AGENT, which tells us what kind of phone we’re dealing with (it will contain a string such as “SPIP_330”, “SPIP_331” or “SSIP_4000”). Using that, we selectively give it an initial configuration that tells it the RIGHT place to look.

```php
<?php
ob_start();
if(stristr($_SERVER['HTTP_USER_AGENT'], "SPIP_330")) {
    include "devices/polycom_ip330_initial.php";
}
if(stristr($_SERVER['HTTP_USER_AGENT'], "SPIP_331")) {
    include "devices/polycom_ip331_initial.php";
}
if(stristr($_SERVER['HTTP_USER_AGENT'], "SSIP_4000")) {
    include "devices/polycom_ip4000_initial.php";
}
$contents = ob_get_contents();
ob_end_clean();
echo $contents;
?>
```

These files all contain a variation of my previous auto-provisioning config, which tells the phone the proper directory to look in for phone-specific configuration. Now, all you do is plug the phone in, and everything else just happens. A phone admin’s dream.
Keeping things up to date

By default, the phones won’t check to see if there’s a new config or updated firmware until you tell them to. But this also means that some things, especially directory changes, won’t get picked up with any regularity. A quick change to the configs makes it possible to schedule the phones to look for changes at a certain time:

```
<provisioning
    prov.polling.enabled="1"
    prov.polling.mode="abs"
    prov.polling.period="86400"
    prov.polling.time="01:00"
/>
```

This causes the phones to look for new configs at 1 AM each morning and do whatever they have to with them.

Conclusions

The reason all this is possible is that Polycom’s files are 1) easily manipulable XML, as opposed to the binary configurations used by other manufacturers, and 2) distributed, so that you only need to actually send what you need set, and the phone can get the rest from the defaults. In practice this all works very well, and it cut the time it takes me to configure a phone from 5-10 minutes to about 30 seconds. Basically, as long as it takes me to get the phone off the shelf and punch the MAC address into the admin GUI I wrote. I don’t even need to take it out of the box!
Apache
I’ve been using Google Chrome as my primary browser for the last few months. Sorry, Firefox, but with all the stuff I need for work installed, you’re so slow as to be unusable, up to and including having to force-quit at the end of the day. Chrome starts and stops quickly. But that’s not the purpose of this entry. The purpose is how to live with self-signed SSL certificates and Google Chrome. Let’s say you have a server with a self-signed SSL certificate. Every time you hit a page, you get a nasty error message. You ignore it once and it’s fine for that browsing session, but when you restart, it’s back. Unlike Firefox, there’s no easy way to say “yes, I know what I’m doing, ignore this.” This is an oversight I wish Chromium would correct, but until they do, we have to hack our way around it. Caveat: these instructions are written for Mac OS X. PC instructions will be slightly different, as PCs don’t have a keychain, and Google Chrome (unlike Firefox) uses the system keychain. So here’s how to get Google Chrome to play nicely with your self-signed SSL certificate. **These directions have been updated. Thanks to Josh below for pointing out a slightly easier way.**

1. In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says “Certificate Information.”
2. Click and drag the image to your desktop. It looks like a little certificate.
3. Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
4. Be sure you add the certificate to the System keychain, not the login keychain. Click “Always Trust,” even though this doesn’t seem to do anything.
5. After it has been added, double-click it. You may have to authenticate again.
6. Expand the “Trust” section, and set “When using this certificate” to “Always Trust.”

That’s it!
Close Keychain Access and restart Chrome, and your self-signed certificate should now be recognized by the browser. This is one thing I hope Google/Chromium fixes soon, as it should not be this difficult. Self-signed SSL certificates are used **a lot** in the business world, and there should be an easier way for someone who knows what they are doing to ignore this error than copying certificates around and manually adding them to the system keychain.
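Incidentally, if you’re comfortable in Terminal, the drag-and-import dance can reportedly be collapsed into a single command with the macOS `security` tool. A sketch, assuming the certificate file is sitting on your Desktop (the path and filename are examples):

```
# Import server.crt into the System keychain and mark it trusted as a root.
# Prompts for an administrator password. Path/filename are examples.
sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain ~/Desktop/server.crt
```

This does in one step what the Keychain Access clicking accomplishes: the `-d` flag targets the admin (System) trust store, and `-r trustRoot` sets the “Always Trust” result.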
Asterisk
At dealnews, as I’ve written before, we run Asterisk as our telephone system. I find it to be a pretty good solution to our telecom needs: we have multiple offices and several home-based users. And, for the most part, for hard telephones, we use Polycoms. We run mostly IP-330s, with a couple of IP-4000s and a few new IP-331s. We also have softphones, a couple of PAP2s and a couple of old Grandstreams from our original Asterisk deployment in 2007 that I’m desperately trying to get out of circulation. But it’s mostly Polycoms. Recently, I changed how we were doing provisioning. I’ll write a more in-depth post about this later, but the short of it is that since Polycom phones use XML for their configuration information, we now generate them dynamically instead of creating a configuration file. It’s what I should have done back in 2007 when we bought our first round of Polycoms. But this presented me with a problem: how do I re-provision the older phones - some of which I don’t have easy physical access to (at least that doesn’t involve an airplane ride) - to use the new configuration system? In doing some research, I discovered that Polycom allows you to set, via certain commands, the provisioning server from within a config. 
With this information, I crafted a custom re-provisioning config that looks like this:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deviceSettings>
    <device device.set="1"
            device.dhcp.bootSrvUseOpt.set="1"
            device.dhcp.bootSrvUseOpt="2"
            device.net.cdpEnabled.set="1"
            device.net.cdpEnabled="0"
            device.prov.serverType.set="1"
            device.prov.serverType="2"
            device.prov.serverName.set="1"
            device.prov.serverName="server"/>
</deviceSettings>
```

And included it at the top of the 000000000000.cfg file (one of the default files downloaded by each Polycom phone):

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<APPLICATION APP_FILE_PATH="sip.ld"
             CONFIG_FILES="update.cfg, phone1.cfg, sip.cfg"
             MISC_FILES=""
             LOG_FILE_DIRECTORY=""
             OVERRIDES_DIRECTORY=""
             CONTACTS_DIRECTORY=""/>
```

Then, using Asterisk, I issue the check-config command:

```
asterisk*CLI> sip notify polycom-check-cfg peer
```

The phone should reboot, pick up its new config, then reboot again with the proper new provisioning information from the new provisioning server. Next post, I’ll show you how to use PHP and mod_rewrite to eliminate the need for per-phone config files.
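For reference, polycom-check-cfg is not built into Asterisk; it’s a NOTIFY event type defined in sip_notify.conf. An entry along these lines is the common recipe for Polycoms, though I’m sketching it from memory rather than quoting this particular setup, and the event string can vary by firmware:

```
; sip_notify.conf -- example entry that makes "sip notify polycom-check-cfg"
; available at the CLI. The check-sync event asks the phone to re-check
; its configuration (and reboot if it changed).
[polycom-check-cfg]
Event=>check-sync
Content-Length=>0
```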
Apple
My home entertainment center is probably second only to my computer(s) in “things I interact with every day.” Barely a day goes by when I don’t spend a little relaxing time watching TV or movies. I have a Hitachi 1080p 42-inch television, an Onkyo receiver attached to a 5.1 surround sound system (Polk Audio subwoofer and Energy speakers), a DVD player (that rarely gets any use anymore), a VCR (that gets even less use) and a PlayStation 3. But the star, and my single favorite piece of equipment in my living room, is my AppleTV. Yup. My AppleTV. You might be asking why I profess love for a device that many people consider to be a failure. After all, the way some people, including some of my coworkers, talk about this device, you’d think it was Battlefield Earth bad. The kind of bad that you ask for your money back after using. The kind of bad that makes you regret waking up that day, and makes you want to drown your sorrows with a pitcher of Natural Ice. And yet I, as an AppleTV owner, am thrilled with it. I love it simply because of its typical Apple simplicity: it’s all the best parts of an HTPC without all the bull** that comes with having an HTPC. Powerful enough to be usable, and yet simple enough that my wife - whom I love, but who is most definitely not a computer person - can figure it out. It was simple enough to set up that all I had to do was plug it into my TV and get it on the network. And it integrates incredibly well with the rest of the Apple products in the house. And now, Apple has come out with a new AppleTV, and I could not possibly be more thrilled, because it addresses almost all the issues I had with my current AppleTV, and with an upgrade price of $99, it’s a no-brainer. I might buy one for every TV in the house. Let’s go through some of the differences. No onboard storage: I have two AppleTVs. One in the living room - a 160gb model - and one in the bedroom, a 40gb model. You know how much storage space I’m using on them? Zero. Nothing.
I stream everything off my iMac upstairs. Syncing is slow, and I have way more content than could even fit on the 160gb model. Moreover, streaming from iTunes shares works seamlessly, so there’s really no reason to use the local storage at all. Apple did away with it. No composite: non-issue. I use all HDMI. The new AppleTV has only three plugs on the back: power, HDMI, and ethernet. Perfect. Movies from the iTunes store are rental-only: I don’t quite agree with this, but my objection isn’t very strong. I never purchased a movie from the iTunes store, but I did rent on more than one occasion, so I don’t foresee this being an issue, especially because of … Netflix support: that’s right, I can stream all the included content on Netflix straight to my AppleTV. This in and of itself is reason enough for me to upgrade. In other words, it’s as if Apple fixed the device to exactly reflect how I use my current one. Since Steve Jobs never called me, I can only conclude that there were a lot more people out there using AppleTVs the way I use mine. Frankly, at this point, the only things it’s missing that I really wanted are 1080p and Hulu.
Apple
Two days ago Apple announced Ping, a social network geared toward music sharing. And a bunch of iPods too. Personally, I was more excited by the new AppleTV (I have two of them and absolutely love them), but more on that later. This is about Ping. My thoughts on Ping: Apple’s first real attempt at social networking reminds me of Google’s countless attempts to get into the social networking space. They’re like that guy who shows up to the party really late - I mean beyond fashionably late - when the party is already over and everyone else is already drunk and thinking about stumbling across the street to IHOP or Taco Bell. He says he was at the library studying and now he wants to go out and drink, but the keg has floated, the bars and liquor stores are already closed, and all you want to do is eat a burrito supreme and find a sofa to pass out on. Ping is a good first start, but it has some problems. What is the target here? Am I supposed to follow people or artists or both or what? And what are they supposed to do? All this feels like is Twitter or Facebook + iTunes. The people I’m following can share messages and pictures? Yep: Twitter in iTunes. I can like and share and post comments? Yep: Facebook in iTunes. Why not allow independent artists into the fold? Some of my favorite artists (such as Matthew Ebel - check him out if you love piano rock) are independent. Right now there are like 10 artists you can follow, and the fact that Lady Gaga is one of them makes me want to break something. The only ones on there I’m remotely interested in following are Dave Matthews Band and maybe U2. And I can’t access it in any way other than iTunes. No web access. While this means I can fire it up on my computer and laptop, and (currently) on my iPhone via the iTunes application, I can’t check Ping at a friend’s house. I can’t go to the Apple store and check Ping. Everything has to go through iTunes, and this absolutely cripples it. Think that’s overkill?
Go to the Apple store and watch for 15 minutes how many people walk in and use one of the computers to check Facebook. And I can only “like” and “share” content I purchased from iTunes. I have purchased 58 songs from iTunes over the years, out of 3,621 songs in my library. Less than 2% of my library is available. If Apple fixes these (and other, more minor) problems, Ping could be really cool. The problem is that these aren’t code fixes. They’re not something they can test and roll out a change for. These are conceptual problems relating to what their idea of Ping is versus what the rest of the world is going to use it as. The question is, will they be Google and throw this out here, not maintain it, and mercifully kill it a year later (a la Google Wave and the impending death of Google Buzz), or will they adapt and change it to better suit the needs of the public? Because that’s the thing about social networking: you have to embrace the users’ thoughts, opinions, and ideas. It’s a lesson Digg just learned the hard way, and a lesson that, frankly, given Apple’s reputation for wanting to control everything, I don’t see them embracing. As a side note, I will, however, salute Apple for not giving in to Facebook, if the rumor is true. Facebook plays fast and loose with people’s information, and I really don’t like how it seems to have become the de facto standard for social network usage (and thus the reason you can comment with your Facebook login). That, and Zuckerberg. I hate that guy. Still, Ping is yet another player in this social networking space. A space that is becoming increasingly full …

Social Overload

I’m already Facebooked, Myspaced and Twittered. I’m LiveJournaled, Wordpressed, and Youtube’d. I’m Flickr’d, LinkedIn’d, Vimeo’d, Last.fm’d and Gowalla’d. I’m on any number of dozens of message boards and mailing lists that predate “Web 2.0” and the social networking “revolution,” and I follow nearly 100 various blogs and other feeds via RSS.
They’re on my desktop, on my laptop, on my tablet and on my phone. And now, apparently, I’m Ping’d as well. Le sigh. Now, to be fair, I don’t check all these sites. I last logged into Myspace about 9 months ago. I last used Gowalla about a year ago. I usually only look at Youtube, Flickr or Vimeo when I need something, and I haven’t updated a LiveJournal in about 3 years. But at what point does all this interaction - this social networking - become social overload? Are any of these services adding value to my life? And at what point does a social network - Ping, in this case - simply become yet another thing I have to think about and check? Or will it become yet another service I sign up for, try for a while and ignore?
PHP
PHP has built-in functions that can compute the difference between two arrays. The comments sections for those functions are filled with people trying to figure out the best way to do the same thing with multidimensional arrays, and almost all of the solutions are recursive diffing functions that try to walk the tree and do a diff at each level. The problems with this approach are that 1) they are unreliable, as they usually don’t account for all data types at each level, and 2) they’re slow, due to multiple calls to array_diff at each level of the tree. A better approach, I think, is to flatten a multidimensional array into a single dimension, make a single call to array_diff, then (if needed) expand it back out if you really need the resulting diff to be multidimensional. Let’s look at some code. The following recursive function flattens a multidimensional array into a single dimension:

```php
<?php
function flatten($arr, $base = "", $divider_char = "/") {
    $ret = array();
    if(is_array($arr)) {
        foreach($arr as $k => $v) {
            if(is_array($v)) {
                $tmp_array = flatten($v, $base.$k.$divider_char, $divider_char);
                $ret = array_merge($ret, $tmp_array);
            } else {
                $ret[$base.$k] = $v;
            }
        }
    }
    return $ret;
}
?>
```

The following function (based on this function found here) re-inflates the array after it’s been flattened:

```php
<?php
function inflate($arr, $divider_char = "/") {
    if(!is_array($arr)) {
        return false;
    }
    $split = '/' . preg_quote($divider_char, '/') . '/';
    $ret = array();
    foreach ($arr as $key => $val) {
        $parts = preg_split($split, $key, -1, PREG_SPLIT_NO_EMPTY);
        $leafpart = array_pop($parts);
        $parent = &$ret;
        foreach ($parts as $part) {
            if (!isset($parent[$part])) {
                $parent[$part] = array();
            } elseif (!is_array($parent[$part])) {
                $parent[$part] = array();
            }
            $parent = &$parent[$part];
        }
        if (empty($parent[$leafpart])) {
            $parent[$leafpart] = $val;
        }
    }
    return $ret;
}
?>
```

Now, with the arrays in flat form, it’s easy to use the built-in functions to diff:

```php
<?php
$arr1_flat = flatten($arr1);
$arr2_flat = flatten($arr2);
$ret = array_diff_assoc($arr1_flat, $arr2_flat);
$diff = inflate($ret);
?>
```
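To make the round trip concrete, here’s a small self-contained demo of the flatten-then-diff idea (repeating a minimal flatten() so the snippet runs on its own; the sample arrays are invented for illustration):

```php
<?php
// Self-contained demo of the flatten-then-diff approach.
// flatten() is the same idea as the function above, repeated here
// so this snippet runs standalone.

function flatten($arr, $base = "", $divider_char = "/") {
    $ret = array();
    foreach ($arr as $k => $v) {
        if (is_array($v)) {
            $ret = array_merge($ret, flatten($v, $base . $k . $divider_char, $divider_char));
        } else {
            $ret[$base . $k] = $v;
        }
    }
    return $ret;
}

$arr1 = array("a" => 1, "b" => array("c" => 2, "d" => 3));
$arr2 = array("a" => 1, "b" => array("c" => 2, "d" => 4));

// One call to array_diff_assoc on the flattened forms finds the
// single differing leaf, keyed by its full path.
$diff = array_diff_assoc(flatten($arr1), flatten($arr2));
print_r($diff); // the one differing leaf: [b/d] => 3
```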
Apple
So Sunday night, my iMac died. It had been having strange problems for the few months leading up to it, mostly random freezes. I always notice when they happen because I leave Mail.app running all the time to filter my messages, so when my iPhone starts going crazy, I know it has crashed again. It actually happened while I was out of town in Atlanta earlier this year, so all weekend my phone was constantly buzzing. Well, Sunday while we were working in the yard, I had set up a DVD rip job to run - my current project is digitizing all my DVDs for the AppleTV - and while we were working it randomly reset itself and got all sluggish. That night, I tried to boot off the Snow Leopard DVD to run Disk Utility, and it couldn’t even mount the drive and refused to repair it. It couldn’t reboot either. I tried DiskWarrior, and that fixed things up enough to boot it, but it was REALLY SLOW (it took 10 minutes to boot). It was good enough to get the last few remaining files that hadn’t been backed up yet onto the external drive. Then I tried reinstalling, and it never came back. My conclusion, since I could still boot fine from the DVD, was a dead hard drive. The original hard drive was 500GB, but I figured I’d upgrade while I was at it, so I ordered a new 1TB hard drive via a deal at work and had it overnighted. It arrived yesterday. And, after some interesting surgery (who says you can’t work on Macs!), I got it installed, formatted, and Snow Leopard reinstalled. You know, I remember the first computer I owned that crossed the 1GB barrier, back in late 1999. I guess I’ll have to remember this one, too.
Apple
Every day when I get to work, there are a number of tasks I do. Among the first things I do is connect to a number of servers via SSH. These servers - our development testing, staging, and code rolling servers - are part of the development infrastructure at dealnews. So every morning, I launch iTerm, make three sessions and log into the various servers. Over time, I’ve written some helper scripts to make this faster. My “go” script contains the SSH commands (using keys) to log into these machines, so that all I have to do is type “go rpeck” to log into my development machine. Still, this morning, the lunacy of having to open iTerm and execute three commands every morning, every day without fail, struck me. Why not script this so that, when my laptop is plugged into the network at work, it automatically launches iTerm and logs me into the relevant servers? Fortunately, iTerm exposes a pretty complete set of AppleScript commands, so with a little work, I was able to come up with this:

```applescript
tell application "System Events"
    set appWasRunning to exists (processes where name is "iTerm")
    tell application "iTerm"
        activate
        if not appWasRunning then
            terminate the first session of the first terminal
        end if
        set myterm to (make new terminal)
        tell myterm
            set dev_session to (make new session at the end of sessions)
            tell dev_session
                exec command "/Volumes/iDisk/bin/go rpeck"
            end tell
            set staging_session to (make new session at the end of sessions)
            tell staging_session
                exec command "/Volumes/iDisk/bin/go staging2"
            end tell
            set nfs_session to (make new session at the end of sessions)
            tell nfs_session
                exec command "/Volumes/iDisk/bin/go nfs"
            end tell
            select dev_session
        end tell
    end tell
end tell
```

What this little script does is, when launched, check to see if an instance of iTerm is already running. If it is, it just creates a new window; otherwise it creates the first window. It then connects to the relevant servers using my “go” script (which is synchronized across all my Macs by MobileMe).
Then, with it saved, I wrap it in a shell script:

```
#!/bin/bash
/usr/bin/osascript /Users/peckrob/Scripts/launch-iterm.scpt
```

And launch it with MarcoPolo using my “Work” rule, which is executed when my computer arrives at work. Works great!
DD-WRT
In my previous entry, I wrote about how awesome DD-WRT is, and how it had replaced a number of network devices, allowing me to reduce the number of machines at home I had to administer. I finished the article by talking about how I’d set up a VPN tunnel to the office so multiple machines - namely, my MacBook Pro and my iMac - could access company resources at the same time. But at the end, I mentioned that PPTP was not what I was using to connect myself back to my home network when I’m on the road. But why? Two words: broadcast packets. PPTP, by default, does not support relaying broadcast packets across the VPN link. For Mac users, this means Bonjour/Rendezvous-based services - such as easily browsing shared computers on the network - are not accessible, as they rely on network broadcasts to advertise their services. PPTP can support broadcast packets with the help of a program called bcrelay. This program is even installed on DD-WRT routers, but it does not work, despite the DD-WRT web GUI claiming support for relaying broadcast packets. To verify, you can drop to a shell and try it yourself:

root@Eywa:~# bcrelay
bcrelay: pptpd was compiled without support for bcrelay, exiting.
   run configure --with-bcrelay, make, and install.

The version of pptpd that ships with v24sp2 of DD-WRT lacks bcrelay support. It’s important to note that this doesn’t mean the services are completely inaccessible. You can still reach them if you know the IP addresses. That’s fine for people with an understanding of networking, but not for people like my wife, and definitely not the “Mac way.” So, what options are left, if not PPTP? Enter OpenVPN. OpenVPN is a massively flexible (and therefore massively difficult to configure) open source VPN solution. DD-WRT ships with an OpenVPN server with support for broadcast packets, so that is what I decided to use. A couple of notes before you begin: there are some tradeoffs to using OpenVPN.
Perhaps the biggest is that it’s not natively supported on any operating system (unlike PPTP). That means on Windows or Mac, you’ll need a third-party client. And it’s not compatible at all with iPhones, iPods or iPads (unless they’re jailbroken). It is also much more difficult to configure than the relatively easy and reasonably well-documented PPTP server setup. It was a worthwhile tradeoff for me, but it may not be for you. So, before you begin, you’ll need the following:

You have already configured your router with DD-WRT and have the most recent release (as of this writing, v24-sp2) of the VPN version installed. The version number should be in the upper right corner of the web admin. If it says “std” or “vpn,” you’re in good shape. If it says “micro,” you probably don’t have the necessary tools.
You possess some basic understanding of networking, and have the necessary settings to complete a VPN connection. If you’ve gotten as far as flashing third-party firmware, you probably do.
You understand that there is the possibility, albeit remote, that you could brick your router. I am not responsible for that, which is why I suggest you purchase an additional router to get all this set up on first, before sacrificing your primary router.
You’re not scared of the shell.
You must sacrifice a goat to the networking gods.

For reference, my network uses 192.168.1.x for addresses. This can cause problems, as it’s incredibly common for LANs; you may want to change your addresses to something less common. Not that big a deal for me, though. I also have mine set up in bridged, as opposed to routed, mode. I think this is smarter (and easier), but if you’re curious, the difference is explained here. The first thing you need to do is install OpenVPN on your client machine. Even if you intend to use something different, you still need to install it so that you can generate all the certificates you’ll need. On a Mac, I find the best way to do this is with MacPorts.
toruk:~ peckrob$ sudo port install openvpn2

It’ll crank for awhile compiling and installing what it needs, so go get a snack. Then, once you have it installed, head over to /opt/local/share/doc/openvpn2/easy-rsa/2.0/ and run the following commands:

source ./vars
./clean-all
./build-ca
./build-key-server server
./build-key client1
./build-dh

At each stage, it will ask you questions. It is important to provide consistent answers or you will get errors. Importantly, don’t add passwords to your certificates. Once you are finished, you will find all your keys in the keys/ directory. Now, the fun part. Head over to the keys directory (/opt/local/share/doc/openvpn2/easy-rsa/2.0/keys). There should be a bunch of files in there. In a browser, open up your router’s web admin, and go to Services -> VPN. Under OpenVPN Daemon:

Next to “Start OpenVPN Daemon,” select “Enable.”
“Start Type,” set to “WAN Up.”
“CA Cert”: go back to your shell and “cat ca.crt”. Paste everything between “—–BEGIN CERTIFICATE—–” and “—–END CERTIFICATE—–”, including those two lines. You must include the BEGIN and END lines for this to work on each one! (This was a major trip-up for me.)
“Public Client Cert”: go back to the shell and “cat server.crt”. Paste everything between “—–BEGIN CERTIFICATE—–” and “—–END CERTIFICATE—–” as above.
“Private Client Key”: go back to the shell and “cat server.key”. You need everything between “—–BEGIN RSA PRIVATE KEY—–” and “—–END RSA PRIVATE KEY—–” as above.
“DH PEM”: go back to the shell and “cat dh1024.pem”. You need everything between “—–BEGIN DH PARAMETERS—–” and “—–END DH PARAMETERS—–” as above.

The important note above is to include the lines containing “—–whatever—–”. Not doing this cost me about three hours of messing around until I figured it out. With that all complete, it’s now time for your server config.
Here is my server config:

mode server
proto tcp
port 1194
dev tap0
# Gateway (VPN Server)  Subnet mask      Start-IP       End-IP
server-bridge 192.168.1.1 255.255.255.0 192.168.1.201 192.168.1.210
keepalive 10 120
daemon
verb 6
client-to-client
tls-server
dh /tmp/openvpn/dh.pem
ca /tmp/openvpn/ca.crt
cert /tmp/openvpn/cert.pem
key /tmp/openvpn/key.pem

The important things here are “dev tap0”, which creates an ethernet bridge and not a tunnel (as “dev tun0” would do), and the “server-bridge” line; the comment above it documents its fields. The start IP and end IP specify the range that VPN clients will receive addresses from. With all this complete, press “Save” and “Apply Settings” at the bottom of the screen. Wait patiently. Then, in the web admin, go to Administration -> Commands. If you already have a Startup script, edit it; otherwise, add this to the commands window:

openvpn --mktun --dev tap0
brctl addif br0 tap0
ifconfig tap0 0.0.0.0 promisc up

Press “Save Startup.” Then, if you already have rules in “Firewall,” edit those; otherwise, add:

iptables -I INPUT 2 -p tcp --dport 1194 -j ACCEPT

Press “Save Firewall.” Now, reboot your router. When it comes back up, you should have a running OpenVPN server. To check, go to Administration -> Commands, and type this into the command window:

ps | grep openvpn

If you see something that looks like:

11456 root 2720 S openvpn --config /tmp/openvpn/openvpn.conf --route-up
17606 root  932 S grep openvpn

then it worked. Congratulations, you have a working OpenVPN instance. But how do you connect to it? If you use a Mac, you really have two choices: Tunnelblick or Viscosity. Tunnelblick is a little on the ugly side and difficult to configure, but is free and open source. Viscosity is reasonably pretty to look at and easier to configure, but is a commercial product. I chose Viscosity, so that’s what I’m demonstrating here. Once you have Viscosity downloaded and installed, go to Preferences and Connections, and add a connection.
Enter a name and server address. Set the protocol to TCP and the device to tap. Now, before you continue, go back to your shell. Return to the /opt/local/share/doc/openvpn2/easy-rsa/2.0/keys directory, and copy those keys someplace in your home (~) folder that you’ll be able to access. Back in Viscosity, go to the “Certificates” tab. You should see three lines labeled “CA,” “Cert,” and “Key.” For “CA,” select the “ca.crt” file you just moved. For “Cert,” select “client1.crt”. And for “Key,” select “client1.key”. Under the “Options” tab, disable LZO compression. For some reason it was causing a problem for me, so I just disabled it. Click “Save.” If all is right in the Universe and the goat you sacrificed to the gods (you did do the goat sacrifice step, right?) was pleasing, you should now be able to connect back to your home network. Broadcast packets will work, and everything will be wonderful.
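If you’d rather skip the GUI entirely, the equivalent command-line OpenVPN client config is only a few lines. This is my own sketch matching the server config above, not something from Viscosity - the hostname is a placeholder, and the cert/key filenames are the ones easy-rsa generated:

```
client
proto tcp-client
remote vpn.example.com 1194    # placeholder - your router's public hostname or IP
dev tap
ca ca.crt
cert client1.crt
key client1.key
verb 3
# note: no comp-lzo line - compression disabled, matching the Viscosity setup
```

Saved as something like home.ovpn next to the three certificate files, it should work with any standard OpenVPN 2.x client.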
DD-WRT
To celebrate the re-launch of my “blog,” I’m going to do a multi-part entry about DD-WRT. But, first, a little history. For the first time in 10 years, I have no servers running in my house. At one point, I had three servers running in here doing various things. Then, I moved my public server offsite (it’s in the rack at the office now). That left two more Gentoo boxes running here in the house. Late last year I picked up a 1TB external hard drive, which I attached to my iMac, and deactivated the file server. I will probably eventually replace this with a Drobo FS, but for now it’s fine. That just left a single Gentoo box that was running Asterisk and various network services. But I finally convinced my wife to let me drop the goofy VoIP line that I was paying $30 for and just add more minutes to her cellphone. With Asterisk out of the picture, the only thing left running on that box was network services. A few weeks ago, I ordered a TP-Link TL-WR1043ND router, intending to use it as a testbed for DD-WRT. My experiments worked so well that I pulled my old router out and replaced it with the DD-WRT one. The faster processor also afforded a nice speed bump of about 7 Mb/s. With it handling all the services, I pulled out the final server and deactivated it. And my office is blissfully quiet now. DD-WRT is now handling all the minor network services (DHCP, NTP, etc). But what is it about DD-WRT that makes it so awesome - awesome enough to rip out some of my network infrastructure to make way for it? A few things, which I will cover in this post.

1. DHCP static address assignments

Believe it or not, the built-in firmware of the WRT-54G did not give you the ability to define a static address to be assigned by DHCP based on MAC address. This seems like a glaring oversight to me, and it was the reason I ran my own DHCP server rather than use the built-in one. In DD-WRT (v24-sp2), you can go to the Services tab and set as many as you’d like.
In my case, these are a couple of devices (like printers) that are addressed via IP address by the various machines, as well as my laptop and iMac. So that’s one nice thing, but it’s not nearly as cool as …

2. VPN Support

The standard and VPN versions of DD-WRT support both PPTP and OpenVPN varieties of VPN … and I’m actually using both at the same time. My router is both a VPN server and a VPN client. How? Why? Well, as to why: at dealnews, we run a PPTP-based VPN to allow us to work at home as needed. Once connected, we have access to our testing servers and all our development services. It’s like being directly connected to the work network, but I’m sitting at my iMac at home in my pajamas. I had been connecting directly from my Macs to the VPN for some time. But, sitting at home the other day, I reflected on how silly it was to connect two machines to the VPN, and only when I needed them, rather than using DD-WRT to keep a single tunnel up all the time that any computer on the home network could use as needed.

Setting up a PPTP VPN Endpoint using DD-WRT

So how did I set it up? Trial and error, as, frankly, the DD-WRT documentation is a bit lacking. So if you find yourself in my position of wanting a tunnel to your workplace VPN, hopefully this documentation will help you. I’m making a few assumptions before we begin:

You have already configured your router with DD-WRT and have the most recent release (as of this writing, v24-sp2) of the VPN version installed. The version number should be in the upper right corner of the web admin. If it says “std” or “vpn,” you’re in good shape. If it says “micro,” you probably don’t have the necessary tools.
You possess some basic understanding of networking, and have the necessary settings to complete a VPN connection. If you’ve gotten as far as flashing third-party firmware, you probably do.
You understand that there is the possibility, albeit remote, that you could brick your router.
I am not responsible for that, which is why I suggest you purchase an additional router to get all this set up on first, before sacrificing your primary router.

With that out of the way, let’s begin! Log into your router’s DD-WRT web admin, and go to the Services -> VPN tab. Under PPTPD Client:

Click the radio button next to Enable.
In the “Server IP or DNS Name” box, enter your VPN server.
In the “Remote Subnet” box, enter the network address of the remote network. In my case, this was 10.1.2.0.
In the “Remote Subnet Mask” box, enter the remote subnet mask. In my case, this was 255.255.255.0.
In the “MPPE Encryption” box, I have “mppe required,no40,no56,stateless”. This was required to get mine working, but may not be necessary for you. Try first without it, then with it if it won’t connect.
Leave the MTU and MRU values alone unless you know what you’re doing.
Enable NAT.
Username and password are self-explanatory.

With that done, press “Save” and “Apply Settings” at the bottom of the page. With any luck, you should now have a VPN tunnel up to your remote host. To test it, go to Administration -> Commands, and in the command box, enter the following:

ping -c 1 <some remote address on VPN>

If you get a response back that looks like:

PING <remote service IP> (<remote service IP>): 56 data bytes
64 bytes from <remote service IP>: seq=0 ttl=64 time=281.288 ms

--- <remote service IP> ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 281.288/281.288/281.288 ms

then it’s up and working. Now, try from your computer… Probably didn’t work, did it? This is because your router’s firewall doesn’t yet know about the remote network or how to route packets to it appropriately. For some reason, the current version of DD-WRT does not add the appropriate configuration to the firewall automatically when the PPTP tunnel is established. So, we have to do it manually.
Go to Administration -> Commands, and enter the following:

iptables -I OUTPUT 1 --source 0.0.0.0/0.0.0.0 --destination <remote network address>/16 --jump ACCEPT --out-interface ppp0
iptables -I INPUT 1 --source <remote network address>/16 --destination 0.0.0.0/0.0.0.0 --jump ACCEPT --in-interface ppp0
iptables -I FORWARD 1 --source 0.0.0.0/0.0.0.0 --destination <remote network address>/16 --jump ACCEPT --out-interface ppp0
iptables -I FORWARD 1 --source <remote network address>/16 --destination 0.0.0.0/0.0.0.0 --jump ACCEPT
iptables --table nat --append POSTROUTING --out-interface ppp0 --jump MASQUERADE
iptables --append FORWARD --protocol tcp --tcp-flags SYN,RST SYN --jump TCPMSS --clamp-mss-to-pmtu

At the bottom, press “Run Commands” and wait. It shouldn’t take long, and should produce no output. Then, enter those commands again, and press “Save Firewall” at the bottom. Give your router a few seconds to restart the appropriate services, then try again from your computer. Your machine, and all machines on your network, should now be able to access the VPN. In this configuration, only traffic matching the remote network will pass over the VPN - the rest of your traffic will be routed to the Internet in the normal fashion. Now, in my next entry, I’ll tell you why I’m not using PPTP to connect myself back to my home network when I’m on the road.
News
Welcome to the new home for the Code Lemur blog … rebeccapeck.org! I’ve sat on this domain for six years - I don’t know why it took me so long to port my blog from wordpress.com over to here. Nonetheless, it is done now. And hopefully I’ll find time to update it more with musings about my life and adventures writing code in dot-com.
Conferences
I’ll be attending the MySQL Conference in Santa Clara, California this year. This will actually be my first time attending this conference, so I’m looking forward to it. Also, my coworker Brian Moon will be speaking at the conference on “What is memcached and What Does It Do,” so pop in and see him as well!
Randomness
Newsweek, in 1995, published an article by Clifford Stoll titled “Hype alert: Why cyberspace isn’t, and will never be, nirvana.” Well, now it’s 15 years later - a relative blink of an eye. Hell, I can remember what I was doing back in 1995: a kid playing with this newfangled thing called “the Internet” that very few people understood, but that some visionaries had the foresight to realize was going to completely change the world. Let’s see some of the areas where Stoll got it absolutely wrong: “The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.” Pretty much every newspaper has some online presence, from the largest, like the New York Times, to the smallest hometown paper, like the O-A News. Every instrument of government is now connected to the Internet, and contacting my representatives happens online, making it easier than ever for them to ignore me. He is correct that no CD-ROM will ever replace a competent teacher - although we don’t use CD-ROMs anymore. While all this technology is great, instruction will continue to be the domain of humans for the foreseeable future; technology just makes instruction easier and more fun. “Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet.” Uh, sure. Amazon.com. BarnesandNoble.com. Kindle. Nook. iPad. I can buy wirelessly, over the air, anywhere I am. Then there’s cyberbusiness. We’re promised instant catalog shopping–just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month?
Even if there were a trustworthy way to send money over the Internet–which there isn’t–the network is missing a most essential ingredient of capitalism: salespeople. Yup. All of that has happened. Moreover, I’ve done almost all of it in just the last month! I buy online all the time. I haven’t bought an airline ticket any way other than online in years. Last weekend, when we went out to the Melting Pot, I made our reservation online. And while stores are not yet obsolete, there are certain times of the year - Christmas - when I won’t go anywhere near a brick-and-mortar establishment. The crowds are terrible. Why should I, when I can do it all online and have it delivered to my door? And the best part? I don’t have to deal with pushy salespeople! I’m not a moron - I know what I want, and I can use the - gasp - Internet to research! Computers and networks isolate us from one another. A network chat line is a limp substitute for meeting friends over coffee. I’ve heard this one for years. I have one word: Facebook. Right now, thanks to the Internet, I am more connected to the lives of those around me than at any point in my life. And while he is correct that it isn’t a substitute for human contact, my social circle is now larger than at any other time - and the Internet makes it easier to arrange that human contact. Granted, we have the luxury of 20/20 hindsight, but when someone declares that something “won’t” happen in the future, you should always think of this. Just because it wasn’t there in February of 1995 doesn’t mean that engineers wouldn’t solve the problems and get there. The surprising thing is that it happened so fast! Moreover, if the innovators of the 90s had listened to luddites like Stoll (and lest you think the piece was meant ironically, he wrote a whole book in the same vein that is - no shit - available at Amazon.com), we might not have had the complete information revolution that we’re still living through. So never let anyone tell you you can’t do something.
Stick with it, and look forward to seeing egg on their face in 15 years.
Apple
When you work across multiple devices and multiple computers on a daily basis, keeping the information you expect to be there the same across all of them used to be a monstrous pain. This is where synchronization comes in. I have three “computers” I use every day: my iMac, my MacBook Pro, and my iPhone. On each of those computers, I have several programs that may need to access the same data. Bookmarks are synchronized using Xmarks, which lets me sync them across Safari, Google Chrome and Firefox. And because the bookmarks are sync’d to Safari via a background process, I can use MobileMe to sync them to my iPhone. All this happens in the background, without me having to think about it. I just add a bookmark somewhere, and minutes later it’s reflected everywhere else. Email rules, accounts and signatures are synchronized via MobileMe and appear on all my computers and my iPhone. Contacts are sync’d via MobileMe and appear everywhere. Same with calendars, but calendars are the real win: I can make a calendar entry on my iPhone, and it’s instantly sync’d to the calendars on my laptop and desktop. I also have some files and programs that I need access to; I sync those across all my devices via iDisk, and can get to them everywhere, even on my iPhone. I even created a directory in there called “Scripts”; with a change to my bash path on my Macs, any scripts I write are sync’d too. And all this stuff happens more or less instantly and completely transparently to me, via the Internet - and over the air for the iPhone. I don’t even have to plug anything in. It just happens. I can’t believe computers ever worked any other way, and there is no way I could do without it now. Xmarks is free. MobileMe is $99 a year, but totally worth it simply for the headache I save in not having to deal with disparate data spread over three devices.
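The “Scripts” trick is nothing more than a PATH change. As a sketch (the iDisk mount point is from my setup; yours may differ), the lines in ~/.bash_profile look something like this:

```shell
# Add the MobileMe-synced iDisk Scripts folder to the shell's search path.
SCRIPTS_DIR="/Volumes/iDisk/Scripts"
export PATH="$PATH:$SCRIPTS_DIR"

# Sanity check: confirm the directory is now on PATH.
echo "$PATH" | grep -q "$SCRIPTS_DIR" && echo "Scripts dir on PATH"
```

After that, any script dropped into the synced folder is runnable by name from any Mac that mounts the iDisk.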
Apache
In working on a side project with a few friendly developers, we decided to set up a Subversion repository and a Trac bug and issue tracker. Both of these, in normal setups, rely on HTTP authentication. So, since we already had an authentication database as part of the project, my natural first thought was to find a way to authenticate Trac and Subversion against our existing MySQL authentication database, rather than relying on Apache passwd files that would have to be updated separately. Surprisingly, this was more difficult than it sounded. My first thought was to try mod_auth_mysql. However, from the front page, it looks as if this project has not been updated since 2005 and is likely not being actively maintained. Nonetheless, I gave it a shot and, surprisingly, got it mostly working against Apache 2.2.14. Notice I said “mostly.” It would authenticate about 50% of the time, while filling the Apache error logs with fun things like:

[Sat Feb 13 11:11:27 2010] [error] [client -.-.-.-] MySQL ERROR: Lost connection to MySQL server at 'reading initial communication packet', system error: 0
[Sat Feb 13 11:11:28 2010] [notice] child pid 19074 exit signal Segmentation fault (11)
[Sat Feb 13 11:34:14 2010] [error] [client -.-.-.-] MySQL ERROR: Lost connection to MySQL server during query:
[Sat Feb 13 11:34:15 2010] [error] [client -.-.-.-] MySQL ERROR: MySQL server has gone away:

Rather than tear into this and try to figure out why a five-year-old auth module wasn’t working against far newer code, with very little to actually go on, I concluded that it wasn’t compatible and looked for a different solution. That’s when I came across mod_authnz_external. If you’re not familiar with this module, it allows you to authenticate against a program or script running on your system, therefore letting you auth against anything you want - a script talking to a database, PAM system logins, LDAP, pretty much anything you have access to.
All you have to do is write the glue code. In pipe mode, mod_authnz_external uses the pwauth format: it passes the username and password on stdin, each terminated with a newline, and the script uses its exit code to tell Apache whether or not the login was valid. Knowing that, it’s pretty easy to write a little script to read the username and password, run a query, and return the result:

#!/usr/bin/php
<?php
include "secure_prepend.php";
include "database.php";

$fp = fopen("php://stdin", "r");
$username = stream_get_line($fp, 1024, "\n");
$password = stream_get_line($fp, 1024, "\n");

$sql = "select user_id from users where username='%s' and password='%s' and disabled=0";
$sql = sprintf($sql, $db->escape_string($username), $db->escape_string($password));
$user = $db->get_row($sql);

if (!empty($user)) {
    exit(0);
}
exit(1);
?>

Then, you just hook this into your Apache config for Trac or Subversion:

AddExternalAuth auth /path/to/authenticator/script
SetExternalAuthMethod auth pipe

<Location />
    DAV svn
    SVNPath /path/to/svn
    AuthName "SVN"
    AuthType Basic
    AuthBasicProvider external
    AuthExternal auth
    require valid-user
</Location>

Restart, and it should all be working. Some may argue that the true “right” way to do this is LDAP. But with just three of us, LDAP is overkill, especially when we already have the rest of the database stuff in place. The big advantage to this, even over mod_auth_mysql, is the amount of processing you can do on login. You can run any number of queries in your authenticator script, rather than just one. You can update a last-login or last-commit date, for instance. Or you can join tables for group checking - say you want someone to have access to Trac, but not Subversion. You can do that with this.
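Since the pipe protocol is just “username, newline, password, newline, read the exit code,” you can smoke-test any authenticator from the shell before wiring it into Apache. This is my own sketch using a stand-in script so it’s self-contained; substitute the path to your real authenticator:

```shell
# Stand-in pwauth-style authenticator (replace with your real script):
# reads username and password from stdin, exits 0 on success, 1 on failure.
cat > /tmp/fake-auth.sh <<'EOF'
#!/bin/sh
read username
read password
[ "$username" = "alice" ] && [ "$password" = "secret" ] && exit 0
exit 1
EOF
chmod +x /tmp/fake-auth.sh

# Feed it credentials exactly the way mod_authnz_external's pipe mode does.
printf 'alice\nsecret\n' | /tmp/fake-auth.sh && echo "auth ok"
printf 'alice\nwrong\n'  | /tmp/fake-auth.sh || echo "auth rejected"
```

Running this prints “auth ok” followed by “auth rejected,” confirming both exit paths behave the way Apache expects.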