Geocoding is hard…

One of the problems I’m having with this data load is that instead of telling you what country each waypoint is in, they tell you the “responsible authority”. Ok, normally that’s not too hard to map to a country, even though sometimes there are multiple authorities for a country (and the Czech Republic is super annoying because they designate every little flying club or airport owner as a “responsible authority”). That I can take care of with a simple lookup table – 305 entries, 90 of them in the Czech Republic. The problem occurs because sometimes the “responsible authority” covers multiple countries: “Serbia/Montenegro” in the Balkans, “Comoros/Madagascar/Reunion” in the Indian Ocean, “Aruba/Netherlands Antilles” in the Caribbean, and “Kiribati/Tuvalu”, “Kiribati/Line Islands”, and “American Samoa/Western Samoa” in the Pacific. (Although didn’t I read somewhere that the Netherlands Antilles recently split up into a bunch of separate countries?) Anyway, I want to disambiguate these and determine which country points in these merged authorities are in.
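The shape of the approach, sketched in Python rather than my actual Perl loaders (the authority names below are just illustrative samples, not my real table): single-country authorities resolve through a plain lookup, and multi-country ones get flagged so a second, coordinate-based pass can sort them out.

```python
# Sketch: resolve a "responsible authority" to a country code.
# Single-country authorities come straight from a lookup table;
# multi-country authorities return their candidate list so a later
# pass can disambiguate by the waypoint's actual coordinates.

# Illustrative entries only -- the real table has 305 of these.
AUTHORITY_TO_COUNTRY = {
    "France": "FR",
    "Aeroklub Ceske Republiky": "CZ",   # one of the ~90 Czech entries
}

# Authorities that span more than one country.
MULTI_COUNTRY = {
    "Serbia/Montenegro": ["RS", "ME"],
    "Comoros/Madagascar/Reunion": ["KM", "MG", "RE"],
    "Kiribati/Tuvalu": ["KI", "TV"],
}

def resolve_authority(authority):
    """Return a country code, or a candidate list if ambiguous."""
    if authority in AUTHORITY_TO_COUNTRY:
        return AUTHORITY_TO_COUNTRY[authority]
    if authority in MULTI_COUNTRY:
        return MULTI_COUNTRY[authority]   # needs reverse geocoding
    raise KeyError("unknown authority: %s" % authority)
```

Anything that comes back as a list is exactly the set of points I need to reverse geocode.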

First I thought I’d look for the closest point in my existing database. Turns out, some of the new points are near borders so I end up getting the wrong country. Aha, I thought, I’ll use “Reverse Geocoding”. A while back I used a service at geonames.org to reverse geocode some points to determine which Canadian province they were in. I tried it, and the service is really slow to respond. So I thought I’d try Google’s new reverse geocoding. That’s when I discovered a couple of flies in my oatmeal:

  1. There are locations in the world where Google returns no results – in one case, apparently because the point is slightly offshore according to Google Maps (although if you switch to satellite view you can see the point is actually on land). In another case the result is puzzling: yes, it’s in Kosovo, so maybe it’s disputed territory, but it’s not too far from the village of Lluge, which Google does recognize.
  2. Addresses in Kosovo show up in the “formatted_address” field as “Lluge, Kosovo”, but the country code that is returned is Serbia’s. The data I’ve used before comes from the US government, and since the US government officially recognizes Kosovo, it would be inconsistent to label the new stuff as being from Serbia instead of Kosovo.

Oh, and geonames.org? It eventually seems to do the right thing for both of the above cases, although the country code it returns for Kosovo is “XK” (it appears that there isn’t an official ISO country code for Kosovo – I’d previously seen “KS”). I guess I’ll have to experiment more.
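For the record, the geonames call itself is simple enough. Here’s a sketch (Python rather than Perl, and “demo_user” is a placeholder – you register your own username with geonames). Their countryCode endpoint returns a bare two-letter code as plain text, so the parsing is trivial:

```python
# Sketch of a reverse-geocode lookup against geonames.org's
# countryCode service, which returns a bare ISO code as plain text.
# The username "demo_user" is a placeholder for a registered account.
from urllib.request import urlopen
from urllib.parse import urlencode

GEONAMES_URL = "http://api.geonames.org/countryCode"

def country_code_url(lat, lng, username="demo_user"):
    """Build the request URL for a lat/lng pair."""
    query = urlencode({"lat": lat, "lng": lng, "username": username})
    return "%s?%s" % (GEONAMES_URL, query)

def parse_country_code(body):
    """The service returns e.g. 'XK\n'; strip and normalize it."""
    return body.strip().upper()

def country_code(lat, lng, username="demo_user"):
    """Fetch the country code for a point (network call, can be slow)."""
    with urlopen(country_code_url(lat, lng, username)) as resp:
        return parse_country_code(resp.read().decode("utf-8"))
```

The slowness I complained about is in the service itself, not in anything on my end, so batching and caching the results locally is pretty much mandatory.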

I may need to rethink this…

I am currently working on a new data source for the waypoint generator. Unfortunately, because of the way it’s licensed, it’s only going to be for the iPhone version of CoPilot, and I can’t make it available for GPX and other users. All of my data loaders have, up until now, been written in Perl, and I have a really good Perl module that performs many of the loading tasks, such as merging existing data with new data.

The new data comes in the form of a gigantic XML file with a kind of weird schema. The provider actually provides both the gigantic file and a smaller set of updates on the 28 day cycle favoured by the ICAO, so hopefully I’ll only have to parse the gigantic file once and then process the updates. I installed XML::SAX and Expat, and coded up a preliminary decoder to extract some (but not all) of the information that I need, just to make sure I was doing it right. I ran it with a subset of the data, and it seemed to be doing ok, and then just for grins, while I was working on improving the code, I fired it off on the whole file. That was 3 days (72 hours) ago. It’s still running. Unfortunately I didn’t put in any progress messages, so I don’t know where it is in the file, only that it’s past the airport section that I care about. I profiled the subset data, and verified that Perl is spending most of its time in Perl code, not in native code – some of it mine, some of it XML::SAX, and some of it in Moose.
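Lesson learned about progress messages. If I were doing it again, the handler would count elements and report as it goes. Here’s a sketch of that pattern using Python’s xml.sax as a stand-in (my real handler is Perl and extracts far more than a count):

```python
# Sketch: a streaming (SAX) parser that counts elements and prints
# a progress message every N elements, so a multi-day run at least
# tells you roughly where it is in the file.
import xml.sax

class ProgressHandler(xml.sax.ContentHandler):
    def __init__(self, report_every=100000):
        super().__init__()
        self.count = 0
        self.report_every = report_every

    def startElement(self, name, attrs):
        self.count += 1
        if self.count % self.report_every == 0:
            print("processed %d elements, currently at <%s>"
                  % (self.count, name))

# Tiny demo document; a real run would use xml.sax.parse(filehandle, handler).
handler = ProgressHandler(report_every=2)
xml.sax.parseString(b"<a><b/><c/><d/></a>", handler)
```

The overhead of one counter increment per element is nothing next to what the SAX machinery itself costs, so there’s no excuse for leaving it out.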

So here’s the conundrum: Do I spend the time to re-write this loader code in another language and hope it’s faster? Or do I accept the fact that this is going to take forever, but hopefully I’ll only have to do it once, and then the updates will be small enough that I can do them in Perl? Because re-writing in another language means re-writing all the data-merging and validation logic, and could be a potentially huge project. And I won’t know until it’s all working whether it’s going to be faster.

Update: I profiled the Perl program with a semi-large dataset. Here are the results:

dprofpp
Total Elapsed Time = 56.86461 Seconds
User+System Time = 46.10461 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
20.5 9.494 23.288 397862 0.0000 0.0001 XML::SAX::Expat::_handle_start
15.4 7.136 12.820 131698 0.0000 0.0000 XML::SAX::Expat::_handle_char
14.7 6.787 55.922 1 6.7867 55.921 XML::Parser::Expat::ParseStream
13.6 6.311 12.977 397862 0.0000 0.0000 XML::SAX::Expat::_handle_end
7.07 3.258 3.258 472462 0.0000 0.0000 XML::NamespaceSupport::_get_ns_details
6.79 3.132 3.132 397862 0.0000 0.0000 XML::NamespaceSupport::push_context
6.48 2.986 5.685 131698 0.0000 0.0000 XML::SAX::Base::characters
4.24 1.953 1.953 131698 0.0000 0.0000 EADHandler::characters
3.87 1.786 4.411 397862 0.0000 0.0000 EADHandler::start_element
3.78 1.744 12.308 211270 0.0000 0.0000 XML::SAX::Base::__ANON__
3.69 1.702 1.838 4000 0.0004 0.0005 Data::Dumper::Dumpxs
2.55 1.174 5.870 397862 0.0000 0.0000 XML::SAX::Base::start_element
2.44 1.124 3.956 397862 0.0000 0.0000 XML::NamespaceSupport::process_element_name
1.93 0.892 0.892 397862 0.0000 0.0000 XML::NamespaceSupport::pop_context
1.85 0.854 5.768 397862 0.0000 0.0000 XML::SAX::Base::end_element

Note how it’s dominated by XML::SAX::Expat.

We’re back, baby!

With new hardware donated by a very generous friend, I’m back up and running again. Hopefully I’ll have time to post some of the millions of things that have happened in the couple of weeks I’ve been down, but for now I’ll say that the old “new” server died with a million errors that looked SATA-related, the disks checked out fine, and they’ve now been placed in new hardware. Oh, and you never know what you’ve been leaving out of your backups until *after* you type “mkfs.ext3 -j -c -c /dev/xen-space/xen1-disk”.

So what exactly did you verify?

I spent nearly an hour on the phone with Time Fucking Warner (yes, that’s the correct name for them) trying to get them to set up an install of a CableCard for my new TiVo. They have to come out to do it, although last time the guy came out, plugged it into the TiVo, then called into the office, read them a couple of numbers, verified that it was set up on their side, and then left. Yeah, I couldn’t do that.

Anyway, getting them to set up the appointment involved 3 long times on hold and getting transferred twice. The third guy once again asked me for my phone number, and then asked me for my full name, email address and billing address “for verification”, just like the other two people had done (evidently they can transfer a call, but can’t transfer the information they’ve collected on you). And yet, after talking to him for some time, he couldn’t seem to understand that we already had a CableCard in the other TiVo, and some other details on what he was telling me about my account didn’t match up. Finally he called me Howard, and when I said I wasn’t Howard, he asked me what my phone number was. And then he said “oh, I had your number in as [number which is a simple transposition of mine]”.

Which left me scratching my head about all the other verification of name, home phone and mailing address. Had he even listened to what I’d said? What is the point of those verifications if they could end up sending out a technician to the wrong address because they got the phone number entered wrong? And they have the nerve to try to get you to dump the phone company and switch to them?

Same shit, different job…

I’ve written a few times (here and here) about how every time you change something, every bug anywhere near that area now becomes your fault.

In my current job, I was in charge of a system called “Entitlements” that controlled who could do what and could access what parts of the system. Which means that dozens of new defects come to me with a note from the business analyst or equivalent person saying “looks like an Entitlement issue”. And I have to look at it and say “no, the reason they can’t access that part of the site isn’t because of Entitlements, it’s because NOBODY WROTE THAT PART OF THE SITE YET”.

Side note: we’re using “Agile Development”, which is a short form way of saying “we don’t know what the fuck we’re doing from day to day, and we’re not sure what has been done and what hasn’t until somebody complains that it’s not done”.

The good part is that because we’re Agile, that means when I discover that the problem is that nobody wrote that part of the site yet, I get to write it. So yay me.