More geo coding

I got the airport data nailed down, at least all the stuff I need for iPhone CoPilot (which, unlike the other databases I provide, doesn’t care about communications frequencies or runways). And now I’m looking at “waypoints”: points in space that are sometimes defined by the intersection of a specific radial or bearing from this navigation aid and a specific radial or bearing from that navigation aid, sometimes by a distance and radial from one navigation aid, or, in the case of GPS instrument approaches and air routes, just arbitrary points in space.

The difficulty with waypoints is that their definition in the file doesn’t have any sort of location information other than latitude and longitude, which means I have to hit the geonames server for every one (and so far I’ve gone over my hourly limit with them multiple times while testing this code), and that, unlike airports, they can sometimes be out in the middle of the ocean somewhere, in which case the geonames “countrySubdivision” service just says “I have no idea what country this is in”.
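The lookup itself is just an HTTP GET against the geonames countrySubdivision service. Here’s a minimal sketch in Python (my actual loader is Perl; the error handling here is bare-bones, and you’d need your own geonames username):

```python
import json
import urllib.parse
import urllib.request

GEONAMES_URL = "http://api.geonames.org/countrySubdivisionJSON"

def subdivision_url(lat, lng, username, radius=None):
    """Build the countrySubdivision query URL; radius is optional slop in km."""
    params = {"lat": lat, "lng": lng, "username": username}
    if radius is not None:
        params["radius"] = radius
    return GEONAMES_URL + "?" + urllib.parse.urlencode(params)

def lookup_subdivision(lat, lng, username, radius=None):
    """Return (countryCode, adminName1), or None if geonames has no idea."""
    with urllib.request.urlopen(subdivision_url(lat, lng, username, radius)) as resp:
        data = json.load(resp)
    if "countryCode" not in data:
        return None  # point is in the ocean, or off-shore of a small island
    return data["countryCode"], data.get("adminName1")
```

One call per waypoint is what keeps running me into the hourly limit.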

Unfortunately, my code doesn’t like it when a point isn’t in a country. I need to assign every point a two-letter country code. (I use the FIPS 10.4 code instead of ISO-3166-1 because my first world data came from DAFIF, which used FIPS 10.4, and I stuck with it. I’d probably switch to ISO-3166-1 except I have no idea how to do it painlessly.)

In my program to load FAA data, I do some messing around trying to map the country names they use to FIPS 10.4, and sometimes I’ve done some things I’m not proud of, like mapping “French West Indies” to “GP” (the code for Guadeloupe, which is just one of the four territories that make up the French West Indies) or “Trust Territories” to “JQ” (the code for Johnston Atoll). That one is really dodgy, because the “Trust Territories” were broken down into the Republic of the Marshall Islands (“RM”), the Federated States of Micronesia (“FM”), the Commonwealth of the Northern Mariana Islands (“CQ”) and the Republic of Palau (“PS”). Actually, if I looked through the FAA data these days, I’d probably find they never use the name “Trust Territories” any more. Another one that comes up is the United States Minor Outlying Islands, which has an ISO-3166-1 code of “UM”, but which consists of nine separate “insular areas” that have their own FIPS 10.4 codes.
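The mapping itself is nothing fancier than a lookup table. A Python sketch of a few entries (the real table lives in my Perl loader; only the two dodgy mappings are discussed above, the rest are illustrative):

```python
# FAA country name -> FIPS 10.4 code. The first two entries are the
# admittedly dodgy mappings discussed above; the rest are illustrative.
FAA_NAME_TO_FIPS = {
    "French West Indies": "GP",  # really four territories; GP is just Guadeloupe
    "Trust Territories": "JQ",   # really RM, FM, CQ and PS; JQ is Johnston Atoll
    "Marshall Islands": "RM",
    "Micronesia, Federated States of": "FM",
    "Northern Mariana Islands": "CQ",
    "Palau": "PS",
}

def fips_for_faa_name(name):
    """Map an FAA country name to a FIPS 10.4 code, or None if unknown."""
    return FAA_NAME_TO_FIPS.get(name)
```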

So my thought was to ask the geonames “ocean” service which body of water each of these points is in, and then make up a phoney country code for each ocean. Unfortunately there aren’t just a few oceans, there are dozens of them – everything from the Arabian Sea to the South Pacific Ocean. So many that I can’t come up with semi-mnemonic identifiers for them. So, using the fact that FIPS 10.4 codes never start with O or X, I just went through and assigned anything with “Ocean” in the name a code starting with “O” and anything else a code starting with “X”. It sucks, but it will work. Sort of. I hope.
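Mechanically, that assignment can look something like this (a Python sketch; the sequential second letter is an assumption – I just need the codes to be unique and to start with O or X):

```python
class OceanCodes:
    """Assign phoney two-letter codes to bodies of water.

    FIPS 10.4 never issues codes starting with O or X, so anything with
    "Ocean" in its name gets an O-code and everything else an X-code.
    The second letter is just handed out sequentially.
    """
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def __init__(self):
        self._codes = {}
        self._next = {"O": 0, "X": 0}

    def code_for(self, name):
        if name not in self._codes:
            prefix = "O" if "Ocean" in name else "X"
            self._codes[name] = prefix + self.ALPHABET[self._next[prefix]]
            self._next[prefix] += 1
        return self._codes[name]
```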

The long-term solution is to separate the code I use for iPhone CoPilot further from the rest of the navaid.com code, and not require a non-null country code in iPhone CoPilot. And also to try to migrate to ISO-3166-1 country codes.

More geocoding nonsense

I had to go back to using geonames.org because of the problems I’d already told you about with Google’s geocoder. But geonames.org has a very strange bug. I’d experimented and found that sometimes it didn’t return anything, especially for something like a point just off-shore of a small island nation. You’re supposed to be able to feed it a “radius” so it can apply some slop, and sure enough, applying a radius of 25 or so made sure that those points were getting a result. But that’s when I discovered that it was returning the wrong result for places like Pelee Island, which as I’m sure you’re all aware is a tiny little island in Lake Erie that’s part of Ontario, but is actually closer to Ohio. If you asked geonames for the country and subdivision with no radius, it would return Ontario, CA. But if you gave it a radius of 25, it would return Ohio, US. So I’ve got a dilemma – choose too small a radius, and it won’t find anything for some points, but choose too big a radius, and for some points it will return entirely the wrong thing.

So this is what I’m stuck with: I ask geonames for the country and subdivision with a radius of 1. If it doesn’t find anything, I multiply the radius by 5, sleep for 250 milliseconds (to be nice to the geonames server) and try again. So far that has found a result with a radius of 1 for 749 points, with a radius of 5 for 9 points, and with a radius of 25 for 3 points. It’s not a good thing – obviously it would be better if geonames returned the right thing the first time – but I’ve done a number of spot checks and it seems to be working.
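The retry loop is simple enough; a Python sketch of the logic (the real loader is Perl, and the `lookup` callable stands in for the geonames query):

```python
import time

RADII = (1, 5, 25)  # start at 1, multiply by 5 on each retry

def lookup_with_backoff(lookup, lat, lng, sleep=time.sleep):
    """Try the subdivision lookup with a growing radius.

    `lookup(lat, lng, radius)` should return a result or None; between
    attempts we sleep 250 ms to be nice to the geonames server.
    """
    for i, radius in enumerate(RADII):
        if i > 0:
            sleep(0.25)
        result = lookup(lat, lng, radius)
        if result is not None:
            return result
    return None
```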

Geocoding is hard…

One of the problems I’m having with this data load is that instead of telling you what country each waypoint is in, they tell you the “responsible authority”. Ok, normally that’s not too hard to map to a country, and sometimes there are multiple authorities for a country (and the Czech Republic is super annoying because they designate every little flying club or airport owner as a “responsible authority”). That I can take care of with a simple lookup table – 305 entries, 90 of them in the Czech Republic. The problem occurs because sometimes the “responsible authority” covers multiple countries: “Serbia/Montenegro” in the Balkans, “Comoros/Madagascar/Reunion” in the Indian Ocean, “Aruba/Netherlands Antilles” in the Caribbean, and “Kiribati/Tuvalu”, “Kiribati/Line Islands” and “American Samoa/Western Samoa” in the Pacific. (Although didn’t I read somewhere that the Netherlands Antilles recently split up into a bunch of separate countries?) Anyway, I want to disambiguate these and determine which country points in these merged authorities are in.
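The first step in disambiguating is mapping each merged authority to its candidate countries, and only then deciding per point. A Python sketch of the candidate table (the FIPS 10.4 codes here are my best reading of the standard – treat them as assumptions, not gospel):

```python
# Merged "responsible authority" -> candidate FIPS 10.4 country codes.
MERGED_AUTHORITIES = {
    "Serbia/Montenegro": ["RI", "MJ"],
    "Comoros/Madagascar/Reunion": ["CN", "MA", "RE"],
    "Aruba/Netherlands Antilles": ["AA", "NT"],
    "Kiribati/Tuvalu": ["KR", "TV"],
    "American Samoa/Western Samoa": ["AQ", "WS"],
}

def candidate_countries(authority):
    """Return the candidate country codes for a merged authority,
    or None for a single-country authority (handled by the main table)."""
    return MERGED_AUTHORITIES.get(authority)
```

Points under a single-country authority resolve directly from the 305-entry table; only points under one of these merged authorities need the reverse-geocoding step.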

First I thought I’d look for the closest point in my existing database. Turns out, some of the new points are near borders so I end up getting the wrong country. Aha, I thought, I’ll use “Reverse Geocoding”. A while back I used a service at geonames.org to reverse geocode some points to determine which Canadian province they were in. I tried it, and the service is really slow to respond. So I thought I’d try Google’s new reverse geocoding. That’s when I discovered a couple of flies in my oatmeal:

  1. There are locations in the world where Google returns no results. In one case I saw, it was because the point is slightly off-shore according to Google Maps (although if you switch to satellite view you can see the point is actually on land). In another case, the result is puzzling – yes, it’s in Kosovo, so maybe it’s disputed territory, but it’s not too far from the village of Lluge, which Google does recognize.
  2. Addresses in Kosovo show up in the “formatted_address” field as “Lluge, Kosovo”, but the country code that is returned is Serbia. The data I’ve used before comes from the US government, and since the US government officially recognizes Kosovo, it would be inconsistent to label the new stuff as from Serbia instead of Kosovo.

Oh, and geonames.org? It eventually seems to do the right thing for both of the above cases, although the country code it returns for Kosovo is “XK” (it appears that there isn’t an official ISO country code for Kosovo – I’d previously seen “KS”). I guess I’ll have to experiment more.
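Whatever I end up doing, I’ll want to fold the various Kosovo spellings into one. A tiny Python sketch of the idea (which placeholder code to settle on is my assumption – “XK” is what geonames hands back):

```python
# Kosovo has no official ISO 3166-1 code; "XK" is the user-assigned
# placeholder geonames returns, and I'd previously seen "KS" elsewhere.
KOSOVO_ALIASES = {"XK", "KS"}

def normalize_country(code):
    """Fold the Kosovo placeholder codes into one spelling."""
    return "XK" if code in KOSOVO_ALIASES else code
```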

I may need to rethink this…

I am currently working on a new data source for the waypoint generator. Unfortunately, because of the way it’s licensed, it’s only going to be for the iPhone version of CoPilot, and I can’t make it available for GPX and other users. All of my data loaders up until now have been written in Perl, and I have a really good Perl module that performs many of the loading tasks, such as merging existing data with new data.

The new data comes in the form of a gigantic XML file with a kind of weird schema. The provider actually provides both the gigantic file and a smaller set of updates on the 28-day cycle favoured by the ICAO, so hopefully I’ll only have to parse the gigantic file once, and then process the updates. I installed XML::SAX and Expat, and coded up a preliminary decoder to extract some (but not all) of the information that I need, just to make sure I was doing it right. I ran it with a subset of the data, and it seemed to be doing ok, and then, just for grins while I was working on improving the code, I fired it off on the whole file. That was 3 days (72 hours) ago. It’s still running. Unfortunately I didn’t put in any progress messages, so I don’t know where it is in the file, only that it’s past the airport section that I care about. I profiled the subset data, and verified that Perl is spending most of its time in Perl code, not in native code – some of it mine, some of it XML::SAX’s, and some of it Moose’s.
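One cheap lesson from that 72-hour run: a SAX handler should report progress. Here’s the idea sketched in Python’s xml.sax (my real handler is Perl and tracks far more state; the reporting interval is arbitrary):

```python
import sys
import xml.sax

class ProgressHandler(xml.sax.ContentHandler):
    """Count start tags and emit a progress message every `report_every` tags."""

    def __init__(self, report_every=100_000):
        super().__init__()
        self.count = 0
        self.report_every = report_every

    def startElement(self, name, attrs):
        self.count += 1
        if self.count % self.report_every == 0:
            self.report(f"processed {self.count} elements, currently at <{name}>")

    def report(self, message):
        print(message, file=sys.stderr)

# Usage: xml.sax.parse("gigantic_file.xml", ProgressHandler())
```

Even a message every hundred thousand elements would have told me where that three-day run actually was.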

So here’s the conundrum: do I spend the time to re-write this loader code in another language and hope it’s faster? Or do I accept the fact that this is going to take forever, but hope that I’ll only have to do it once and that the updates will be small enough that I can do them in Perl? Re-writing in another language means re-writing all the data merging and validation logic, which could be a potentially huge project. And I won’t know until it’s all working whether it’s going to be faster.

Update: I profiled the Perl program with a semi-large dataset. Here are the results:

dprofpp
Total Elapsed Time = 56.86461 Seconds
User+System Time = 46.10461 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
20.5 9.494 23.288 397862 0.0000 0.0001 XML::SAX::Expat::_handle_start
15.4 7.136 12.820 131698 0.0000 0.0000 XML::SAX::Expat::_handle_char
14.7 6.787 55.922 1 6.7867 55.921 XML::Parser::Expat::ParseStream
13.6 6.311 12.977 397862 0.0000 0.0000 XML::SAX::Expat::_handle_end
7.07 3.258 3.258 472462 0.0000 0.0000 XML::NamespaceSupport::_get_ns_details
6.79 3.132 3.132 397862 0.0000 0.0000 XML::NamespaceSupport::push_context
6.48 2.986 5.685 131698 0.0000 0.0000 XML::SAX::Base::characters
4.24 1.953 1.953 131698 0.0000 0.0000 EADHandler::characters
3.87 1.786 4.411 397862 0.0000 0.0000 EADHandler::start_element
3.78 1.744 12.308 211270 0.0000 0.0000 XML::SAX::Base::__ANON__
3.69 1.702 1.838 4000 0.0004 0.0005 Data::Dumper::Dumpxs
2.55 1.174 5.870 397862 0.0000 0.0000 XML::SAX::Base::start_element
2.44 1.124 3.956 397862 0.0000 0.0000 XML::NamespaceSupport::process_element_name
1.93 0.892 0.892 397862 0.0000 0.0000 XML::NamespaceSupport::pop_context
1.85 0.854 5.768 397862 0.0000 0.0000 XML::SAX::Base::end_element

Note how it’s dominated by XML::SAX::Expat.

We’re back, baby!

With new hardware donated by a very generous friend, I’m back up and running again. Hopefully I’ll have time to post some of the millions of things that have happened in the couple of weeks I’ve been down, but for now I’ll say that the old “new” server died with a million errors that looked SATA related, the disks checked out fine, and they’ve now been placed in new hardware. Oh, and you never know what you’ve been leaving out of your backups until *after* you type “mkfs.ext3 -j -c -c /dev/xen-space/xen1-disk”