I hate RPM packaging.

One of my side duties at work is preparing a disk that (ab)uses our upgrade process to flash the BIOS on the RAID controllers every time IBM releases yet another attempt to make it work right. In the past, what I've had to do is make an RPM that installs a couple of ".ufi" files, and then the %post part of the RPM uses the arcconf program that's already on the system to flash the ROMs. That was fine and dandy, until IBM decided that instead of distributing a ufi file, they'd distribute an ELF binary that included within it the ufi file, the arcconf program, and some other cruft like a script to run it and instructions. Fine, I thought, rather than bothering to unravel all that crap, I'd just package the executable into the RPM and run it in the %post. And it didn't work. After much sweating and swearing, I finally got it working.

Did you know that the rpmbuild process automatically does a "strip" on any ELF binaries it finds? I didn't. Did you know that IBM's packaging of a binary file inside an ELF binary doesn't work if you strip the file? I didn't. Did you know that the command to tell rpmbuild NOT to strip the file is almost completely undocumented and obscure to the point of pointlessness? I didn't. Did you know that the best reference for building RPMs, "Maximum RPM", is no longer available on rpm.org, and the replacement doesn't have an index or a search function? I didn't.

Am I annoyed? You bet.

For the benefit of future searchers, here's how you keep RPM from stripping your binaries. Put the following in your RPM spec file, preferably near the top:

# This stops RPM from stripping the .bin files, which breaks them.
%define __os_install_post \
/usr/lib/rpm/redhat/brp-compress \
/usr/lib/rpm/brp-python-bytecompile \
/usr/lib/rpm/redhat/brp-java-repack-jars \
%{nil}

Intuitive, eh?

Busy day (Busy? I just spent 4 hours burying the cat!)

Let’s see, today I

  • Fixed a bug that I've been working on for over a week (which I would have fixed in a day if the China team hadn't put in a kluge to hide the most visible symptom). Oh, and the root cause was a module the China team had written that violated a basic assumption of my pre-existing GUI code.
  • Had a job interview at Paychex, which went pretty well but included a strange little math test at the end; it was fun, but I'm not sure how relevant it was.
  • Went for a paddle – I meant to make six miles, but I only managed four because my shoulder is bugging me.
  • Got a call from the sleep clinic at Sleep Insights "reminding" me that I have a consult appointment at 11:20 am tomorrow, which is kind of strange because I have a sleep study scheduled at a completely different sleep clinic tomorrow evening.

I thought about writing more on each of those things, but I figured my blog is boring enough without the help. So if you really need more details, comment and I’ll inflict more detail on you.

Drinking from the firehose

StackOverflow is now in open beta, so anybody can sign up and participate. And evidently, anybody has. The quality of the questions has gone way down, and the quantity has gone way up. It used to be that I'd stop back every 15-30 minutes and hit refresh, and there'd be a few new questions, but a couple I'd already seen would still be on the "Newest Questions" page. Now when I do that, not only have all the questions I've already seen been shoved off page 1, sometimes they aren't even on page 2.

And a lot of the questioners are obviously not looking at the suggestions as they type their subject line, because one of the really nifty features of StackOverflow is that as you type the subject, it picks out keywords and shows you other questions with the same keywords. If you pay attention, you'll often find your question has already been asked and answered. So seeing questions that you know were already answered is a prime indicator that people aren't paying attention to that feature.

In some ways, it reminds me of September in the good old days of Usenet. Hopefully it will calm down a bit after a while.

Actually, that reminds me of something – on day 1 of the open beta, somebody asked “So how is StackOverflow just not a re-implementation of Usenet groups”, which quickly got deleted as off-topic or moderated down so far that I couldn’t see it any more. (Which pretty much answers that question, doesn’t it?) I have some thoughts about that, but I should probably leave that for another post.

Nav data update

I'm testing a new update script for my navaid.com waypoint database. The old update scripts were written for when I was running on MySQL, and I've switched to PostGIS to support the new iPhone version of the CoPilot flight planning program. One of the salient features of the new iPhone version is that it tries to be smart about downloading waypoints as you need them. One of the ways it does that is by asking my server for all the points in a particular area that have changed since a given date. The app keeps track of all the "areas" it has seen and when each was last updated, and asks for an update of those areas at certain intervals. That means I have to keep track of when a point was last updated, and also what "area" a point is in.

For the areas, I use a pseudo-quadtree where I allow only 500 points in an "area"; when a cell gets more than that, I split it into four sub-cells and mark the original cell as "superseded". The new sub-cells have a "supersedes" value, so if the app asks for an area X, and area X has been split, I can say "X has been superseded, and here are the area ids A, B, C, and D that supersede it."
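That splitting bookkeeping is the fiddly part, so here's a minimal sketch of how it could look. Everything in it is illustrative: the db helper calls (get_area_bounds, create_area, and so on) are hypothetical stand-ins for the real PostGIS queries, not my actual schema.

# Illustrative sketch only -- the db.* helpers are hypothetical stand-ins
# for the real PostGIS queries, not the actual navaid.com schema.

MAX_POINTS_PER_AREA = 500

def split_area(db, area_id):
    """Split an over-full area into four sub-cells and mark it superseded."""
    west, south, east, north = db.get_area_bounds(area_id)
    mid_lon = (west + east) / 2.0
    mid_lat = (south + north) / 2.0
    quadrants = [
        (west, south, mid_lon, mid_lat),   # SW
        (mid_lon, south, east, mid_lat),   # SE
        (west, mid_lat, mid_lon, north),   # NW
        (mid_lon, mid_lat, east, north),   # NE
    ]
    # Each new sub-cell records which cell it supersedes.
    child_ids = [db.create_area(bounds, supersedes=area_id) for bounds in quadrants]
    db.mark_area_superseded(area_id, superseded_by=child_ids)
    db.reassign_points(from_area=area_id, to_areas=child_ids)
    return child_ids

def updates_for_area(db, area_id, since):
    """Answer the app's "what changed in area X since date D" question."""
    area = db.get_area(area_id)
    if area.superseded:
        # The area was split; tell the app which sub-cells replaced it.
        return {"superseded_by": area.superseded_by}
    return {"points": db.points_changed_since(area_id, since)}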

But all this means that my new update scripts have to get the new data for a waypoint, figure out which old waypoint it is equivalent to (even if the waypoint has been resurveyed and is at a slightly different location and/or its id has changed), and only save the point anew if something significant has changed. Oh, and if the new data is missing information that the old data has, they have to try to be smart about keeping the old data. For instance, George Plews' Airports In Canada web site has data for airports in Canada that I can't get any other way, but it also has data for airports that either were in the DAFIF data or are in the FAA data, and those two sources often have much more information about runways and communications frequencies than Plews does. So I want his latest data, but I don't want to lose the other stuff that he doesn't have.
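To make that "keep what the new source doesn't have" rule concrete, here's a rough sketch of the merge as I think of it. The field handling and the significantly_changed test are simplified stand-ins, not the real script.

# Rough sketch of the merge rule -- field names are illustrative only.

def merge_waypoint(old, new):
    """Take fresh values from the new source, but keep any old fields the
    new record is missing (e.g. runway or frequency data from DAFIF/FAA
    that a Plews record doesn't carry)."""
    merged = dict(old)
    for field, value in new.items():
        if value not in (None, ""):
            merged[field] = value
    return merged

def significantly_changed(old, merged, ignore=("last_updated",)):
    """Only re-save the waypoint if something meaningful differs."""
    return any(
        merged.get(field) != old.get(field)
        for field in merged
        if field not in ignore
    )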

One of the things I do to match up the old with the new data is look with a bit of geographic "slop" – in the case where the ID matches I look within 0.05 degrees latitude and 0.05 degrees longitude (which, believe me, in Alaska is way too big an area), and if the IDs don't match, I look within 0.025 degrees latitude and 0.025 degrees longitude. These numbers were chosen extremely arbitrarily, and they still cause a bit of a problem with a couple of airports near the US/Canada border: when I load the FAA data it changes a Canadian airport to the nearby US airport, and then when I load the Plews data it changes it back.
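For what it's worth, that lookup boils down to something like the sketch below. The 0.05/0.025 degree numbers are the ones from the paragraph above; find_points_near() is a hypothetical helper standing in for a PostGIS bounding-box query.

# Sketch of the "geographic slop" matching -- find_points_near() is a
# hypothetical helper standing in for a PostGIS bounding-box query.

ID_MATCH_SLOP = 0.05      # degrees of lat/lon allowed when the IDs match
NO_ID_MATCH_SLOP = 0.025  # degrees allowed when the IDs don't match

def find_old_match(db, new_point):
    # If an existing point has the same ID, allow the wider search box.
    for old in db.find_points_near(new_point.lat, new_point.lon,
                                   slop=ID_MATCH_SLOP):
        if old.ident == new_point.ident:
            return old
    # Otherwise fall back to any point inside the tighter box.
    nearby = db.find_points_near(new_point.lat, new_point.lon,
                                 slop=NO_ID_MATCH_SLOP)
    return nearby[0] if nearby else None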

Testing out my load scripts, I discovered two things:

  1. Sometimes the resurveyed point has moved enough that it’s in a different “area”. And that’s going to confuse the hell out of the algorithm that the app uses for getting updates, because it will ask for updates for the old area, and not get anything for that point. That’s going to require some thought to fix.
  2. In the next FAA data load, they've actually moved a couple of airports by 1.0 degrees of latitude or longitude. And judging by what I'm seeing on the Our Airports site maps, it appears the new values are correct, so the old ones must have been a data entry error. In this case, my "match the old" algorithm didn't find anything to match within its radius of action, so it made a new point and marked the old one as deleted. The app should deal with that nicely.

Hmmm. Need to think how to handle this…

Stackoverflow comes through again

There is a major function in our program called the Migrator. Its purpose is to "migrate" content from content storage to the various feature players, either manually under user control or automatically based on what is on the schedule. Unfortunately, the original requirements, put in place for a potential customer who ended up going a different way, were that the user had to be able to see and control each individual file in the migration job, for each destination. This meant there is a tree showing each destination, with all the top-level containers (playlists), the middle-level containers below those (CPLs), and then all the individual files (track files, fonts, projector control files, etc.) below those, all wrapped up in a nice little collapsible tree. (Thanks to Sun for providing an example implementation of JTreeTable on their web site.)