Hmmm

I upgraded my blog to WordPress 2.5 because the damn thing was nagging me all the time about being back at 2.3.3. But now I discover that my theme doesn’t work right with the new code, and one of my favourite plugins, the LiveJournal CrossPoster, doesn’t work. Now I’ve either got to find a less ugly theme, or fix the old Maple theme to support the new comment code with the built-in Gravatars.

Update: I might have found the fix for LJXP!
In lj_crosspost.php, change

if (version_compare($wp_version, "2.1", "<")) {
    require_once(ABSPATH . '/wp-includes/template-functions-links.php');
}

to

if (version_compare($wp_version, "2.3", "<")) {
    require_once(ABSPATH . '/wp-includes/link-template.php');
} else if (version_compare($wp_version, "2.1", "<")) {
    require_once(ABSPATH . '/wp-includes/template-functions-links.php');
}

Update #2:
I officially hate this update. It keeps adding bogus </code> tags even though my tags were perfectly well closed when I saved them. Let’s try with block quotes instead?

And it just gets worse

On March 25th, in Rants and Revelations » Developer dumbassedness, I ranted about a developer who had checked in a bunch of stuff that required a special “upgrade DVD” without telling the other developers that this would be required. Well, it turns out it’s worse than that. Far worse. Not only did he not tell us about the magic DVD, but he hadn’t even tested the damn DVD. It’s now 2 weeks later, and his DVD *still* doesn’t work.

So now I’m caught in a vice – I had to “rebase” my development environment to the new build in order to deliver some bug fixes, and now my development environment and my test system are at different builds, which makes it hard to test things, and especially hard to use the remote debugger in Eclipse. And I can’t get them back in sync until this damn DVD is ready.

I’d also like to mention that I did a much more ambitious upgrade DVD a few years ago: his DVD upgrades from CentOS 5.0 to CentOS 5.1 and reformats the content partition from ext3 to xfs, while mine upgraded from RedHat 7.3 to CentOS 3.3. And I didn’t leave the rest of the developers out to dry, because I tested the hell out of it on my test system, reformatting it back to RedHat 7.3 and running versions of my upgrade script over and over for weeks before I put it into the development stream.

Oh, that’s not good

I got an email from one of the sysadmins at NCF saying that the news directory has run out of space. After poking around a bit, I’ve discovered that:

  • cron jobs, including the nightly expire job, haven’t run since March 18th
  • I haven’t been receiving emails sent to the NCF news account, possibly for even longer than that, which is why I didn’t notice when the system throttled 3 days ago. Normally newswatcher sends those emails, and I have them forwarded to SMS so I don’t miss them.

The sysadmin wonders if the cron jobs not running has anything to do with the DST change. The machine is ancient, and running an ancient version of Solaris.

Of course, the fact that I didn’t notice the lack of the daily news admin email in my morning scan-and-delete folder isn’t good, either.

This could work

I probably shouldn’t give too many details, but I’ve been in talks with a certain freeware developer over developing a flight planning application for a web connected hand-held device. (Anybody who knows anything about me can probably guess the developer and the device.)

My part would be a server app that would respond to requests for data from the device and send new data or updates. Nothing too different from what I’ve been doing before, but one of the things we’ve been talking about is managing “areas”. His concept was that if the user entered an id that wasn’t on the device already, my server would send the device a whole “area”, and the device would keep track of what areas it had in memory already, when they were last updated, and would occasionally request updates of the areas it knew. He thought that each area could be a whole country. The first thing that struck me about that is that if the point you asked for was in the US, you could be asking for thousands of waypoints (70,584 in the current database). That could take a long, long time on an EDGE network. Then we discussed maybe breaking it down by state or province in the US and Canada.
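
To give a toy picture of the bookkeeping he had in mind, here’s a sketch in Python. Nothing here is settled, all the names are mine, and the refresh interval is made up:

import time

REFRESH_AFTER = 7 * 24 * 3600  # made-up policy: re-request an area after a week

areas = {}  # area id -> {"fetched": timestamp, "waypoints": {...}}

def lookup(waypoint_id, area_for, fetch_area):
    """Find a waypoint, pulling its whole area from the server if needed.

    area_for(waypoint_id) and fetch_area(area_id) stand in for the real
    device and server calls, which do not exist yet.
    """
    area_id = area_for(waypoint_id)
    cached = areas.get(area_id)
    if cached is None or time.time() - cached["fetched"] > REFRESH_AFTER:
        # Not on the device yet (or stale): ask the server for the whole area.
        areas[area_id] = {"fetched": time.time(),
                          "waypoints": fetch_area(area_id)}
    return areas[area_id]["waypoints"].get(waypoint_id)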

But the thing is, I used to be a GIS (Geographic Information Systems) programmer. I know there are better ways. At first I started looking around for the HHCode algorithm, since I worked with Herman Varma and the Oracle guys implementing the original Oracle “Spatial Data Option”, until that scumbag Jim Rawlings screwed me out of three months’ pay. But I can’t find the source code anywhere.

So my next idea was a modified quad tree. Basically, when populating the database, I make a “rectangle” that incorporates the whole world and start adding points. When I hit a threshold, I subdivide that “rectangle” into 4 equal sub-rectangles, and move the points into whichever rectangle they belong to. This means that where points are sparse, the rectangles are large, and where they are dense, the rectangles are small. That way I’ve got some consistency in the size of the file to be sent to the device, and I’m not wasting people’s time sending the 19 waypoints in Wake Island, say, as an individual file.
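
Just to make the splitting concrete, here’s a rough sketch of it in Python. It isn’t the program I actually wrote, and the threshold of 500 points per cell is a number I pulled out of the air:

THRESHOLD = 500  # made-up cap on points per cell before it subdivides

class QuadCell:
    """A rectangle covering min_lon..max_lon by min_lat..max_lat."""

    def __init__(self, min_lon, min_lat, max_lon, max_lat):
        self.min_lon, self.min_lat = min_lon, min_lat
        self.max_lon, self.max_lat = max_lon, max_lat
        self.points = []      # waypoints held directly by this leaf
        self.children = None  # four sub-cells once this cell has split

    def insert(self, lon, lat):
        if self.children is not None:
            self._child_for(lon, lat).insert(lon, lat)
            return
        self.points.append((lon, lat))
        if len(self.points) > THRESHOLD:
            self._split()

    def _split(self):
        # Divide this rectangle into 4 equal sub-rectangles and push the
        # points down into whichever child they fall in.
        mid_lon = (self.min_lon + self.max_lon) / 2.0
        mid_lat = (self.min_lat + self.max_lat) / 2.0
        self.children = [
            QuadCell(self.min_lon, self.min_lat, mid_lon, mid_lat),  # SW
            QuadCell(mid_lon, self.min_lat, self.max_lon, mid_lat),  # SE
            QuadCell(self.min_lon, mid_lat, mid_lon, self.max_lat),  # NW
            QuadCell(mid_lon, mid_lat, self.max_lon, self.max_lat),  # NE
        ]
        for lon, lat in self.points:
            self._child_for(lon, lat).insert(lon, lat)
        self.points = []

    def _child_for(self, lon, lat):
        east = lon >= (self.min_lon + self.max_lon) / 2.0
        north = lat >= (self.min_lat + self.max_lat) / 2.0
        return self.children[2 * north + east]

    def leaves(self):
        # The leaf cells are the "areas" a device would ask for.
        if self.children is None:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

# Start with one rectangle that incorporates the whole world, then add points.
world = QuadCell(-180.0, -90.0, 180.0, 90.0)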

I’ve been experimenting today with PostGIS, an extension to PostgreSQL that adds some very efficient geographic query tools. The program I wrote to take the data from my old MySQL database and put it into the PostGIS database while building these quad cells runs pretty fast. Surprisingly fast, even. PostGIS is pretty capable. Too bad the manual for it sucks rocks.
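
For flavour, this is roughly the shape of what the loader ends up doing, sketched in Python with psycopg2 rather than showing my actual program. The database, table, and column names, the 4326 SRID, and the sample waypoint are all placeholders:

import psycopg2

conn = psycopg2.connect("dbname=navaid")  # placeholder database name
cur = conn.cursor()

# Make a table, then let PostGIS register the geometry column itself.
cur.execute("CREATE TABLE waypoints (id text PRIMARY KEY, name text)")
cur.execute("SELECT AddGeometryColumn('waypoints', 'geom', 4326, 'POINT', 2)")

# A GiST index is what makes the bounding-box searches fast.
cur.execute("CREATE INDEX waypoints_gix ON waypoints USING GIST (geom)")

# Insert one (made-up) waypoint; ST_GeomFromText parses the WKT "lon lat" point.
cur.execute(
    "INSERT INTO waypoints (id, name, geom) "
    "VALUES (%s, %s, ST_GeomFromText(%s, 4326))",
    ("AWK", "Wake Island Airfield", "POINT(166.636 19.282)"),
)

# Count the waypoints that fall inside one quad cell's rectangle;
# && is PostGIS's fast bounding-box overlap test.
cur.execute(
    "SELECT count(*) FROM waypoints WHERE geom && ST_GeomFromText(%s, 4326)",
    ("POLYGON((160 15, 170 15, 170 25, 160 25, 160 15))",),
)
print(cur.fetchone()[0])

conn.commit()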

One thing that I keep forgetting is how much faster computers are now than when I was doing GIS for a living. I keep expecting things to take hours when they end up taking minutes, because the last time I did this sort of thing I was using a 40MHz SPARC and now I’m using a dual core 1.86GHz Intel Core2 Duo, and I’ve got more RAM at my disposal now than I had hard drive space back then.

Anyway, mostly I’m writing this because I’m really enjoying working with GIS-type stuff again. I wish I could do it full time again.

I have seen the future, and it sucks

Today the developers were invited to see what our new usability expert has come up with. Evidently he hired some local company to do the graphics, and somebody else to whip it up into a fancy all-singing, all-dancing Flash demo. It’s all eye candy and very little substance, and it looks childish to me. But evidently all the suits and managers love it, so it’s going to go ahead. I can’t tell you what it looks like, except the backdrop looks like it was copied from the default background/splash screen/packaging of a certain fruit-based cat-themed operating system that was recently released.

The fact that the interface looks like it was designed more to impress suits than to help the people who are going to use it day to day isn’t the part that sucks. The fact that it’s all going to be written in Flash semi-sucks. The fact that it’s apparently going to be designed without talking to the people who’ve been working on the program for 6 years semi-sucks. What really sucks is that the project leader is talking about either outsourcing the entire Flash part of the user interface, or hiring their Flash programmer away from them. It was left to my cow orker Rohan to speak up and say “the reason you hired good people in the first place is that with a little training we can do anything, including Flash”.