First paddle of the season

Well, I got out earlier than last season, but not as early as the previous year. The sun was shining, the air was warm (just a little over 60, I think), and the water was freezing cold. All in all, a great day to be out. And obviously I wasn’t the only one, because the creek was crowded with boats, some whose paddlers looked like they knew what they were doing, some who obviously didn’t: three teenagers in a canoe lurching from bank to bank with no clue what they were doing (sort of a “sub-prime” canoe), a large gaggle of kayaks coming downstream together, a guy with his feet up on top of his kayak deck and a fishing rod between his feet, people in spiffy paddling jackets and wet suits, and people in t-shirts and shorts.

I wore my wet suit because I knew the water would be cold and I didn’t want to get cold legs on the bottom of the boat, nor did I want to get hypothermia if I tipped. I had planned to only go as far as the weir so I wouldn’t overdo it. But in hindsight I probably should have turned back sooner – I was tired and my elbows were sore by the time I got there. And when I turned back, there was a strong wind in my face countering any assist I was getting from the current.

The weir was impassable – the smaller gaps were jammed with debris, so all the water was flowing through the middle channel, and there was about a foot-and-a-half or two-foot drop there. I bet it would have been fun to paddle down, but as tired as I was, I wasn’t going to try paddling up it. I wasn’t even going to try portaging around it so I could shoot it. I just looked at it and said “no f-ing way”. There were a couple of people fishing the eddy below it. So avoiding their lines, I did an eddy turn and headed downstream. I was glad to see that the big mud flat that had sprung up last year just downstream of the weir had submerged again. Hopefully the spring run-off will scour the stream bed a bit deeper this year so it won’t re-emerge in the low-water season.

Not much wildlife in the marsh yet, except some sparrows and lots and lots of Canada geese. Most of the geese looked like they were getting ready to nest, but there was one on a dead tree that lies on its side in the middle of the creek who was playing dead as I splashed by. I wonder if she had eggs? Last year I noticed that a goose had tried to lay eggs on a semi-flat spot on that tree, but most of them had rolled down into a crack, and I guess she’d abandoned the nest. I hope she has better luck this year.

Oh, that’s not good

I got an email from one of the sysadmins at NCF saying that the news directory has run out of space. After poking around a bit, I’ve discovered that:

  • cron jobs, including the nightly expire job, haven’t run since March 18th
  • I haven’t been receiving emails sent to the NCF news account, possibly for even longer than that, which is why I didn’t notice when the system throttled 3 days ago. Normally newswatcher sends these emails, which I have forwarded to SMS so I don’t miss them.

The sysadmin wonders whether the cron jobs not running has anything to do with the DST change. The machine is ancient, and running an ancient version of Solaris.

Of course, the fact that I didn’t notice the lack of the daily news admin email in my morning scan-and-delete folder isn’t good, either.

This could work

I probably shouldn’t give too many details, but I’ve been in talks with a certain freeware developer over developing a flight planning application for a web connected hand-held device. (Anybody who knows anything about me can probably guess the developer and the device.)

My part would be a server app that would respond to requests for data from the device and send new data or updates. Nothing too different from what I’ve been doing before, but one of the things we’ve been talking about is managing “areas”. His concept was that if the user entered an id that wasn’t on the device already, my server would send the device a whole “area”, and the device would keep track of what areas it had in memory already and when they were last updated, and would occasionally request updates of the areas it knew. He thought that each area could be a whole country. The first thing that struck me about that is that if the point you asked for was in the US, you could be asking for thousands of waypoints (70,584 in the current database). That could take a long, long time on an EDGE network. Then we discussed maybe breaking it down by state or province in the US and Canada.

But the thing is, I used to be a GIS (Geographic Information Systems) programmer. I know there are better ways. At first I started looking around for the HHCode algorithm since I worked with Herman Varma and the Oracle guys implementing the original Oracle “Spatial Data Option”, until that scumbag Jim Rawlings screwed me out of three months pay. But I can’t find the source code anywhere.

So my next idea was a modified quad tree. Basically, when populating the database, I make a “rectangle” that incorporates the whole world and start adding points. When I hit a threshold, I subdivide that “rectangle” into 4 equal sub-rectangles, and move the points into whichever sub-rectangle they belong to. This means that where points are sparse, the rectangles are large, and where they are dense, the rectangles are small. That way I’ve got some consistency in the size of the file to be sent to the device, and I’m not wasting people’s time sending the 19 waypoints in Wake Island, say, as an individual file.
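For the curious, the subdivide-on-threshold idea above can be sketched in a few lines of Python. This is just an illustration of the technique, not the actual server code; the class and method names (and the tiny threshold) are made up for the example.

```python
# A minimal sketch of the modified quad tree described above. Each cell
# covers a lat/lon rectangle; when it collects more than MAX_POINTS
# points it splits into four equal quadrants and pushes its points down.

MAX_POINTS = 4  # unrealistically small, just to show the splitting


class QuadCell:
    def __init__(self, west, south, east, north):
        self.west, self.south, self.east, self.north = west, south, east, north
        self.points = []      # (lon, lat) pairs held by a leaf cell
        self.children = None  # four sub-cells once the cell splits

    def insert(self, lon, lat):
        if self.children is not None:
            self._child_for(lon, lat).insert(lon, lat)
            return
        self.points.append((lon, lat))
        if len(self.points) > MAX_POINTS:
            self._split()

    def _split(self):
        mid_lon = (self.west + self.east) / 2
        mid_lat = (self.south + self.north) / 2
        self.children = [
            QuadCell(self.west, self.south, mid_lon, mid_lat),   # SW
            QuadCell(mid_lon, self.south, self.east, mid_lat),   # SE
            QuadCell(self.west, mid_lat, mid_lon, self.north),   # NW
            QuadCell(mid_lon, mid_lat, self.east, self.north),   # NE
        ]
        points, self.points = self.points, []
        for lon, lat in points:
            self._child_for(lon, lat).insert(lon, lat)

    def _child_for(self, lon, lat):
        # Index into children: bit 0 = east half, bit 1 = north half.
        i = (1 if lon >= (self.west + self.east) / 2 else 0) \
          + (2 if lat >= (self.south + self.north) / 2 else 0)
        return self.children[i]

    def leaves(self):
        """All leaf cells; each would become one 'area' file for the device."""
        if self.children is None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Feeding a cluster of points into a world-sized root cell produces small leaf cells around the cluster and leaves the empty quadrants as single huge cells, which is exactly the property that keeps the per-area file sizes roughly consistent.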

I’ve been experimenting today with PostGIS, which is an extension to PostgreSQL that adds some very efficient geographic query tools. The program I wrote to take the data from my old MySQL database and put it into the PostGIS database while building these quad cells runs pretty fast. Surprisingly fast, even. PostGIS is pretty capable. Too bad the manual for it sucks rocks.

One thing that I keep forgetting is how much faster computers are now than when I was doing GIS for a living. I keep expecting things to take hours when they end up taking minutes, because the last time I did this sort of thing I was using a 40MHz SPARC and now I’m using a dual core 1.86GHz Intel Core2 Duo, and I’ve got more RAM at my disposal now than I had hard drive space back then.

Anyway, mostly I’m writing this because I’m really enjoying working with GIS-type stuff again. I wish I could do it full time again.

I have seen the future, and it sucks

Today the developers were invited to see what our new usability expert has come up with. Evidently he hired some local company to do the graphics, and somebody else to whip it up into a fancy all-singing, all-dancing Flash demo. It’s all eye candy and very little substance, and it looks childish to me. But evidently all the suits and managers love it, so it’s going to go ahead. I can’t tell you what it looks like, except the backdrop looks like it was copied from the default background/splash screen/packaging of a certain fruit-based cat-themed operating system that was recently released.

The fact that the interface looks like it was designed more to impress suits than to help the people who are going to use it day to day isn’t the part that sucks. The fact that it’s all going to be written in Flash semi-sucks. The fact that it’s apparently going to be designed without talking to the people who’ve been working on the program for 6 years semi-sucks. What really sucks is that the project leader is talking about either outsourcing the entire Flash part of the user interface, or hiring their Flash programmer away from them. It was left to my cow orker Rohan to speak up and say “the reason you hired good people in the first place is that with a little training we can do anything, including Flash”.

Developer dumbassedness

Every morning, there is an ISO of the new build of our software in the drop box. If any of your code is new in this build, you’re expected to “integration test” your code to make sure it at least doesn’t make anything worse. Most people do it on the Integration Test Plex but some of us have our own mini-plexes cobbled together out of obsolete equipment.

This morning, I installed the ISO on my mini-plex as per usual. Only problem: my entire content directory was missing, and none of the software would run because it had evidently been compiled with Java 1.6 and the mini-plex has Java 1.5 installed on it. It turns out that one of the developers, who doesn’t work with us very closely, put together a new installation procedure that requires a special DVD instead of our normal installation procedure, and it’s supposed to reformat the content directory as XFS and upgrade the system to CentOS 5.1. When I complained that nobody told us that we needed to follow a special upgrade procedure, he said “why didn’t you wait until it passed integration testing?”. Because I was trying to integration test it, dumbass!