First paddle of the season

Well, I got out earlier than last season, but not as early as the previous year. The sun was shining, the air was warm (just a little over 60, I think), and the water was freezing cold. All in all, a great day to be out. And obviously I wasn’t the only one, because the creek was crowded with boats, some paddled by people who looked like they knew what they were doing, some who obviously didn’t: three teenagers in a canoe lurching from bank to bank with no clue what they were doing (sort of a “sub-prime” canoe), a large gaggle of kayaks coming downstream together, a guy with his feet up on top of his kayak deck and a fishing rod between his feet, people in spiffy paddling jackets and wet suits, and people in t-shirts and shorts.

I wore my wet suit because I knew the water would be cold and I didn’t want to get cold legs on the bottom of the boat, nor did I want to get hypothermia if I tipped. I had planned to only go as far as the weir so I wouldn’t overdo it. But in hindsight I probably should have turned back sooner – I was tired and my elbows were sore by the time I got there. And when I turned back, there was a strong wind in my face countering any assist I was getting from the current.

The weir was impassable – the smaller gaps were jammed with debris, so all the water was flowing through the middle channel, and there was a foot-and-a-half or two-foot drop there. I bet it would have been fun to paddle down, but as tired as I was, I wasn’t going to try paddling up it. I wasn’t even going to try portaging around it so I could shoot it. I just looked at it and said “no f-ing way”. There were a couple of people fishing the eddy below it, so, avoiding their lines, I did an eddy turn and headed downstream. I was glad to see that the big mud flat that had sprung up last year just downstream of the weir had submerged again. Hopefully the spring run-off will scour the stream bed a bit deeper this year so it won’t re-emerge in the low-water season.

Not much wildlife in the marsh yet, except some sparrows and lots and lots of Canada geese. Most of the geese looked like they were getting ready to nest, but there was one on a dead tree that lies on its side in the middle of the creek who was playing dead as I splashed by. I wonder if she had eggs? Last year I noticed that a goose had tried to lay eggs on a semi-flat spot on that tree, but most of them had rolled down into a crack, and I guess she’d abandoned the nest. I hope she has better luck this year.

Oh, that’s not good

I got an email from one of the sysadmins at NCF saying that the news directory has run out of space. After poking around a bit, I’ve discovered that:

  • cron jobs, including the nightly expire job, haven’t run since March 18th
  • I haven’t been receiving emails sent to the NCF news account, possibly for even longer than that, which is why I didn’t notice when the system throttled three days ago. Normally newswatcher sends these emails, and I have them forwarded to SMS so I don’t miss them.

The sysadmin wonders if the cron jobs not running has anything to do with the DST change. The machine is ancient, and running an ancient version of Solaris.

Of course, the fact that I didn’t notice the lack of the daily news admin email in my morning scan-and-delete folder isn’t good, either.

Unsolicited testimonial

I’ve ordered a couple of things from Duluth Trading. Mostly they make stuff for people in the building trades, but they make good-looking and durable clothes. That’s not what impresses me the most, though. What impresses me the most is that I chose the cheapest shipping option, and the package arrived two and a half days after I ordered it. Compare that to a site where I once ordered something and chose their expensive express shipping, only to find that while they sent it FedEx Overnight, they didn’t actually give it to FedEx for 4 or 5 days.

This could work

I probably shouldn’t give too many details, but I’ve been in talks with a certain freeware developer about developing a flight planning application for a web-connected hand-held device. (Anybody who knows anything about me can probably guess the developer and the device.)

My part would be a server app that would respond to requests for data from the device and send new data or updates. Nothing too different from what I’ve been doing before, but one of the things we’ve been talking about is managing “areas”. His concept was that if the user entered an id that wasn’t on the device already, my server would send the device a whole “area”, and the device would keep track of what areas it had in memory already and when they were last updated, and would occasionally request updates of the areas it knew about. He thought that each area could be a whole country. The first thing that struck me about that is that if the point you asked for was in the US, you could be asking for tens of thousands of waypoints (70,584 in the current database). That could take a long, long time on an EDGE network. Then we discussed maybe breaking it down by state or province in the US and Canada.
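
To put a rough number on “a long, long time”: even at an optimistic 100 bytes per waypoint, 70,584 waypoints is on the order of 7 MB, which is minutes of transfer at typical EDGE speeds. And to make the “area” idea concrete, the device-side logic we’ve been kicking around looks roughly like this in Python. Everything here is invented for illustration; the two server calls are stand-ins for whatever the real request format ends up being.

    # Rough sketch of the device-side area cache; all the names and the
    # on-the-wire format are invented, and fetch_area_for / fetch_updates are
    # stand-ins for whatever the real server requests end up looking like.
    import time

    AREA_MAX_AGE = 7 * 24 * 3600   # how stale an area can get before we re-ask

    cached_areas = {}   # area_id -> {"fetched": unix_time, "waypoints": {id: data}}

    def lookup(waypoint_id, server):
        """Return a waypoint, pulling down its whole area if we don't have it."""
        for area in cached_areas.values():
            if waypoint_id in area["waypoints"]:
                return area["waypoints"][waypoint_id]
        # Not cached: the server says which area the id falls in and sends the
        # whole thing.
        area_id, waypoints = server.fetch_area_for(waypoint_id)
        cached_areas[area_id] = {"fetched": time.time(), "waypoints": waypoints}
        return waypoints.get(waypoint_id)

    def refresh_stale_areas(server):
        """Every so often, ask only for what changed in areas we already hold."""
        now = time.time()
        for area_id, area in cached_areas.items():
            if now - area["fetched"] > AREA_MAX_AGE:
                changed = server.fetch_updates(area_id, since=area["fetched"])
                area["waypoints"].update(changed)
                area["fetched"] = now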

But the thing is, I used to be a GIS (Geographic Information Systems) programmer. I know there are better ways. At first I started looking around for the HHCode algorithm, since I worked with Herman Varma and the Oracle guys implementing the original Oracle “Spatial Data Option”, until that scumbag Jim Rawlings screwed me out of three months’ pay. But I can’t find the source code anywhere.

So my next idea was a modified quad tree. Basically, when populating the database, I made a “rectangle” that incorporates the whole world and start adding points. When I hit a threshold, I subdivide that “rectangle” into 4 equal sub-rectangles, and move the points into whichever rectangle they belong to. This means that where points are sparse, the rectangles are large, and where they are dense, the rectangles are small. That way I’ve got some consistency in the size of the file to be sent to the device, and I’m not wasting people’s time sending the 19 waypoints in Wake Island, say, as an individual file.
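
In Python, the shape of it is something like this (the threshold of 500 points per cell is a number I pulled out of the air for illustration, not something I’ve tuned):

    # A rough sketch of the subdivision step; the threshold and the point
    # format are invented for illustration.
    THRESHOLD = 500   # max waypoints in a cell before it splits

    class Cell:
        def __init__(self, west, south, east, north):
            self.west, self.south, self.east, self.north = west, south, east, north
            self.points = []       # (lon, lat, waypoint) tuples while this is a leaf
            self.children = None   # four sub-cells once subdivided

        def insert(self, lon, lat, waypoint):
            if self.children is not None:
                self._child_for(lon, lat).insert(lon, lat, waypoint)
                return
            self.points.append((lon, lat, waypoint))
            if len(self.points) > THRESHOLD:
                self._subdivide()

        def _subdivide(self):
            mid_lon = (self.west + self.east) / 2.0
            mid_lat = (self.south + self.north) / 2.0
            self.children = [
                Cell(self.west, self.south, mid_lon, mid_lat),   # SW
                Cell(mid_lon, self.south, self.east, mid_lat),   # SE
                Cell(self.west, mid_lat, mid_lon, self.north),   # NW
                Cell(mid_lon, mid_lat, self.east, self.north),   # NE
            ]
            # Push the existing points down into whichever quarter they fall in.
            for lon, lat, wp in self.points:
                self._child_for(lon, lat).insert(lon, lat, wp)
            self.points = []

        def _child_for(self, lon, lat):
            mid_lon = (self.west + self.east) / 2.0
            mid_lat = (self.south + self.north) / 2.0
            index = (2 if lat >= mid_lat else 0) + (1 if lon >= mid_lon else 0)
            return self.children[index]   # SW=0, SE=1, NW=2, NE=3

    # One "rectangle" covering the whole world; dense regions end up with small
    # leaf cells, sparse regions stay as one big leaf.
    world = Cell(-180.0, -90.0, 180.0, 90.0)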

I’ve been experimenting today with PostGIS, an extension to PostgreSQL that adds some very efficient geographic query tools. The program I wrote to take the data from my old MySQL database and put it into the PostGIS database while building these quad cells runs pretty fast. Surprisingly fast, even. PostGIS is pretty capable. Too bad the manual for it sucks rocks.
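
As an example of the kind of query it makes cheap: grab every waypoint inside one quad cell’s bounding box and let a GiST index on the geometry column do the work. The schema here (a “waypoints” table with an SRID 4326 “geom” column) is made up for illustration, and the ST_-prefixed function names are the ones in the current PostGIS docs, not necessarily exactly what the version I’m running calls them.

    # A sketch of a bounding-box query against PostGIS via psycopg2.  The
    # "waypoints" table, its columns, and the index name are invented.
    import psycopg2

    conn = psycopg2.connect("dbname=navaids")
    cur = conn.cursor()

    # One-time setup would include a spatial index, something like:
    #   CREATE INDEX waypoints_geom_idx ON waypoints USING GIST (geom);

    # && is the PostGIS bounding-box overlap operator; with a GiST index on
    # geom this is an index scan rather than a crawl over the whole table.
    cur.execute(
        """
        SELECT ident, name
          FROM waypoints
         WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)
        """,
        (-80.0, 43.0, -75.0, 46.0),   # west, south, east, north of one quad cell
    )

    for ident, name in cur.fetchall():
        print(ident, name)

    cur.close()
    conn.close()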

One thing that I keep forgetting is how much faster computers are now than when I was doing GIS for a living. I keep expecting things to take hours when they end up taking minutes, because the last time I did this sort of thing I was using a 40MHz SPARC, and now I’m using a dual-core 1.86GHz Intel Core 2 Duo with more RAM at my disposal than I had hard drive space back then.

Anyway, mostly I’m writing this because I’m really enjoying working with GIS-type stuff again. I wish I could do it full time again.