Damn you, Linode

For the second weekend in a row, my Linode node has died. This time the linode.com web site is down as well. From what I can glean from the Linode IRC channel (which isn’t hosted on Linode), about half of their servers are dead to the world.

For me, that means no outgoing email and no mailing lists, and of course my hosted web sites, including navaid.com, are all down. This sucks.

Last week’s outage happened because some clueless tech at ThePlanet, the colo where their servers live, moved some power connections around (after being explicitly told not to touch anything) and overloaded a power supply. That took several hours to resolve.

Update
It’s up again, after only 6 hours. Geez, this sucks.

Man, I’m tired

I spent the afternoon cleaning up the basement. Or rather, I’d planned to clean up the basement, and what I managed was the two computer desks and a tiny bit of the bookshelves. I filled up about 4 garbage bags and a recycling bin. I’ve got two boxes of books and CDs and the Windows box to go to the new house. Oh, and one of the computer desks can go.

I’ve put a monitor back on my Linux server (since it’s not needed on the Windows box right now), and I’m posting this using elinks. Kind of weird.

I can see the progress, but unfortunately I can also feel the pain in my knees and legs, and see all the work left to be done. It’s so discouraging. I feel like I’m going to have to take a week off work just to get the house good enough to list, and then another week to get ready to move.

Six Approaches in Six Months

It’s that time again: time to reset the clock on my instrument currency. Just about every time I go on a flying trip I manage to get some actual IFR en route, usually only for a few minutes here and there, but in the last 6 months I’ve done only one real approach, and of course no holds. To stay instrument current you have to have done a hold and 6 approaches in the last 6 months, plus “intercepting and tracking courses through the use of navigation systems” (which is pretty hard to avoid if you’ve done the other bits). Since I plan to fly up to Ottawa for Canada Day weekend, I need to be current in case I do get some weather.

I wasn’t interested in doing any non-precision approaches. Ottawa and Rochester both have ILSes, and frankly the whole “currency” thing is more an exercise in being legal than in safety. (Before our trip out to Mt. Holyoke this fall I’ll probably practice a few non-precision approaches, because I don’t remember what approaches they have at Barnes Muni.)

So I filed ROC-GEE-ROC, and flew out to the Geneseo VOR and did a hold there. No problem, they assigned a hold on the airway I was already on, so the entry was dead simple. One turn around, and I was ready to come back in. I asked for the ILS 28 approach.

The controller descended me to 2,500 feet and vectored me onto the approach course. My first approach wasn’t bad, but it wasn’t great: I kept within 2-3 dots both horizontally and vertically, and when I “broke out” at decision height I was a bit south of the runway.

The next two approaches went much better. Vertically I kept it within the donut, and horizontally I went out a dot, or a dot and a half at most, although I still ended up a little bit south of the runway each time.

On the next approach, I got a new controller. He vectored me further out, had me descend to 2,100 on the final turn, and then asked me to keep my speed up. I did, and I actually flew a pretty good approach, keeping it in the donut both horizontally and vertically almost the whole way.

I’m not sure what went wrong on the last two approaches. Maybe it was the new controller (who kept giving me the descent at the last turn, but started turning me in nearer and giving me an abrupt turn-on), maybe it was the fact that I adjusted the DG for precession, maybe I was overconfident and trying to fly them fast again, or maybe I was just bored and tired. But on both approaches I was hitting 3 or 4 dots of deflection horizontally, both to the left and the right. And on the final one, I couldn’t get slowed down for the landing. After I touched down, I couldn’t even seem to put much pressure on the brakes, and very nearly decided to go around. I ended up rolling into the overrun area on runway 28, which is 5,500 feet long.

Getting spammed in earnest now

After I moved my blog from Movable Type to WordPress, it seemed that the comment spammers couldn’t find the new blog. For a while there, it even seemed that referrer spam had dropped to nearly nothing. But they’re back, with a vengeance. They’re still trying to spam my old blog, which doesn’t exist anymore, and Maddy’s blog, which doesn’t accept comments any more, but now they’re spamming my blog too. Or attempting to, anyway. SpamKarma is catching them all, but right now that’s 20-30 comment spams a day.

It’s a frustrating waste of my time and resources. I don’t pay for disk space and network bandwidth so that these vandals can use it up.

Obviously there’s something wrong with the way I’m profiling Perl scripts

I’m trying to reduce the memory footprint of my waypoint generation scripts. To see how much memory they use, I’ve been doing a (ulimit -v NNNN; ./CreateCoPilot.pl ….) and adjusting NNNN up and down to bracket where it fails due to lack of memory (a sketch automating that bracketing follows the list below). I’ve had two problems with that:

  • The break point isn’t constant: a number will work one time and give me an out-of-memory error on the next run.
  • The numbers are nothing close to what I’d expect.
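
Here’s roughly how I’d automate that bracketing. This is only a sketch: the starting range is made up, and since the break point wobbles from run to run, a real version should probably try each limit a few times and take the worst case.

#!/usr/bin/perl
# Binary-search the smallest "ulimit -v" value (in KB) under which the
# script still runs to completion. The range below is illustrative.
use strict;
use warnings;

my ($lo, $hi) = (4_000, 128_000);  # assumed bracket, in KB
while ($hi - $lo > 256) {
    my $mid = int(($lo + $hi) / 2);
    # system() with a string runs under /bin/sh, so the ulimit builtin works
    my $ok = system("(ulimit -v $mid; ./CreateCoPilot.pl >/dev/null 2>&1)") == 0;
    if ($ok) { $hi = $mid; }   # finished OK: peak usage is at or below $mid
    else     { $lo = $mid; }   # died: peak usage is above $mid
}
print "peak virtual memory is roughly between $lo and $hi KB\n";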

That second problem is the worst. Near the end of my script, after generating this huge (15,000 record) array of references to hashes, I sort it using the following code:

# sort the waypoint records by their waypoint_id field
my $recordsArrRef = $self->{records};
my @newrecs = sort { $a->{waypoint_id} cmp $b->{waypoint_id} }
@{$recordsArrRef};
$self->{records} = \@newrecs;
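
For what it’s worth, the same sort can be written without naming a second array at all. Whether this actually avoids the copy depends on the perl version (newer perls recognize the assign-back-to-the-same-array idiom), so treat it as a sketch:

# Assign the sorted list back into the same array; newer perls can
# recognize this idiom and sort in place instead of making a full copy.
@{$self->{records}} =
    sort { $a->{waypoint_id} cmp $b->{waypoint_id} } @{$self->{records}};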

It appears that at that point there should be two arrays with 15,000 records in them, and yet when I benchmark the version with this code commented out against the one that has it, the unsorted one only saves 350 bytes. OK, maybe it’s sorting in place and all I’m saving is the actual code, or maybe that isn’t the point of maximum memory usage.

So then I looked in the Palm::PDB code, which is a library from somebody else. At the end, after getting this array of hashes together, he goes through the array and encodes each one into binary, putting that into a different array. AHA, I thought: that means I’ve got the array of hashes and an array of encoded data records in memory at the same time. Maybe what I should do is shrink the array of hashes as we build the array of encoded data. So I changed

# foreach aliases $record to each element in turn, so the records array
# keeps every hash alive for the whole loop (and after it)
foreach $record (@{$self->{records}})
{
...
$data = $self->PackRecord($record);

push @record_data, [ $attributes, $id, $data ];
}

to

# shift removes each element from the array as we go, so perl can free
# each hash once $record is reassigned on the next iteration
while (defined($record = shift(@{$self->{records}})))
{
...
$data = $self->PackRecord($record);

push @record_data, [ $attributes, $id, $data ];
}

and I seem to have saved over 5,000 bytes. Not bad. But there has *got* to be a better memory profiling tool for Perl. Time to hit CPAN, I guess.
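
One CPAN module that looks promising is Devel::Size, which reports how much memory a data structure actually occupies. A minimal sketch, with made-up sample records standing in for the real ones in $self->{records}:

#!/usr/bin/perl
use strict;
use warnings;
use Devel::Size qw(size total_size);  # from CPAN

# Made-up stand-ins for the real waypoint records.
my @records = map { +{ waypoint_id => sprintf("WPT%05d", $_) } } 1 .. 15_000;

# size() counts only the array itself; total_size() follows the
# references and counts the hashes too.
printf "array alone:       %d bytes\n", size(\@records);
printf "array plus hashes: %d bytes\n", total_size(\@records);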