Never trust the label

I’ve just wasted 4+ hours because I trusted the label saying that the CD our build-meister gave me had the latest build on it. I guess I trusted the build-meister too. I should have noticed that many of the RPMs said “3.6-006” instead of the “3.6-007” I was expecting.

Instead, I have to rebuild two systems (a CMS and a CP, as defined in the post the other day) back to Red Hat 7.3 and version 3.3 of our software, configure them, burn a new DVD with CentOS 3.4 and version 3.6-007 of our software, and upgrade the two systems. See you in another 4 hours.

Oh, and did I mention that the air conditioning at work has one of its three chillers off-line, and has for the last three days, and so it’s hot and sweaty here?

What were they smoking?

Sometimes I’m forced to question the sanity of my cow orkers. If you run our setup program and choose the option to set the time and date, you are presented with a string like “062716452005.40”. As near as I can figure, that’s MMDDHHmmYYYY.SS, or translated into English: month, day, hour, minute, year, period, seconds. Besides the utterly moronic order of the elements in the string, the input routine has absolutely no flexibility in what you can enter and no error checking. Get one character wrong or miss a column, and you’re going to get a date and time utterly unlike what you expected, and you won’t find out until you exit the setup program and type “date”.
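
For what it’s worth, even a few lines of validation would catch the worst mistakes. Here’s a rough sketch of the kind of check the input routine could do, assuming the MMDDHHmmYYYY.SS layout above (the function and field names are mine, not anything in our setup program):

sub parse_setup_datetime
{
    my ($str) = @_;

    # Reject anything that isn't exactly twelve digits, a period, and two more digits.
    my ($mon, $day, $hour, $min, $year, $sec) =
        $str =~ /^(\d\d)(\d\d)(\d\d)(\d\d)(\d{4})\.(\d\d)$/
        or return;

    # Range-check each field instead of blindly accepting it.
    return if $mon  < 1 || $mon > 12;
    return if $day  < 1 || $day > 31;
    return if $hour > 23 || $min > 59 || $sec > 59;

    return { month => $mon, day => $day, hour => $hour,
             minute => $min, year => $year, second => $sec };
}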

Damn you, Linode

For the second weekend in a row, my Linode node has died. This time, the linode.com web site is down as well. From what I can glean from the Linode IRC channel (which isn’t hosted on Linode), about half of their servers are dead to the world.

For me, that means no outgoing email and no mailing lists, and of course my hosted web sites, including navaid.com, are all down. This sucks.

Last week’s outage happened because some clueless tech at ThePlanet, the colo where their servers live, moved some power connections around (after being explicitly told not to touch anything) and overloaded a power supply. That took several hours to resolve.

Update
It’s up again, after only 6 hours. Geez, this sucks.

Getting spammed in earnest now

After I moved my blog from Movable Type to WordPress, it seemed that the comment spammers couldn’t find my new blog. For a while there, it even seemed that referrer spam had dropped to nearly nothing. But they’re back, with a vengeance. They’re still trying to spam my old blog, which doesn’t exist anymore, and Maddy’s blog, which doesn’t accept comments anymore, but now they’re spamming my blog. Or attempting to, anyway. SpamKarma is catching them all, but right now that’s 20-30 comment spams a day.

It’s a frustrating waste of my time and resources. I don’t pay for disk space and network bandwidth so that these vandals can use them up.

Obviously there’s something wrong with the way I’m profiling Perl scripts

I’m trying to reduce the memory footprint of my waypoint generation scripts. To see how much memory they use, I’ve been running (ulimit -v NNNN; ./CreateCoPilot.pl ….) and adjusting NNNN up and down to bracket the point where the script fails for lack of memory (something like the bisection sketch after the list below). I’ve had two problems with that:

  • The break point isn’t constant – a value that works on one run will give me an out-of-memory error on the next.
  • The numbers are nothing close to what I’d expect.
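
Here’s roughly what that bracketing looks like if I automate it; the limits are made-up placeholder numbers and the script’s arguments are left off, so treat this as a sketch rather than what I actually ran:

# Bisect the ulimit value (in KB) between a size known to fail
# and a size known to work. Both bounds here are placeholders.
my ($fail, $pass) = (10_000, 200_000);
while ($pass - $fail > 1_000)
{
    my $mid = int(($fail + $pass) / 2);
    my $status = system('bash', '-c',
        "ulimit -v $mid; ./CreateCoPilot.pl >/dev/null 2>&1");
    if ($status == 0) { $pass = $mid } else { $fail = $mid }
}
print "Runs in roughly $pass KB of virtual memory\n";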

That second problem is the worst. Near the end of my script, after generating this huge (15,000 record) array of references to hashes, I sort it using the following code:

my $recordsArrRef = $self->{records};
my @newrecs = sort { $a->{waypoint_id} cmp $b->{waypoint_id} }
@{$recordsArrRef};
$self->{records} = \@newrecs;
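
A side note on the sorting-in-place guess below: the sort never copies the hashes themselves, only the references to them, so the worst this code can duplicate is the 15,000-element array of references. One way to avoid even that named copy is to assign the sorted list straight back into the same array; newer perls can optimize the @x = sort @x form to sort in place, though I’m not sure that still applies through a hash-element deref like this, so this is just a sketch:

# Sort the records array back into itself, leaving no separate @newrecs
# behind; on recent perls this form may even be sorted in place.
@{ $self->{records} } =
    sort { $a->{waypoint_id} cmp $b->{waypoint_id} } @{ $self->{records} };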

It appears that at one point there should be two arrays with 15,000 records in them, and yet when I benchmark a version with this code commented out against one that has it, the unsorted one only saves 350 bytes. Ok, maybe it’s sorting in place, and all I’m saving is the actual code. Or maybe that isn’t the point of maximum memory usage.

So then I looked in the Palm::PDB code, which is a library from somebody else. At the end, after getting this array of hashes together, he goes through the array and encodes each one into binary, putting the results into a different array. AHA, I thought: that means I’ve got the array of hashes and an array of encoded data records in memory at the same time. Maybe what I should do is shrink the array of hashes as we build the array of encoded data. So I changed

foreach $record (@{$self->{records}})
{
    ...
    $data = $self->PackRecord($record);

    push @record_data, [ $attributes, $id, $data ];
}

to

while (defined($record = shift(@{$self->{records}})))
{
    # shift removes each record from the source array as it's packed,
    # so the hash it refers to can be freed instead of being held twice
    ...
    $data = $self->PackRecord($record);

    push @record_data, [ $attributes, $id, $data ];
}

and I seem to have saved over 5,000 bytes. Not bad. But I think there has *got* to be a better memory profiling tool for Perl. Time to hit CPAN, I guess.
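
In the meantime, one quick sanity check would be Devel::Size from CPAN (assuming it builds on this box): its total_size() walks a data structure and reports roughly how many bytes it occupies, which would at least tell me whether the records array is where the memory is actually going. Something like this, dropped in wherever $self->{records} is in scope:

use Devel::Size qw(total_size);

# Measure the Perl data structures hanging off the records array.
# This counts the array, the hashes, and their contents, not the
# interpreter's total footprint, so it won't match ulimit exactly.
printf "records array: %.1f KB\n",
    total_size($self->{records}) / 1024;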