Hmmm. How do I do this?

Ok, picture a network with one “controlling computer”, which I’ll call “the CMS”, and a bunch of satellite computers which I’ll call “the CPs”. These satellite computers live in projection booths in a theatre and have digital projectors hooked up to them, but that’s not important. The problem I’m dealing with is upgrading the machines from version 3.3 of our software to version 3.5. The software upgrade also necessitates an upgrade from RedHat 7.3 to CentOS 3.4.

I’ve got the upgrading of the CMS sorted (I have a non-bootable DVD with an apt repository with CentOS 3.4 and our software, and a kickstart file that does the upgrade without touching the partition with our data on it).
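(The trick in the kickstart file, in case it helps anybody, is just to re-use the existing partitions and only format the system ones. Something like this fragment, where the mount points and partition numbers are made up for illustration, not copied from my real file:

clearpart --none
part /boot --onpart hda1
part / --onpart hda2
# --noformat leaves the data partition's contents alone
part /data --onpart hda5 --noformat

That's the shape of it, anyway.)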

The CPs have hostnames of cp1 to cpN, and IPs of 192.168.30.101 and up. cp0 (192.168.30.100) is reserved.

What I’m working on now is upgrading the CPs. What I’ve been doing is making the CMS a PXE boot server, then wiping the boot partition on each CP one at a time, re-installing it as cp0, and then when it comes back up, ssh-ing in and restoring the backed-up configuration, including the hostname and IP.
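For the curious, the PXE side is just pxelinux with a config file that points the installer at the kickstart. Roughly like this, where the paths and server IP are placeholders rather than my real setup:

# /tftpboot/pxelinux.cfg/default
default upgrade
prompt 0
label upgrade
kernel vmlinuz
append initrd=initrd.img ks=nfs:192.168.30.1:/export/kickstart/cp.ks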

The problem with that is that it takes 20 minutes per CP, and the powers that be are complaining that it takes too long. They’d like something more parallel.

So I’ve been thinking of retrieving the MAC addresses of each CP before I upgrade. Then I do them all in parallel, and use the MAC address afterwards to figure out which one is which. I understand that I can use “arp -a” to retrieve the MAC addresses. I’m wondering if there is something I can do to DHCP to give out the correct 192.168.30.1xx address to the right machine, or whether I should have DHCP hand out addresses in some other range, and then use “arp -a” again to find which machine has which address and fix them one at a time?
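To make that concrete, what I’m picturing is something like this: while the old installs are still up, ping each CP so its MAC lands in the arp cache, then dump the cache (hostnames here are just examples):

for h in cp1 cp2 cp3; do ping -c 1 $h > /dev/null; done
arp -a | grep '192\.168\.30\.1'

Then ISC dhcpd can pin each MAC to its old address with host entries like these (the MACs are made up):

host cp1 {
  hardware ethernet 00:02:B3:AA:BB:01;
  fixed-address 192.168.30.101;
}
host cp2 {
  hardware ethernet 00:02:B3:AA:BB:02;
  fixed-address 192.168.30.102;
}

That way each machine comes back up already holding its old address, and there’s nothing to renumber afterwards. The fallback of handing out a scratch range and matching MACs to addresses with “arp -a” afterwards would work too, but it adds a per-machine fix-up step, which is exactly what I’m trying to get away from.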

Yesterday

Yesterday was another back-breaking and knee-hurting day of getting ready for the move. This time the target was the Video/DVD/CD shelf and the book shelves. I went through the videos and DVDs and put the ones that belonged to me that I wanted to keep in one place, the ones that belonged to me that I didn’t want to keep in the garbage, and the rest in semi-categorized piles for Vicki to sort out. The biggest problem was the stuff oriented to little kids. The real crap like “Babysitters Club” and the like was tossed, and the quality and semi-quality stuff like Disney movies was kept. The second biggest problem was the pile of about 15 unlabelled tapes. Most of them have been kicking around unlabelled since I moved in here 9 years ago, and Vicki has been saying for the past 9 years “I’m going to watch them and label them”, but of course nobody ever gets around to it. And since our VCR is currently doing this weird “flash of death” thing, we can’t watch them now. So they’re stored away where they’ll go another 9 years without anybody looking at them. Oh well, such is life.

After all the fun of the video collection, I moved on to the rest of the book shelves. It’s amazing what crap gets tucked into our bookshelves and forgotten. I found a girl’s swimsuit in a plastic bag with the original price tags on it. I found plates and knives and forks. I found old board games that nobody has played in 10 years. Lest anybody think I’m picking on everybody else, I also found about a dozen printouts of manuals and installation instructions for computer programs that I’d obviously meant to get back to later and never did, all tucked in random parts of the bookshelf. I found school binders full of blank paper. I kept a stack about two feet high of various types of blank paper, but threw out a shocking amount. I found one of our missing copies of “Pronounced Cathouse.org” (sorry, I don’t have the actual title in front of me – I think it was a take-off of a Lynyrd Skynyrd album cover), which is a priceless treasure.

All in all, I think I filled up about 6 or 7 garbage bags. I also threw out a gigantic Sun workstation monitor that I borrowed from work (if anybody asks for it, I’ll claim I dropped it when moving it, which I came close to doing at least twice), a Mac LC-III, two old SCSI drives, and two bird gyms. And we moved a van load of boxes and two computers to the new place.

Vicki, besides doing the incredibly angst-filled job of sorting videos, also worked hard up in Stevie’s room. I believe that the last time Stevie’s room was cleaned up, it was done by a chap named “Hercules”. I wouldn’t have taken on that job for any price.

Man, I’m tired

I spent the afternoon cleaning up the basement. Or rather, I’d planned to clean up the basement, and what I managed was the two computer desks and a tiny bit of the bookshelves. I filled up about 4 garbage bags and a recycling bin. I’ve got two boxes of books and CDs and the Windows box to go to the new house. Oh, and one of the computer desks can go.

I’ve put a monitor back on my Linux server (since it’s not needed on the Windows box right now), and I’m posting this using elinks. Kind of weird.

I can see the progress, but unfortunately I can also feel the pain in my knees and legs, and see all the work left to be done. It’s so discouraging. I feel like I’m going to have to take a week off work just to get the house good enough to list, and then another week to get ready to move.

Six Approaches in Six Months

It’s that time again, time to reset the clock on my instrument currency. Just about every time I go on a flying trip, I manage to get some actual IFR en-route, usually only for a few minutes here and there, but in the last 6 months I’ve only done one real approach, and of course no holds. In order to stay instrument current you have to have done a hold and 6 approaches in the last 6 months, plus “intercepting and tracking courses through the use of navigation systems” (which is pretty hard to avoid if you’ve done the other bits), and since I plan to fly up to Ottawa for Canada Day weekend, I need to be current in case I do get some weather.

I wasn’t interested in doing any non-precision approaches. Ottawa and Rochester both have ILSes and frankly the whole “currency” thing is more of an exercise in being legal than in safety. (Before our trip out to Mt. Holyoke this fall I’ll probably practice a few non-precision approaches because I don’t remember what approaches they have at Barnes Muni.)

So I filed ROC-GEE-ROC, and flew out to the Geneseo VOR and did a hold there. No problem, they assigned a hold on the airway I was already on, so the entry was dead simple. One turn around, and I was ready to come back in. I asked for the ILS 28 approach.

The controller descended me to 2,500 feet and vectored me for the approach course. My first approach wasn’t bad, but wasn’t great. Both horizontally and vertically I stayed within 2-3 dots. And when I “broke out” at decision height I was a bit south of the runway.

The next two approaches went much better. Vertically I kept within the donut, and horizontally I went out a dot, or a dot and a half at most, although I still ended up a little bit south of the runway each time.

On the next approach, I got a new controller. He vectored me further out, had me descend to 2,100 on the final turn, and then asked me to keep my speed up. I did, and I actually flew a pretty good approach, keeping it in the donut both horizontally and vertically almost the whole way.

I’m not sure what went wrong on the last two approaches. Maybe it was the new controller (who kept giving me the descent at the last turn, but started turning me in nearer and giving me an abrupt turn-on), maybe it was the fact that I adjusted the DG for precession, maybe I was overconfident and trying to fly them fast again, or maybe I was just bored and tired. But on both approaches I was hitting 3 or 4 dots of deflection horizontally, both to the left and the right. And on the final one, I couldn’t get it slowed down for the landing. After I touched down, I couldn’t even seem to put much pressure on the brakes, and very nearly decided to go around. I ended up rolling into the overrun area on runway 28, which is 5500 feet long.

Bear with me here…

While most of my blog entries are an example to the world of how to write in a way that appeals to everybody, this one is going to be mostly a reminder to myself.

I’m having problems with my waypoint generator on the Linode, mostly because with only 96MB of real memory, each individual generator task quickly becomes too big, tasks start swapping, and everything gets horribly I/O bound.

At first it seemed that things were dying right at the very end, and so I leapt to the conclusion that it must be the sort phase, where it takes all the records that it’s retrieved from the database and stuck into an array of references to hashes, and sorts the array by ID. I solicited some opinions on that, and got some good ideas on how to sort by ID in the database while still allowing the priority of datasources that I use now. The most interesting suggestion was:

select ...
from waypoints w1
where ...
and field(w1.datasource, 'FAA', 'DAFIF', 'Thompson')
    = (select min(field(w2.datasource, 'FAA', 'DAFIF', 'Thompson'))
       from waypoints w2
       where w1.id = w2.id)
order by w1.id

But before I had a chance to implement it, I did some testing on my own machine using “ulimit -v” to simulate the reduced memory size. I ran an example query that produces a result file with 71197 records in it, homing in on the minimum memory limit that would allow it to finish without getting an “Out of memory” error. Then I cut out the sort stage and did it again. And what I found surprised me. Cutting out the sort stage only saved me 375 kilobytes (ulimit -v works in 1 KB units), reducing the limit from 107625 to 107250. And it only took the time from 1:46 to 1:35, a scant 11 seconds or about 10%.
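(The test harness, for my own future reference, was just a subshell with the limit applied before running the generator; the script name and output path here are placeholders:

( ulimit -v 107250; time ./waypoint_generator.pl > /tmp/test.pdb )

Binary-search the number until it stops dying with “Out of memory”.)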

Looks like I’m going to have to find another way to reduce the memory footprint. And I keep coming back to this idea I had where I do the sorted query and write each record out to a temporary file as I retrieve it, storing only the id, PDB “unique id”, record number and the offset from the beginning of the temporary file. Then when that’s done, I go back and write the PDB file header and the PDB file index (which consists of the offset from the beginning of the file, the attributes, the category and the unique id), and then append the contents of the temporary file. That way I can avoid having the entire contents of the database in memory.
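So I don’t forget it, here’s a rough Perl sketch of that two-pass idea. prepare_sorted_query(), pack_record() and write_pdb_header() are hypothetical stand-ins for code I haven’t written yet, not real functions:

use strict;
use warnings;

my $sth = prepare_sorted_query();   # hypothetical: executed DBI statement handle

open(my $tmp, '+>', "/tmp/wpt.$$") or die "temp file: $!";
binmode($tmp);

# Pass 1: stream each record into the temp file, keeping only the
# unique id and offset in memory.
my @index;
my $offset = 0;
while (my $row = $sth->fetchrow_hashref) {
    my $packed = pack_record($row);      # hypothetical record packer
    push @index, [ $row->{unique_id}, $offset ];
    print $tmp $packed;
    $offset += length($packed);
}

# Pass 2: write the 78-byte PDB header, then one 8-byte index entry
# per record (4-byte offset, attribute/category byte, 3-byte unique
# id), then append the temp file's contents.
open(my $out, '>', 'waypoint.pdb') or die "output: $!";
binmode($out);
write_pdb_header($out, scalar(@index));  # hypothetical header writer
my $data_start = 78 + 8 * @index + 2;    # header + index + 2 pad bytes
for my $entry (@index) {
    my ($uid, $off) = @$entry;
    print $out pack('N C', $data_start + $off, 0)
             . substr(pack('N', $uid), 1);   # low 3 bytes of the uid
}
print $out "\0\0";                       # the 2 pad bytes
seek($tmp, 0, 0);
my $buf;
print $out $buf while read($tmp, $buf, 65536);

The only per-record state held in memory is the unique id and the offset, instead of a whole hash of fields.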

Side note about the PDB “unique id”: Each record in a PDB file has a 3 byte “unique id”. Normally when you’re creating a PDB file, you leave that as zero and the PDA itself fills it in when it loads the file. But when Laurie Davis created the CoPilot application, he used the unique id as the key to reference the waypoint records from the flight plans. So if I did leave them as zero and let the PDA fill them in, every time you reloaded your waypoint file your flight plans would get scrambled. So I maintain a table with a unique mapping between waypoint ids and “unique ids”. That way, even if you got, say, “KROC” from the FAA data this time and from the DAFIF data next time, your flight plans including KROC would still work, because both KROC ids would get the same “unique id”. That also means every time I load new data into the database, I have to find any ids that don’t currently have a “unique id” and generate new ones. Occasionally I should purge no-longer-used ids and re-use their unique ids, because 3 bytes doesn’t give you a lot to play with.
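Finding the ids that need new “unique ids” is just an outer join against the mapping table. The table and column names here are invented; the real schema is spelled differently:

select distinct w.id
from waypoints w
left join unique_id_map m on m.waypoint_id = w.id
where m.unique_id is null;

Then a little loop hands the next unused 3-byte value to each id that comes back.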