DVD arrived, still not satisfied

After the company ordering system said that it would be several days before I got my DVD burner, my boss sent his secretary out across the street to Staples, which had one on sale for $50. I got it and installed it, but unfortunately I can't burn to it. I get the following error:
Blocks total: 2297888 Blocks current: 2297888 Blocks remaining: 948384
Starting to write CD/DVD at speed 4 in write mode for single session.
Last chance to quit, starting real write in 0 seconds. Operation starts.
Waiting for reader process to fill input buffer ... input buffer ready.
trackno=0
BURN-Free is OFF.
Performing OPC...
Sending CUE sheet...
Starting new track at sector: 1349670
Track 01: 0 of 2635 MB written.
/usr/bin/dvdrecord: Input/output error. write_g1: scsi sendcmd: no error
CDB: 2A 00 00 14 98 26 00 00 1F 00
status: 0x2 (CHECK CONDITION)
Sense Bytes: 70 00 05 00 00 00 00 12 00 00 00 00 21 02 00 00
Sense Key: 0x5 Illegal Request, Segment 0
Sense Code: 0x21 Qual 0x02 (logical block address out of range) [No matching qualifier] Fru 0x0
Sense flags: Blk 0 (not valid)
cmd finished after 0.003s timeout 200s

I think I’m going to have to shut down again and check that I’ve got the devices jumpered correctly.

That burning sensation

At work I'm putting together a way to upgrade our systems in the field from RedHat 7.3 and version 3.3 of our software to CentOS 3.4 and version 3.4 of our software, while preserving as much of the current content and state as possible.

I'm experimenting with custom install DVDs with my own kickstart file. It's been a process of trial and error, mostly error, so I'm burning about 3 or 4 DVDs a day. Each time I have to go bother the guy with the DVD burner to make sure he's not burning anything, then copy my ISO file over to his machine, eject his blank CD (he always keeps one in the drive in case he has to burn something when he's not here), burn the DVD, and run over to put the blank CD back.

I put in a request for a DVD burner. They cost about $50 these days, or $60 if you need the dual layer, which I don’t. My boss approved it, and I suggested that I go out after work and buy one at Circuit City, but for reasons I don’t quite understand, we have to go through “IT Purchasing”, aka “Mordoc, Preventer of Information Technology Upgrades”. Mordoc is charging our department $129 for this burner, and I’m told I should have it in two to three weeks, by which time I should be finished this project and not need to burn DVDs any more.

If I had any need for an IDE DVD burner at home, I’d buy one, install it at work, and when I’m done with this project take it home.

I'm trying to count up how many different ways doing it this way wastes the company's money, and I keep running out of fingers.

Reason 147 why MySQL is not my favourite RDBMS

There are three components to my waypoint generator:

  • A set of scripts to load or reload some of the data when an update comes in.
  • A set of scripts that actually generates the databases.
  • A web interface.

All three components are written in Perl, and all access the same database. As mentioned previously, I’m using MySQL because PostgreSQL was too slow on the limited resources I have on my Linode.
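
For context, the plumbing is dead simple: each component opens its own DBI handle against that one MySQL database, roughly like this (a minimal sketch; the database name and credentials are placeholders, not the real ones):

use strict;
use DBI;

# Placeholder database name and credentials for illustration (not the real ones).
my ($db, $user, $password) = ('waypoints', 'waypoint_user', 'secret');

# The loaders, the generator scripts and the web interface all connect
# to the same local MySQL database through DBD::mysql.
my $dbh = DBI->connect("DBI:mysql:database=$db;host=localhost",
                       $user, $password,
                       { RaiseError => 1, AutoCommit => 1 })
    or die "Can't connect: $DBI::errstr";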

Last night, I ran one of the load scripts, and while it was running I tried to access the web interface. The web interface startup accesses and updates a couple of "session information" tables, which the load scripts have no reason to access. So somebody tell me why the web interface startup timed out with the error:

[Wed Jun 08 22:08:37 2005] [error] [client 66.67.112.52] FastCGI: server "/config_backup/navaid.com/htdocs/CoPilot/index.fpl" stderr: DBD::mysql::st execute failed: Lock wait timeout exceeded; try restarting transaction at /config_backup/navaid.com/perl/WaypointDB.pm line 312.

Line 312 in WaypointDB.pm deletes from the sess_main table. And like I said, nobody else should be updating it. So why the hell should the fact that a load script is running cause a lock wait timeout on that table?
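
For the record, the statement at that line is nothing more exotic than a session cleanup, roughly the sketch below (paraphrased from memory; the column name, session id and the second handle are stand-ins, not the actual code). The SHOW FULL PROCESSLIST dump afterwards is just one way to see which of the load script's statements is actually holding the lock:

# Roughly what WaypointDB.pm line 312 does: clear out this session's row.
# $dbh is a handle like the one in the connect sketch above; the column
# name and session id here are stand-ins.
my $sess_id = 'abc123';    # placeholder
$dbh->do("DELETE FROM sess_main WHERE sess_id = ?", undef, $sess_id);

# While that delete sits in its lock wait, a second connection ($dbh2,
# opened the same way) can dump the server's process list to show which
# statement is holding things up.
my $procs = $dbh2->selectall_arrayref("SHOW FULL PROCESSLIST");
for my $p (@$procs) {
    my ($id, $command, $info) = @{$p}[0, 4, 7];    # Id, Command, Info columns
    print join("  ", $id, $command, defined $info ? $info : ''), "\n";
}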

Ok, I’m an idiot and Linode is back on the table

Remember that test I ran yesterday that showed Linode was even slower in MySQL than it was in PostgreSQL? Well, it turns out that I'd left the ";host=mysqldb.gradwell.net" in the connect string, so instead of hitting my local MySQL database, I was actually going across the Atlantic Ocean to hit a database at Gradwell. D'OH!
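
The whole fix was one host parameter in the DSN (the database name and credentials below are placeholders; the host is the part that mattered):

use DBI;

# Placeholder credentials for illustration.
my ($user, $password) = ('waypoint_user', 'secret');

# Broken: every query was crossing the Atlantic to Gradwell's server.
# my $dsn = "DBI:mysql:database=waypoints;host=mysqldb.gradwell.net";

# Fixed: no host (or host=localhost) means the local MySQL on this Linode.
my $dsn = "DBI:mysql:database=waypoints";

my $dbh = DBI->connect($dsn, $user, $password, { RaiseError => 1 })
    or die "Can't connect: $DBI::errstr";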

I switched to using the local database, and the time came down to a slightly more acceptable 7+ minutes, but I was still I/O rate limited much of the time. Then I switched to using another guy’s database on his Linode (much better provisioned than mine) and the time went down to about 3+ minutes, and I never hit my I/O limit even once. (Which makes me think that multiple generators running at the same time won’t slow to a crawl.)

Linode probably a total washout

I'm starting to think that I won't be able to host my application on Linode at all. Here are the results of my latest testing (times are minutes:seconds):

Database      Home    Gradwell    Linode
PostgreSQL    7:46       -         21:01
MySQL         0:32      1:01       42:40

The abysmal performance on the last run, MySQL on Linode, appears to be because I've hit some sort of I/O limiting that Linode imposes when a host does too much disk I/O (i.e. swaps heavily).

I'm going to try the tests again on Linode, but with the database hosted somewhere else, either at home or on my Gradwell server. Even if that works, I'm not sure what that will mean for my options.