Mailing lists moved.

I’m home helping Vicki recover from her surgery. It appears to have gone well, and she doesn’t need much help from me – I expected her to be lying in bed all weekend weakly quavering out “bring me a cup of water, please”, but instead she’s sitting in her usual chair tapping away on her iBook.

While I’m home, though, I took the opportunity to move all my Mailman mailing lists from my home server to my linode. It was incredibly simple, if a bit time-consuming. For each list, I copied over the /var/lib/mailman/lists/listname directory and the /var/lib/mailman/archives/private/listname.mbox/listname.mbox file. Then I fixed the permissions with
chown -R list.list lists/listname archives/private/listname.mbox
and fixed the internal pointers and stuff using
withlist -l -r fix_url listname
Then I trimmed down the archives (I don’t have enough room on the linode for the whole thing) using
mbox-purge --before 2004-01-01 archives/private/listname.mbox/listname.mbox
and rebuilt the archives using
su list -c "/var/lib/mailman/bin/arch listname /var/lib/mailman/archives/private/listname.mbox/listname.mbox"
and regenerated the aliases with genaliases. Then I went back to my home machine and removed the mailing lists there and put all the mailing list addresses in /etc/postfix/relocated.
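
Pulled together, the per-list steps look roughly like this. This is a hypothetical wrapper, not a script I actually used – the Mailman prefix and the dry-run default are assumptions; set DRY_RUN= (empty) to really execute it:

```shell
#!/bin/sh
# Hypothetical wrapper for the per-list migration steps above.
# The paths and the dry-run default are assumptions; with DRY_RUN=echo
# (the default) it only prints what it would do.
MM=/var/lib/mailman
DRY_RUN="${DRY_RUN:-echo}"

migrate_list() {
    name="$1"
    # Fix ownership on the copied-over list config and archive mbox
    $DRY_RUN chown -R list.list "$MM/lists/$name" "$MM/archives/private/$name.mbox"
    # Fix the list's internal URLs for the new host
    $DRY_RUN "$MM/bin/withlist" -l -r fix_url "$name"
    # Trim the archive, then rebuild it as the 'list' user
    $DRY_RUN mbox-purge --before 2004-01-01 "$MM/archives/private/$name.mbox/$name.mbox"
    $DRY_RUN su list -c "$MM/bin/arch $name $MM/archives/private/$name.mbox/$name.mbox"
}

migrate_list "${1:-mylist}"
# ...then run $MM/bin/genaliases once, after all the lists are done.
```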

Easy as pie. And now I don’t have to worry that the mailing lists will be down while we’re moving.

Finally levelling off!

Back in December, the systems administrators at The National Capital Freenet gave me a new disk for the news spool. Since then, the free disk space has been hovering around 75%, which is kind of ridiculous. So around the middle of “Week 16” in the following graphs, I extended the retention of a bunch of newsgroup hierarchies (mostly the big8, the local ones, and the Canadian regional hierarchies) by a couple of days. After waiting a week to see where it stabilized, I found it made almost no difference – it was now hovering around 70-72% free. Big fucking deal.

So in Week 18 I made a more drastic cut, adding 10-15 days to the retention time of all those groups. It has been scary for the last couple of weeks, watching that graph on a steady downward trend and holding my breath hoping it would level out before it hit bottom.

Well, here it is Week 20, and it looks from the graphs like I’ve levelled out at around 50% free. That’s much better. That gives me room to deal with floods, but keeps most groups as long as is practical. Personally, I’ve never seen the point of keeping any groups except *.answers for more than 30 days – you know that if you chime in on a discussion that ended 30 days ago, you’re going to be pissing off the majority of people who read it and moved on. And every point that’s made in a discussion that’s been going on for more than 30 days will have been repeated dozens of times in the last 30 days.
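
For the record, the kind of change involved is just a few lines in INN’s expire.ctl. This fragment is a hypothetical sketch, not the actual NCF config – the fields are pattern:modflag:keep:default:purge, with times in days:

```
## Hypothetical expire.ctl fragment -- not the real NCF configuration.
## Fields: pattern:modflag:keep:default:purge (times in days).
/remember/:11
## Catch-all: most groups expire after about 10 days
*:A:1:10:12
## The hierarchies whose retention I bumped up
comp.*,sci.*,rec.*,soc.*,talk.*,news.*,misc.*,humanities.*:A:1:25:30
can.*,ott.*,ncf.*:A:1:25:30
## *.answers is the one hierarchy worth keeping much longer
*.answers:A:1:90:90
```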

Anyway, I find these graphs fascinating. On the Monthly graph you can see this lovely sawtooth as the spool fills up during the day, then quickly goes the other way during the expire run. You can see the weird little mini-spikes where I made my adjustments and ran the news.daily expire run a couple of times in one day to make sure I hadn’t messed anything up, and then you can see the much smaller spikes while the groups whose retention time was increased stopped expiring anything and the groups that I didn’t mess with continued to expire things.

Free Disk-Space for ‘/usr/lib/news/spool/articles’ on theodyn

I like these graphs, and I wouldn’t mind having them for my home system and my linode, but on the other hand what I’ve seen of MRTG looks way too complicated for something as simple as monitoring disk space.
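
Something much simpler would probably do. A sketch along these lines – the mount point and where you log it are whatever suits – could run from cron and feed a plotting tool later:

```shell
#!/bin/sh
# Minimal disk-space logger sketch -- an assumed lightweight alternative
# to MRTG, not anything I actually run.  Give it a mount point; it
# prints one timestamped "percent free" sample, suitable for appending
# to a log from cron and graphing later.
FS="${1:-/}"

pct_free() {
    # df -P guarantees one data line per filesystem; field 5 is "Use%"
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print 100 - $5 }'
}

echo "$(date '+%Y-%m-%d %H:%M') $(pct_free "$FS")"
```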

Last night’s discoveries

  1. Back massages are wonderful, both for relieving back muscle strain caused by moving heavy computer equipment around and for giving you time totally disassociated from everything to think.
  2. Editing massive MySQL dump files to turn them into files that Postgres can read, and loading them into Postgres to test, does not make sense on a linode with 64Mb of memory and a shared processor when you have a local machine running Postgres with 1024Mb of memory and two processors.
  3. The perl script that I downloaded from SourceForge to convert MySQL dump files into Postgres dump files SUCKS ROCKS and I’m getting much better results from my own little sed script.
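
To give a flavour of what that sed approach looks like – this is a hypothetical sketch covering only a few common MySQL-isms, not my actual script:

```shell
#!/bin/sh
# Hypothetical sketch of the sort of sed rules meant above -- NOT the
# actual script, and it handles only a handful of common MySQL-isms:
# backquoted identifiers, int(11)/auto_increment types, backslash-escaped
# quotes, LOCK TABLES statements, and the TYPE=MyISAM table suffix.
mysql2pg() {
    sed -e 's/`//g' \
        -e 's/ int(11) auto_increment/ serial/g' \
        -e 's/ int(11)/ integer/g' \
        -e "s/\\\\'/''/g" \
        -e '/^LOCK TABLES/d' \
        -e '/^UNLOCK TABLES/d' \
        -e 's/ TYPE=MyISAM//'
}

# Usage: mysql2pg < mysql.dump > pg.dump
```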

That is all.