This is worrisome.

Update: Somebody on the Nutch mailing list pointed me towards the config option “fetcher.threads.per.host”. Increasing that to 10 dropped the time from 45 minutes to 15 minutes on the first crawl and 2 minutes for a re-crawl. Since I fixed Nutch to properly respect the Last-Modified header and If-Modified-Since, I don’t think I’m going to be blocked from crawling sites with multiple threads. Much less worrisome.
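For reference, this is roughly what those settings look like in nutch-site.xml. The property names are Nutch's; the values are just what worked for my handful of small sites, not general recommendations:

```xml
<!-- nutch-site.xml: the two fetcher knobs that mattered for me -->
<property>
  <name>fetcher.threads.per.host</name>
  <value>10</value>
</property>
<property>
  <name>fetcher.threads.fetch</name>
  <value>10</value>
</property>
```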

Time spent to copy all the files on three small web sites to a directory on my machine using wget: 1 minute 1.114 seconds.

Time spent for Nutch to re-crawl those same web sites: 45 minutes.

It doesn’t seem to matter what I put in the “number of threads” parameter to Nutch, either – it takes 45 minutes if I give it 10 threads or 125 threads.

Even worse for Nutch, out of the box it refetches documents even if they haven’t changed – I had to find and fix a bug to make that part work – but wget does the right thing.

Considering that all I’m doing with the Nutch crawl is going through the returned files one by one, doing some analysis, and putting those results in a Solr index, I wonder if I should toss Nutch entirely and just work up something using wget. All I’m really getting out of Nutch is pre-parsing the HTML to extract some metadata.
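The wget side of that replacement would be something like this sketch (the URL is a placeholder; the analysis and Solr indexing step would then walk the mirrored tree):

```shell
# Sketch: mirror a site, honoring timestamps so unchanged files aren't refetched.
# -m  = mirror (recursion, infinite depth, timestamping)
# -np = don't ascend to the parent directory of the start URL
# -P  = put the mirrored tree under ./crawl
mirror_site() {
  wget -m -np -P ./crawl "$1"
}
# Usage (example.com is a placeholder):
#   mirror_site http://example.com/
```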

Too bad I’ve already spent 3 weeks on this contract going down the Nutch road. At this point, it would be too time consuming to throw away everything I have and start afresh.

Oh yeah, /tmp is *temporary*

I was storing some files that were semi-important to the project I’m working on in /tmp. I knew that there is a process on some Unix computers that cleans out the stuff in /tmp either on boot or on a schedule, but I didn’t know if it did that on my Mac. So while I’d sort of had a flag in the back of my head to move them somewhere less fragile, I never got around to it. Then I got to working on another part of the project for a few days and forgot about them. In the meantime, the files hadn’t been touched, and I’d installed an OS update and rebooted. And now I go back, and they’re gone. “Oh yeah”, I think, “/tmp is temporary”. So then I look to see if Time Machine has a backup, and of course Time Machine excludes /tmp because, oh yeah, /tmp is *temporary*.

I can recreate the files, but it’s a waste of a few hours. This time I’m going to recreate them in ~/data/.

This week’s interesting discoveries about Nutch

I’ve been working a lot with Nutch, the open source web crawler and indexer, and the first thing I found was that it was downloading web pages every day, instead of sending the “If-Modified-Since” header and only downloading ones that changed. Ok, I thought, I’ll fix that – since the information I want isn’t in the “datum.getModificationDate()”, I’ll use “datum.getFetchDate()”.
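My first fix, sketched outside Nutch (this is my own helper, not a Nutch API): take the stored fetch date and turn it into an If-Modified-Since request header value, so unchanged pages come back as a cheap conditional response instead of a full download.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class IfModifiedSince {
    /** Format a stored timestamp (epoch millis) as an If-Modified-Since header value. */
    static String headerValue(long lastFetchMillis) {
        if (lastFetchMillis <= 0) return null; // nothing stored: fetch unconditionally
        return DateTimeFormatter.RFC_1123_DATE_TIME
                .format(Instant.ofEpochMilli(lastFetchMillis).atZone(ZoneOffset.UTC));
    }

    public static void main(String[] args) {
        // 784887151000 ms is the classic RFC example date
        System.out.println(headerValue(784887151000L)); // Tue, 15 Nov 1994 08:12:31 GMT
    }
}
```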

Second interesting discovery: Nutch then doesn’t index pages that returned 304 (Not Modified), and since the index merging code doesn’t seem to work, I can’t index these pages that I cleverly managed to avoid downloading. Ok, I’ll fix IndexMapReduce and delete the code with the comment that says “// don’t index unmodified (empty) pages”, and resist the urge to send a cock-punch-over-IP to whoever wrote that comment for not realizing that “unmodified” does not mean “empty” by any stretch of the imagination.

Third interesting discovery: It turns out that some bright spark decided that when you’re crawling a page that’s never been loaded before, “datum.getFetchDate()” gets the current time, instead of any useful indication that it’s never been fetched before. So scratch my first fix, and go looking for why datum.getModifiedDate() isn’t set. And discover that it appears that datum.setModifiedDate() is never called except by code trying to force things to be recrawled. Yes, instead of forcing a new crawl by modifying the locally generated “fetch date”, they fuck around with the “modified date”, which is supposed to come originally from the server. My opinion of the quality of this crawler code is rapidly going downhill. But my patch to set the modification date according to the page’s metadata appears to be working. Sort of.
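The guts of that patch boil down to parsing the server’s Last-Modified response header into a timestamp and storing it, rather than treating the field as a recrawl-forcing knob. Sketched in plain Java (my own helper, not Nutch code):

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class LastModified {
    /** Parse an HTTP Last-Modified header into epoch millis; 0 if absent or unparseable. */
    static long parse(String header) {
        if (header == null) return 0L;
        try {
            return ZonedDateTime.parse(header, DateTimeFormatter.RFC_1123_DATE_TIME)
                    .toInstant().toEpochMilli();
        } catch (DateTimeParseException e) {
            return 0L; // no usable date: the page just gets refetched next time
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("Tue, 15 Nov 1994 08:12:31 GMT")); // 784887151000
    }
}
```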

Fourth discovery, and one I can’t blame on Nutch: My Rochester Flying Club pages use shtml (Server Parsed HTML) so that I could include a standard header and navigation bar in each page. I could have used a perl script to automatically insert the header into the pages and regenerate them whenever anything changed, but this seemed a lot easier at the time. But there’s one consequence I’d never noticed before – the server doesn’t send a “Last-Modified” header for server-parsed pages, so evidently these pages are never cached by any browser or crawler. Oops.
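For anyone unfamiliar with shtml, the include mechanism is just a server-side include directive in each page:

```html
<!-- pulled into every page by the server at request time: -->
<!--#include virtual="/header.shtml" -->
```

If the server is Apache, my understanding is that the `XBitHack full` directive makes it send a Last-Modified header based on the file’s mtime for SSI pages with the group-execute bit set – worth verifying against the mod_include docs before relying on it.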

How not to drum up business

There is a business here in Rochester that needs a lesson in how to do business. I’m not going to give them the exposure (or Google rank) of putting their name here, but their name sounds a little like “Cock fire”. The business they are in is actually of interest to me – it’s something I currently use, and something I recently solicited quotes for from numerous companies by going to a site that collects your requirements and sends them to registered providers. It’s also a business that members of a Linux Users Group, such as our own Linux Users Group of Rochester (LUGOR), might be more likely than the general public to want to do business with.

But “Cock fire”, instead of waiting for requests for quotes, or introducing themselves to the LUGOR group as a peer or contributor, instead decided to somehow mine our mailing list for email addresses, and then individually spammed the members of the list. When I got mine, I actually thought it was somehow related to my earlier request for quotes, until I realized that they’d sent it to both of the email addresses I’ve subscribed to the mailing list, not just the one I’d used in the RFQ. And then somebody else on the list mentioned that they’d gotten this spam to an address that they *only* use for the LUGOR list, and several other members piped up that they’d also gotten spammed, so we figured out what they’d done.

So well done, “Cock fire”. In spite of the fact that your product is actually $10 a month cheaper than what I currently pay your competitor, I’m not going to switch my existing use over, and neither am I going to recommend you to my current employer. Reap what you sow, assholes.

Update: I got a response from the email I sent them.

On behalf of [Cock fire] I would like to formally apologize for the e-mail marketing to your group. I was given 2 lists of e-mails from a Rochester Linux guru that said their group would be interested in Rochester based services. From the number of negative responses I have gotten back this was a mistake.

We have deleted all LUGOR e-mails and will not be in future communication. Please convey our apology to the group.

So it wasn’t his fault that they decided to spam, it was the fault of somebody who tempted him into it by giving him a list of email addresses. Oh, then that makes it all ok then? I don’t think so.

Obviously some sort of scam, but what?

We just got a voice mail welcoming us to a new service at Frontier. The call came from 570-631-4560, and the message said to call 888-791-9198. Obviously I was suspicious because we didn’t have any new service, and so I checked on-line and neither of those numbers matches anything that Frontier normally uses. I called Frontier’s advertised customer support number, and sure enough there was no change on our account and nothing to welcome us to. So obviously the point of the scam was to get you to reveal details of your phone account to some nefarious third party, but I wonder why?

I don’t have to ask whether people are stupid enough to call somebody they think is the phone company and not get suspicious when the caller doesn’t know their name and address from their phone number, because there is ample evidence of just how stupid people are all over the internet.

If you get a call from these scammers and you’re smart enough to not call them back without first googling it, I’m hoping that by posting it here somebody will find this warning. Good luck.