iPhone location data

Much has been made today about the fact that iPhones evidently collect some location data and store it in your backup files. I’ve only had my iPhone for a few weeks so it might be interesting to see what it’s collected so far.

Overview
All the location data for my phone
This is all the data on my phone. Although I’ve driven to Ithaca and back with the GPS on, and driven around Rochester with the GPS on, the data seems far too regular and grid-like to correspond to anywhere I’ve actually been. There is a cluster of points in and around the town of Auburn, NY, even though I haven’t been within about 10 miles of it. There is a small smattering of points along the route between Rochester and Ithaca, but nothing you’d call a smoking gun showing where I’ve been.

Rochester
Zoomed in on Rochester
Here I’ve zoomed in on Rochester, and I defy you to find some correlation between the position or size of those dots and where I’ve been since buying the phone, especially where I live.

The regularity of the grid makes me think that either the iPhone or the analysis program is grouping the data into regular intervals. Either way, I’m not sweating this.
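If the data really is being snapped to a grid, a tiny example shows how that would produce the pattern above. The 0.05-degree cell size and the coordinates here are made up purely for illustration:

```shell
# Hypothetical illustration: snapping lat/lon pairs to a fixed-size grid cell
# collapses nearby points onto the same dot, giving a regular grid pattern.
printf '43.1612 -77.6109\n43.1655 -77.6090\n42.9317 -76.5661\n' |
awk '{ g = 0.05; printf "%.2f %.2f\n", int($1/g)*g, int($2/g)*g }'
# the first two (nearby) points land in the same cell
```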

Continuing Saga, last (I hope) episode

Continued from here. I got the disks out to the colo facility and swapped them in. At first things didn’t come up right, because the machine had no network. Funny, because before shutting down at home I’d remembered to set /etc/network/interfaces and /etc/resolv.conf back to the values they need at the colo facility. Grepping through the dmesg output showed that, for some reason, eth0 had been renamed to eth2 and eth1 to eth3. Something tickled my memory from the last time I’d been through this: udev remembers the MAC addresses of the machine you set the disks up on, so when it boots on the new machine it thinks “aha, I already know where eth0 and eth1 are, so these new MAC addresses must be mapped somewhere else.” Unfortunately my own blog was down, so I couldn’t find what I’d written about this before, but after a quick Google search on my iPhone I removed /etc/udev/rules.d/70-persistent-net.rules and rebooted, and it all came up.
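The fix above can be sketched as follows. This works on a scratch copy of the rules file, and the MAC addresses are made up; on the real machine the file is /etc/udev/rules.d/70-persistent-net.rules:

```shell
# Work on a scratch copy of the rules file; the MACs below are invented.
rules=$(mktemp)
cat > "$rules" <<'EOF'
# Rules written on the OLD machine, keyed to its MAC addresses
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:66", NAME="eth1"
EOF

# The new machine's NICs have different MACs, so eth0/eth1 are "taken"
# and udev assigns the next free names, eth2/eth3.
grep -o 'NAME="eth[0-9]"' "$rules"

# The fix: delete the stale rules and reboot; udev regenerates the file
# with the new machine's MACs, and the names start at eth0 again.
rm "$rules"   # on the real system: rm /etc/udev/rules.d/70-persistent-net.rules
```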

Made sure I was talking to the net, and I could ssh into it from home, and then started up the guest domains. Made sure they were up and talking to the net as well. Made sure one of my web sites showed up on my iPhone. Buttoned up and went home.

Once I got home, I made some further checks that everything was up. As far as I can tell, it is. Now to run tiobench on the updated system.

Let’s have a look at some of these compared to the results I got with the Caviar Green disks running on the same hardware.

Test                         Old (Caviar Green)        New
Sequential read, best rate        44.49 MB/s       168.26 MB/s
Random read, best rate             0.39 MB/s         1.37 MB/s
Read max latency                1036.44 ms         743.22 ms
Sequential write, best rate       10.36 MB/s        82.46 MB/s
Random write, best rate            0.09 MB/s         1.76 MB/s
Write max latency             143896.67 ms        1748.55 ms

The biggest difference is the huge write latency on the old disks. I don’t know whether that was caused by the WD Caviar Green “spin down” feature or by disk errors, but either way it’s going to be a relief to see some performance again.

Raw results after the cut.

The continuing saga of replacing disks

Well, after last Monday, I realized that I wasn’t backing up all the files I’d need on the new system, so I had to change my nightly backup. Unfortunately that caused the nightly backup to take so long that it interfered with the next night’s backup, so when I got home from 2 days in Ithaca I discovered my nightly backups were kind of screwed up. I now have that mostly fixed up.
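One way to keep one night’s backup from colliding with the previous night’s still-running one is a lock via flock(1). This is a minimal sketch, with a placeholder lock path and no actual backup command:

```shell
# Minimal sketch: bail out of tonight's backup if last night's still holds
# the lock. The lock path is a placeholder, as is the backup step itself.
lock=${BACKUP_LOCK:-/tmp/nightly-backup.lock}
exec 9> "$lock"
if ! flock -n 9; then
    echo "previous backup still running; skipping this run" >&2
    exit 1
fi
# ... the actual backup (e.g. an rsync to the backup host) would go here ...
```

The lock is tied to file descriptor 9, so it is released automatically when the script exits, even if the backup crashes.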

I copied the latest backups over to the new server and started up the three guest OSes, only to find that one doesn’t start right, because it appears to be missing scripts from /lib/init/. I copied them over from another guest and it seems to be starting, as far as I can tell.

I also recently discovered the -H (--hard-links) option to rsync, which preserves hard links. This might be useful, although it seems to conflict with --link-dest.
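A quick way to see what -H does is to rsync a directory containing a hard-linked pair and check the link count on the destination; the temp-dir setup here is just for illustration:

```shell
# Demonstrate that rsync -H carries hard links over to the destination.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/a"
ln "$src/a" "$src/b"        # a and b now share one inode in the source

rsync -aH "$src/" "$dst/"   # -a archive mode, -H preserve hard links

stat -c %h "$dst/a"         # link count: 2 with -H, 1 without it
```

How -H interacts with --link-dest seems to depend on the rsync version, so it’s worth trying the combination on a small tree before trusting it with real backups.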

The current plan is to make another backup of the three guests and rsync it over. Then shut down the three guests and rsync the final changes over. Then whip out the disks, go running over to the colo facility, and try to get them in. Depending on how long the first step takes, I might wait until tomorrow.
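The two-pass copy can be sketched in miniature, with temp directories standing in for the old and new disks:

```shell
# Miniature of the two-pass copy: a bulk pass while the source is still
# "live", then a quick final pass once it stops changing.
old=$(mktemp -d); new=$(mktemp -d)

echo "guest image v1" > "$old/guest1.img"
rsync -a "$old/" "$new/"                   # pass 1: bulk copy while guests run

echo "guest image v2" > "$old/guest1.img"  # guest keeps writing meanwhile
# (in the real procedure, shut the guests down here)
rsync -a --delete "$old/" "$new/"          # pass 2: small final sync

cmp "$old/guest1.img" "$new/guest1.img" && echo "in sync"
```

The point of the split is that the second rsync only has to move the data that changed since the first pass, so the guests' downtime is short.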

That’s better!

On my test system, with the two new disks installed and the three domU guest systems set up and running (but not connected to the outside world, so the load is obviously lower), I ran tiobench in the dom0. Executive summary:
read/write rates around 5 times faster, and none of the awful latency. The worst latency on the Caviar disks was 121588.99 ms; on these disks it’s 6077.56 ms.

I can’t wait to get these disks over to the colo. If all goes well, it will just be a simple matter of shutting down the domUs, making a final copy from the old disks to the new disks, and then swapping the disks.

Full details below the cut.