Forth? Why Forth?

Had a dream last night, in which the programming language Forth played a big role. Which is a bit of a puzzlement because I’ve never learned to program in Forth. There was a time when Forth was tagged as the next big thing and every computer magazine had articles about it, but that was around the time when commercial software started advertising “written in C for speed”, and an interpreted language like Forth didn’t have a chance. I believe Forth was a major influence on PostScript.

The first thing I remember from the dream is seeing two small computers with a wire connecting their ‘pin 1’s. Somebody asked the Forth guru why they were connected, and I said “I know that, it’s so their clocks are in sync”, and added something about events on rising edges. I have no idea what that’s about – I’ve never done anything that low level. Evidently my dream self has been taking electrical engineering courses.

Later I was talking to the guru in front of three real train tracks, and every time a train went by, a single alphanumeric character above the track lit up. I wonder if that is some dream reference to the famous Tech Model Railroad Club, though I really only know about them from the Jargon File. I wonder if they used Forth?

Later the guru was showing me how to use ports to respond to external events and control things like lights and heat in a house. I distinctly remember a panel showing four roommates and an indicator of whether or not they were using Forth to control their thermostats.

Anyway, it seems odd to me to be dreaming about a programming language I’ve never used, and low level hardware stuff I’ve never done. Not sure if that’s a subconscious reflection of my recent surgery, or trying to do object oriented programming in Perl.

Perl and IDEs

From about 1987 to about 4 or 5 years ago, I did all my software development using vi (and later gvim), ctags, and all the Unix command line tools. But towards the end of my time at Kodak, I got the Eclipse religion, at least as far as doing Java. Sure, I dislike having to move my hands away from the keys to move the cursor around all the time, but the code completion, integrated debugging and all that other good stuff won me over. The ability to click on an existing method call and see the javadoc for the method, and to hit F3 and be taken to the actual code, was a game changer for me. So much better than ctags. But for non-Java, whether shell scripts at work or Perl at home, I still relied on gvim and the other command line tools.

But I’m about to start a huge, long-term Perl project, a large part of which is trying to learn all I can about an existing open source code base. So I wanted to see if an IDE would give me an advantage in moving around the code I’m trying to learn. I installed the EPIC plugin for Eclipse, and also a dedicated Perl IDE called “Padre”, and noodled around in both, and so far I’m forced to conclude that neither of them is as useful for Perl as Eclipse is for Java. The biggest missing feature is that F3 takes me to the wrong function or method declaration most of the time. I don’t know why; possibly Perl’s typing is too dynamic for the sort of static analysis and introspection that Eclipse can do on Java.

So I think I’m going to be back to doing gvim and ctags and find and grep and perldoc and all the other fun stuff.

This time I think it was the cache…

As I wrote in a couple of articles back in 2007, in 2004 I wrote a cache for part of the product I was working on at Kodak. In the first release to QA, I made sure that area of the code got tested thoroughly, and they found a bug, and fortunately I got it fixed before it went out to the customers. But to my chagrin, my boss and other people on the project got it into their heads that somehow any problem anywhere near that part of the product must be the fault of my cache, even though time and time again it was shown that there were no further bugs in that code over the following 3+ years.

Now flash forward to the product I’m working on now. We have a “go live to the very important customer” happening in just a few days, and we’re supposed to be in code semi-freeze. But the “Performance Project” just put their performance cache into the product, evidently without giving the local QA much chance to test it before it went to the customer’s QA. That seems just a little bit dangerous to me. But no matter, they assure me they’ve written tons of unit tests. So what could possibly go wrong?

Today the customer called up saying that they’re setting up a new client on the admin site, but every time they go to the “branding setup” for that new client, they see some other client’s branding setup. This branding consists of things like the client logo and some “terms and conditions” text and the like. Since they’ve got literally hundreds of QA people hitting this site, I naturally wondered if they weren’t seeing some interaction between multiple people messing with the setup. But after hours of poking around on their site, one of my peers and I (neither of us members of the “Performance Project”, I might add) are convinced it’s the performance cache. Evidently if you use one browser to look at one client’s branding, and then use a different browser to look at the branding of a client that hasn’t been set up yet, you see the branding from the client you’d looked at in the first browser. Somehow, when the database has no information for a client, the cache responds by pulling up some other client’s information. That’s not good.

Hopefully that will get fixed, and hopefully somebody will set up a test plan that tests what the cache does not just on a cache miss, but also on a database miss. And hopefully the important customer won’t think we’re all a bunch of idiots for not testing this properly.

Jealous, much?

So less than a week after I started using my new upgraded Linux box for lots of stuff, my laptop suddenly decided not to wake up out of sleep. When you reboot it, the light comes on and you can hear some minor activity inside, but you never get the startup chime, and the usual special keys to boot into diagnostics mode or single user mode don’t work. I think it’s jealous because I haven’t been using it as much. Or maybe it’s just under more stress because I’m opening and closing the lid and moving it around instead of leaving it tethered on my desk all the time.

Vicki has been talking for a while about getting a new laptop because her old MacBook Pro with only 3 GB of RAM keeps freezing up, especially when she’s doing Second Life, and especially since she “upgraded” to Lion. So we went off to the Apple store, her to get a new MacBook Pro, and me to get some help from the Genius Bar.

The Genius poked around, tried a few things I’d already tried and a few things I hadn’t, all to no avail. It wouldn’t stir. So he said “well, it looks like it needs a new logic board. We had a few problems with nVidia chipsets back around that time, so I’m going to write it up as one of those even though I can’t boot it far enough to run the graphics system diagnostic.” The upshot is that I’m going to be without my laptop for a week or more, and I’m going to get a new $500 logic board for free. Not too bad, I guess. Although if they’d tried to charge me for it, I probably would have just bought a MacBook Air instead. So maybe that’s a mixed blessing.

My new system

Back in 2007, I built a new box, mostly to act as my home server. It’s been a pretty decent home server, and I hadn’t really seen the need to upgrade it. But recently, my laptop, which has been my “everything desktop” machine, was showing signs of not having enough RAM for everything I do with it. Mostly it’s that every few days I’ll notice that Microsoft’s Remote Desktop Client (RDC) is grabbing absurd amounts of RAM – I’ll notice things getting slow, see that RDC is using 1 GB of RAM, then look back 2 minutes later and see it’s up to 1.2 GB. Since the laptop is topped out at 4 GB of RAM, there’s nothing more I can do about the lack of RAM, except stop trying to use it for both my work and my personal stuff.

So I hit on the idea of using my home server as a home desktop. I’d used it once before when my laptop was in the shop, and so I knew I could open a VPN and remote desktop into work. There were only two problems with it – it had less RAM than my laptop, and it couldn’t support 2 monitors. So I had a few choices:

  • Max out its RAM (I think it could support 8 GB) and buy a new video card
  • Replace motherboard/RAM/CPU/Video card with something more modern
  • Buy an entirely new Linux computer
  • Buy a Mac Pro

Unfortunately, the Mac Pro is *way* expensive. A new Linux computer would cost a hair over $1000, but an equivalent Mac Pro would be over $2500. So I decided to re-use the old box’s case, power supply (more on that later) and disks, and just replace the motherboard, RAM, CPU and video card. I spec’ed out a bundle from my go-to supplier, J & N Computer Services.

I also bought a second LCD monitor, this one a 24″ ViewSonic to go with my 24″ Dell. The bundle from JNCS was about $745 and the monitor was about $170.

Frankly, the extra CPU power is probably not all that important, since I’ve never been CPU bound before and I didn’t see that becoming a problem. But you can never have too much RAM, and 16 GB has me thinking that I might be able to run my development environment, WebSphere and Oracle here instead of RDCing into work. Maybe I can even run a VirtualBox VM or two.

Anyway, I got all that home and was setting it up, and discovered that the “connector conspiracy” has been at work again. The old power supply had a 20-pin, two 4-pin and one 6-pin power connector. The smaller ones are all pairs of +12VDC and ground. The new motherboard required the 20-pin and one of the 4-pins in the main socket, and then an 8-pin connector in the auxiliary socket. The 6-pin connector was keyed so I couldn’t use it in the 8-pin socket, and there were dire warnings about not running with just the 4-pin. So I ran down the road to FrozenCPU and got a PC Power and Cooling “Silencer Mk II” 650W supply to replace the CoolerMaster 500 that probably would have been perfectly adequate for the job if the connectors had lined up.

Anyway, as is my custom, here’s a comparison:

Old machine vs. new machine:

Processor
  Old: 1 × 64-bit dual-core Intel Core2 Duo E6320, 1.82 GHz, 4 MB cache
  New: 1 × 64-bit quad-core Intel Core i7-2600K (LGA 1155), 3.4 GHz, 8 MB cache
RAM
  Old: 2 × 1 GB DDR2-800
  New: 4 × 4 GB DDR3-1333 Kingston
Disks
  Old: 2 × 500 GB SATA II, 2 × 1 TB SATA II
  New: 2 × 500 GB SATA II, 2 × 1 TB SATA II (unchanged)
Ports
  Old: 6 × USB 2.0, 2 × FireWire, 10/100/1000 Ethernet, serial, parallel, 6 × SATA II, audio, video
  New: 2 × USB 3.0, 12 × USB 2.0, 2 × FireWire, GigE Ethernet, PS/2, 2 × SATA 6 Gb/s, 4 × SATA 3 Gb/s, audio, video, DVI, DisplayPort, HDMI, etc.
Fans
  Old: 2 × 120 mm case fans, 1 × 70 mm heat sink fan, 1 × 120 mm power supply fan
  New: same as the old machine

After I got it set up, I discovered a couple of problems.

The first problem was trying to get the second monitor set up. The “non-free” drivers in Ubuntu didn’t support this video card, and I had a hell of a time getting the binary drivers from the nVidia web site to load – basically, instead of just running the “.run” file that you download, I had to extract it with “--extract-only” and then run the installer, and run it again with the “-K” option, or something like that. Whatever I did, it was a mixture of black magic and cargo culting, and it eventually worked. I had to borrow the HDMI cable off the DVD player, but we use the DVD player so infrequently that we hadn’t noticed that the cable had fallen off and grown cobwebs.
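In case I ever have to do it again, the sequence was roughly this (reconstructed from memory, so treat it as a sketch – the installer file name and version are whatever nVidia’s site hands you):

```shell
# Unpack the installer instead of letting it run straight through.
sh NVIDIA-Linux-x86_64-285.05.09.run --extract-only   # file name illustrative
cd NVIDIA-Linux-x86_64-285.05.09

# Run the unpacked installer; the -K pass installs just the kernel module.
sudo ./nvidia-installer
sudo ./nvidia-installer -K
```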

The second problem I discovered is that a /tmp partition sized for server use isn’t big enough for interactive use – especially when you watch YouTube videos (it appears to cache them in /tmp). Fortunately I had used LVM, so it was possible to resize the partition. The only trick was figuring out how to boot in single user mode so I could do it without /tmp being in use.
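The resize itself, from single user mode, looks roughly like this (a sketch – the volume group and volume names here are hypothetical, so take the real ones from lvdisplay, and it assumes /tmp is ext3/ext4):

```shell
# /tmp must be unmounted while we work on it (hence single user mode).
umount /tmp

# Grow the logical volume by 4 GB (device names are placeholders).
lvextend -L +4G /dev/vg0/tmp

# Check the filesystem, then grow it to fill the enlarged volume.
e2fsck -f /dev/vg0/tmp
resize2fs /dev/vg0/tmp

mount /tmp
```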

I’m still trying to figure out how to get the VPN tunnel working. I copied the config files I use on the MacBook, and I copied the setup I used back when I used the Linux box to VPN into our old location in Genoa, but I couldn’t get it to work. Eventually I got it so I can open a VPN with the command line “sudo openvpn --config ~/ovpn/dmr.ovpn”. What I need to figure out next is how to simultaneously open a second VPN to Genoa, because our SVN server lives there and I want to be able to check stuff out from there.
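In principle OpenVPN will run two tunnels side by side if each one gets its own tun device. A sketch (dmr.ovpn is the config from above; genoa.ovpn is just my placeholder name for the second one):

```shell
# Pin each tunnel to its own device so they don't both grab tun0,
# and background them with --daemon.
sudo openvpn --config ~/ovpn/dmr.ovpn   --dev tun0 --daemon
sudo openvpn --config ~/ovpn/genoa.ovpn --dev tun1 --daemon

# The two configs also need routes that don't overlap; sanity-check with:
ip route
```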

I also had a bit of a problem with the remote desktop client. When I first set things up, I’d open the remote desktop client “full screen” and it would only take up one of my two screens. But I made a few minor changes (or so I thought) to my configuration, and now when I specify “full screen”, it covers both screens, which I don’t want. Other apps, when made fullscreen, only take up one screen. So again, I resort to the command line:
rdesktop -g '1920x1080' -D -r sound:local:driver:oss -r clipboard:PRIMARYCLIPBOARD 10.255.120.119
Unfortunately that usually ends up on screen 1 instead of screen 2, so I have to do some tricks to make it work. Also, every now and then cut and paste stops working in the session, even within Windows. When that happens, I have to use the Task Manager to find the “rdpclip.exe” process and kill it. That gets cut and paste working within Windows again, but unfortunately kills cut and paste between Linux and Windows.
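One trick that might handle the placement (a sketch, assuming wmctrl is installed and the screens sit side by side with the left one 1920 pixels wide):

```shell
# Move the first window whose title matches "rdesktop" to the second
# screen (x offset = width of screen 1), then raise it.
wmctrl -r rdesktop -e 0,1920,0,-1,-1
wmctrl -a rdesktop
```

And on the Windows side, starting rdpclip.exe again after killing it (Task Manager → File → New Task → “rdpclip.exe”) may bring the Linux-to-Windows clipboard back too, though I haven’t verified that.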

Last night I upgraded Ubuntu from 10.04 LTS to 11.04 to see if it would help with the rdesktop problem and some other minor issues. We’ll see.