I have always been a big fan of computer books. However, it is usually not books about a specific technology or tool that I find the most interesting, but books about the process and psychology of development.

In September last year I spent a few weeks on the West Coast of the USA, and one of the highlights was a visit to Powell’s Books in Portland, Oregon. There I had the pleasure of getting my hands on a first edition copy of Gerald Weinberg’s excellent “The Psychology of Computer Programming” from 1971. I have owned the Silver Anniversary edition from 1996 for over ten years, but getting hold of a first edition was too good an opportunity to pass up.


This is the book that first featured the term “egoless programming”, the concept of separating the coder from the code and the notion that having your code criticised does not mean that you are being criticised as a person. The book contains a lot of humorous anecdotes that bring the lessons to life, such as the one regarding “egoless programming” where the programmer Bill G. (yes, an amusing coincidence) feels that his code is ready for review, and asks his colleague Marilyn B. to review it.

In this particular instance, Bill had been having one of his “bad programming days.” As Marilyn worked and worked over the code – as she found one error after another – he became more and more amused, rather than more and more defensive as he might have done had he been trained as so many of our programmers are. Finally, he emerged from their conference announcing to the world the startling fact that Marilyn had been able to find seventeen bugs in only thirteen statements. He insisted on showing everyone who would listen how this had been possible. In fact, since the very exercise had proved to him that this was not his day for coding, he simply spent the rest of the day telling and retelling the episode in all its hilarious details.

Another thing about the book that stands out for me is its role as a historical document of how development was done “back in the day”, way before my time: the days of Fortran, COBOL, interactive terminals, keypunch operators, print-outs and so on. In the Silver Anniversary edition, which features comments on each chapter 25 years later, Weinberg states his envy of then-current – i.e. 1996 – tools and how they make him drool (even jokingly using the term “Drool Tools”). Today, another 17 years later, I find myself drooling over today’s tools compared to what we had in 1996. I don’t even want to think about 1971.

A previous owner has written his name inside the front cover; I believe it says Jim Campbell. And as a bonus, the book also included a “While you were out” card from OSECO, 3505 Elston Ave, Chicago. This in itself is a fascinating piece of history; I suppose a previous owner of the book used it as a bookmark. Perhaps it was Jim.


I look forward to a Gold Edition of this book in 2021!

Introducing Patina

July 16, 2013

Patina is a tool for finding occurrences of byte patterns from one file within the data of another.

For instance, let’s say you have two files. The first file contains this:


And the second file contains:


Then a match is found for the values 12345, since the second file contains this sequence.

OK, so let’s consider another scenario. The first file contains:


And the second file contains:


Notice the missing “5” in the second file. Obviously, no match will be found for the full sequence “12345”. So what will Patina do? Well, it will find a match for “1234”.

Patina starts with the longest possible sequence from the first file and looks for it in the second file. When no match is found for “12345”, it shortens the search: the next sequences to check are “1234”, then “2345”, then “123”, “234”, “345”, and so on.

However, since Patina has already found a match for “1234”, it will not also find a match for “123” or “234”. The purpose is to get the longest possible matches and ignore any smaller matches within the found match.
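The search described above can be sketched roughly like this. This is a minimal Python sketch rather than Patina’s actual implementation, and the function and variable names are my own:

```python
def find_longest_matches(pattern_data: bytes, target_data: bytes):
    """Find the longest sub-sequences of pattern_data that occur in
    target_data, skipping shorter sequences that lie entirely inside
    an already-found match."""
    matches = []   # (start, length) pairs within pattern_data
    covered = set()  # positions in pattern_data already part of a match

    # Try the longest sequences first, then progressively shorter ones.
    for length in range(len(pattern_data), 0, -1):
        for start in range(len(pattern_data) - length + 1):
            positions = range(start, start + length)
            # Ignore any smaller match fully contained in a found match.
            if all(p in covered for p in positions):
                continue
            if pattern_data[start:start + length] in target_data:
                matches.append((start, length))
                covered.update(positions)
    return matches
```

With the second scenario above, `find_longest_matches(b"12345", b"1234...")` reports the match for “1234” and then never reconsiders “123” or “234”, since those positions are already covered.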

Any matches that are found are simply output to the screen, in a format like this:

Data at position 0 (16 bytes) found in 2 place(s).

Of course it’s pretty useless to just get the output to the screen, but I’ll get around to dumping it as XML or JSON eventually.

The source code is on GitHub: https://github.com/codemeyer/Patina

Since reading Phil Haack’s blog post about using a Fitbit step counter, I had felt the need to try one myself. I always enjoy adding a bit of “geek” to any element of my life, and measuring the steps I take, the number of calories I burn and the number of stairs I climb seemed like just the thing – especially when I spend my entire working day just sitting at a desk. After a few months of procrastinating I finally got a Fitbit One at the beginning of April.

As mentioned, the Fitbit One measures the number of steps I take, and how many flights of stairs I climb. This data is then presented in a pleasing manner, either in the app (for iPhone or Android) or on their web site. You get colorful graphs such as the one below, which in this case displays the steps I’ve taken on a relatively active day this summer.


Recently, I extended my Fitbit family by purchasing a Fitbit Aria scale. It measures my weight and body fat as a percentage and then uploads the data to Fitbit via Wi-Fi. Even if the body fat percentage is less than scientifically accurate, I figure that as long as the discrepancy in its measurements stays constant, I will get an indication of the general “trend”, as it were.

Currently I’m averaging somewhere around 50% more steps per day than when I started measuring three months ago, so it’s obviously working! And since I recently convinced my wife to get one, I have added a new dimension of competitiveness to it all. It’s on now!

Jaevner is a tool for exporting data from a Lotus Notes calendar and importing it into a Google Calendar. Yes, there are other tools available that do this. Yes, it can be done using the export and import features of Lotus Notes and Google Calendar. But I have created this anyway.

It is a simple one-way export/import. Any changes you make on the Google Calendar side will be overwritten the next time the tool is run.

At its core it has two parts: a piece of LotusScript code that runs in Lotus Notes and exports your calendar data as a CSV file, and a console application that reads the exported calendar data and inserts it into a Google Calendar of your choice. The LotusScript code also starts the console application.
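As a rough illustration, the console-application side might read the exported file along these lines. This is a Python sketch rather than the actual C# code, and the CSV column names are my own assumption:

```python
import csv
from datetime import datetime

def read_exported_events(path):
    """Read calendar entries from the CSV file produced by the
    LotusScript export. The column layout (subject, start, end,
    location) is assumed here, not taken from the real tool."""
    events = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            events.append({
                "summary": row["subject"],
                "start": datetime.fromisoformat(row["start"]),
                "end": datetime.fromisoformat(row["end"]),
                "location": row.get("location", ""),
            })
    return events
```

Each resulting dictionary would then be handed to the Google Calendar API for insertion; in a one-way sync like this, the existing events in the target calendar are simply replaced on each run.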

The source code can be found on GitHub: https://github.com/codemeyer/Jaevner

Timeless games

April 24, 2013

I enjoy playing computer games, but I’m not a hardcore gamer by any means. I probably get a few hours in every week nowadays, and I am still fortunate (or unfortunate) enough to have the ability to get totally sucked into a game. There is a good number of games that I like to play every now and then, but at the end of the day, two games stand head and shoulders above the others by my reckoning.

Sensible World of Soccer

Until the release of Sensible Soccer, the Kick Off series was pretty much considered the best football game on the Amiga. I remember playing it a lot with my brother and enjoying it a great deal, but I also remember how difficult it was. It was very hard to have any sense of control over the ball, and the vastness of the football field meant that you had to use the radar/scanner in the corner of the screen to get a sense of where your players were.


In my mind, Sensible Soccer took the good parts from the Kick Off series and improved on pretty much everything. The graphics were more engaging – charming, even – and the fact that you always see either the penalty box or the center circle of the field means that you have a much better understanding of where your players are and what is going on.

And then there was Sensible World of Soccer, which added a managerial side to the game and took it to another level. Having a 20-year career where you could choose pretty much any club team in the whole world, build that team over several seasons, move to another team or even become manager of your national side added a new dimension to the game.

I must confess I haven’t played Sensible World of Soccer much in the past few years, but I have written a program that lets you search for players based on different criteria. It has the incredibly clever name SwosPlayerFinder, and the source code is available on GitHub.

Formula One Grand Prix

Geoff Crammond is the creator of the legendary Grand Prix series of games. The first in the series was released in early 1992 and set the standard for racing games to come.


I can still remember the day in February 1992 when I was at the computer games shop. I was going to buy a game for my Amiga 500, and I had narrowed the choice down to two games: Formula One Grand Prix and Birds of Prey. I still praise my 12-year-old self for going for a milestone in computer racing games instead of another bland flight simulator. Indeed, this is the game that aroused my passion for motorsports in general and Formula 1 in particular.

So where am I going with this?

First of all, it is a strange coincidence that both of these games were released in 1992. But that is of little relevance.

Yes, the graphics are dated of course. But to me, graphics are only a small part of what makes a game worth playing. Playability trumps all, and these games have playability coming out of their metaphorical ears. And the conclusion to it all is that to me it doesn’t matter how good your game looks if the playability is missing. Some modern game creators should probably take note!

Back in the good old days when I started doing unit testing and test-driven development, the way I ran tests was to start the NUnit Windows application, run the tests, wait for them all to go through and examine the result. Not exactly slow, but there is some friction, and a hint of context switching as a new window appears right on top of Visual Studio.

Earlier this year I purchased ReSharper from JetBrains. Using their test-runner certainly reduced the friction compared to NUnit, but there were still the explicit steps of writing code, starting the unit tests, etc.

And then I discovered NCrunch, and all the friction was gone.

So what is NCrunch? To quote their website:

NCrunch is an automated concurrent testing tool for Visual Studio .NET.

It intelligently runs automated tests so that you don’t have to, and gives you a huge amount of useful information about your tested code, such as code coverage and performance metrics, inline in your IDE while you type.

What this means in practice is that NCrunch runs your unit tests as soon as you edit your code. I cannot stress enough what a game-changer this is. The feedback loop is cut down to practically nothing, and there is no need to stop and wait, or launch an external program. It’s just there and does its thing, and you can keep on writing code instead of continually disrupting your flow.

Once you start using NCrunch (or a similar tool, there are others, of course), it is very hard to go back to not having it. Like when you start using ReSharper and then use Visual Studio without it, it feels like something is missing.

The Single Responsibility Principle states that each class should have a single reason to change. There are certainly many ways to determine whether a class has too many responsibilities, most of which actually require you to use your brain! However, a quick-and-dirty way to establish if a class has too many responsibilities is simply to look at the list of using statements at the top of the file (imports in Java and Visual Basic .NET). If you open the source file in your IDE of choice and the list of usings/imports fills your entire view, the class is likely to have a lot of reasons to change.

Here is what the list of using-directives in one of our classes looked like a few months ago:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Mail;
using System.Net.Mime;
using System.Text;
using System.Text.RegularExpressions;

The likelihood that a class like this was developed using test-driven development seems slim (and indeed, it was not).

I don’t really believe that we can set a strict “maximum” number of using directives to allow, but anything more than six or seven should probably serve as an indication that the class in question could do with some refactoring. Perhaps we can use the number of usings/imports as a metric describing code complexity, albeit a very blunt one.
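As a rough illustration, the heuristic could even be automated. This is a minimal Python sketch of the idea, using the six-or-seven guess from above as the threshold; the pattern only covers plain using/import/Imports lines:

```python
import re

# Matches lines like "using System.Net.Mail;" or "import java.util.List;"
DIRECTIVE = re.compile(r"^\s*(using|import|Imports)\s+[\w.]+\s*;?\s*$")

def count_usings(source: str) -> int:
    """Count the using/import directives in a source file's text."""
    return sum(1 for line in source.splitlines() if DIRECTIVE.match(line))

def too_many_responsibilities(source: str, threshold: int = 7) -> bool:
    """Flag a file whose using/import count exceeds the threshold."""
    return count_usings(source) > threshold
```

Run over the class shown above, it would count eleven directives and flag the file, which matches the eyeball test nicely.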