After almost five years, I finally decided to pave my Boot Camp installation of Windows 10 on my MacBook Pro. After all, it had started life as a Windows 8 installation, and there had been lots of stuff installed on it during these years, so I figured that starting with a clean slate was the way to go. Certainly, five different versions of Visual Studio (though not at the same time, thankfully) leave a lot of stuff behind.

Reinstalling Windows 10 with Boot Camp was smooth and unproblematic. However, once I installed Office 2016 (as part of my Office 365 subscription) and tried to run Word, I got this error message:

winword-unable-to-start

I looked around online for some clues, tried the basic stuff such as uninstalling and reinstalling, but no luck. I also ran some troubleshooting wizards from Microsoft’s site, again without success.

Finally, I found a hint in a thread on Reddit, saying that:

There was a GPO in place that seems to break Microsoft Office apps. It’s an Admin template that enables remote link to remote target symbolic links. For reasons that I cannot say, this prevents office applications from loading. Disabling this GPO allows the programs to open without a problem.

GPOs, or Group Policy Objects, are a Windows feature for controlling the operating environment, and they can be managed centrally, for example when the computer is part of a domain. That this GPO was involved in the problem made some sense, since according to the NTSTATUS error codes page on MSDN, error 0xC0000715 is related to symbolic links that cannot be followed because their type is disabled:

0xC0000715 STATUS_SYMLINK_CLASS_DISABLED
The symbolic link cannot be followed because its type is disabled.

However, if the problem was that a type of symbolic link was disabled, how would disabling a type solve anything? I wasn’t sure, but of course I was eager to give the proposed solution a try, only in reverse: enabling instead of disabling. The exact name and location of the Group Policy Object wasn’t specified in the post, but finding it in the Group Policy Editor was rather straightforward: Computer Configuration > Administrative Templates > System > Filesystem > Selectively allow the evaluation of a symbolic link.

gpo-configuration

So I gave editing the GPO a go, enabling the Local Link to Local Target type (as you can see in the screenshot above). After a reboot… success! All Office programs now work.

A few days later I installed Office 2016 on my gaming computer, and this time it just worked. Good thing too, since it’s a Windows 10 Home machine, and the Home edition of Windows doesn’t contain the Group Policy Editor…
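(For what it’s worth, I believe the same symbolic link evaluation types can also be toggled without the Group Policy Editor, by running fsutil behavior set SymlinkEvaluation L2L:1 from an elevated command prompt – though I have not needed to try that route myself.)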

Introducing the ArgData API

February 22, 2017

I have already written about my love for Microprose Formula One Grand Prix, so I won’t go into that again. But today is special, because it marks the 25th anniversary of me purchasing the game for the Amiga. Ah yes, on 22nd February 1992, a gawky twelve-year-old bought a game that would forever change his life! Good times!

To celebrate that momentous occasion, I figured that I should try to release some of the stuff I’m working on related to F1GP, and so here it is: ArgData!

The ArgData API allows you to edit a lot of things related to F1GP, such as car colors, driver performance, player horsepower levels and various settings. The full list of features is available both at ArgData’s own site, and at its GitHub page. What the API does for you is provide a class library with helpful classes and methods for updating all sorts of data, so that you don’t have to know the exact byte location of stuff inside F1GP’s GP.EXE file.

As an example, this is how you would update the player’s (i.e., your) horsepower level, “cheating” to make you faster in a straight line than any of the AI cars 🙂

var exeFile = GpExeFile.At(@"C:\Games\GPRIX\GP.EXE");
var writer = PlayerHorsepowerWriter.For(exeFile);
writer.WritePlayerHorsepower(999);   // default is 716, LOL

Surely you can agree that this is easier than knowing that you should write the value 22,610 at byte position 19,848? That is just one example, and I won’t go into too much detail here regarding how the API is used; the other sites provide that information. There is even a reference section at the ArgData site. Lots of stuff in the documentation needs improving, but it’s a start. And this is just version 0.14 of the API; there is still a bit of stuff that can and will be added.
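For the curious, here is roughly what that edit looks like without the API – a sketch for comparison only, using the offset and value quoted above. The assumption that the value is stored as a 16-bit little-endian integer is mine:

using System;
using System.IO;

// Poking the bytes directly: the kind of thing ArgData saves you from.
// The offset and value are the ones quoted above; the 16-bit
// little-endian layout is an assumption.
using (var stream = new FileStream(@"C:\Games\GPRIX\GP.EXE", FileMode.Open, FileAccess.Write))
{
    stream.Position = 19848;                            // byte position from above
    var bytes = BitConverter.GetBytes((ushort)22610);   // value from above
    stream.Write(bytes, 0, bytes.Length);
}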

On another note, there may (or may not) be some stuff related to the ArgData project that is worthy of future blog posts, so if I get around to it, maybe I will fill in some details here and there in due time. Build server setup, automated tests, publishing to NuGet, etc.

But until then, go ahead and grab ArgData from NuGet if you feel like it, and start messing around. To get started, you can follow the Quick Start guide and change the color at the top of the sidepods to make the 1991 Jordan go from the one on the left to the one on the right. Fancy stuff, eh?

jordan-editing

Links

ArgData’s website: http://manicomio.se/argdata/
NuGet package: https://www.nuget.org/packages/ArgData/
GitHub source: https://github.com/codemeyer/ArgData


For my next web site, I intend to try something extraordinary… special… plain weird. I will go down the security-through-obscurity route to user accounts and logins.

It’s an idea I have had for a while, and I’m sure that it’s been implemented elsewhere in various shapes and forms, but I figured that I will give it a go. And I’m only considering it because the web site I’m thinking of building will not hold any mission-critical, sensitive or otherwise volatile information.

The idea is simply this: Abandon logins and use a unique id/hash/token/whatever-you-want-to-call-it. So basically, you go to the website and say “Hey, I want to create an account, here’s my e-mail address!”

What the website will reply is simply:

Hello user@example.com

We have created an account for you. Go here to start using our service:

http://randomcodemeyerservice.com/223bf8191b494266bdb912d6b292fbee

Yes, the only thing you need in order to log in is the URL. Make sure to bookmark it so that you don’t forget it. And in case you do forget it, the service will have e-mailed it to you.

The advantages? No need to log in and no passwords to remember. Just visit the URL and you’re off!

The disadvantages? Others can guess the URL and “hack” you. But they are unlikely to succeed for a good while. And if they do, it’s just non-mission-critical, non-sensitive, non-volatile data that they get hold of anyway. In my case, I imagine something like an RSS feed reader or something similar.

What happens if you accidentally lose your unique URL? Well, the service has your e-mail address, so it can easily send it to you.

For a service where I would like to share data with other people, it would be totally feasible to generate additional unique URLs for those users, leading to the same account data but with different access levels.

I don’t know… Could it be stupid enough to actually work? I will try it some time to find out.
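If I ever do, the account-creation step itself is at least trivial. A minimal sketch in C#, where all the names are hypothetical since the service doesn’t exist yet:

using System;

// A minimal sketch of the URL-as-login account creation. All names are
// hypothetical; storing the account and e-mailing the URL are left out.
public class AccountService
{
    public string CreateAccount(string email)
    {
        // 32 hex characters, like the example URL above. For real use, a
        // cryptographically strong random token would be preferable to a
        // GUID, but this shows the idea.
        var token = Guid.NewGuid().ToString("N");

        var url = "http://randomcodemeyerservice.com/" + token;

        // Store (email, token) and e-mail the URL to the user here.

        return url;
    }
}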

At a previous job I was given the mission of constructing a development task that we could let prospective team members complete. Having been on the receiving end of several of these tasks, I had a rough idea of what to include and what to avoid. Basically, I was looking for the smallest possible task that would still give a good glimpse into the developer’s way of constructing code and solving problems.

So I went with FizzBuzz. Or rather, a variation on FizzBuzz. But first, the basic idea of FizzBuzz is to write a program that prints out the numbers from 1 to 100. For multiples of three, the number should be replaced by “Fizz”, for multiples of five, the number should be replaced by “Buzz”, and for multiples of both three and five, write “FizzBuzz”.

1
2
Fizz
4
Buzz
(etc)

Usually whenever someone writes a blog post about FizzBuzz, the comments fill up with examples of how to do it. This is all very well, but my favorite implementation has to be the quite fantastic FizzBuzz Enterprise project on GitHub.

public static boolean numberIsMultipleOfAnotherNumber(int nFirstNumber, int nSecondNumber) {
    try{
        final int nDivideFirstIntegerBySecondIntegerResult =
            (IntegerDivider.divide(nFirstNumber, nSecondNumber));
        final int nMultiplyDivisionResultBySecondIntegerResult =
            nDivideFirstIntegerBySecondIntegerResult * nSecondNumber;
        if (IntegerForEqualityComparator.areTwoIntegersEqual(nMultiplyDivisionResultBySecondIntegerResult, nFirstNumber)) {
            return true;
        } else {
            return false;
        }
    } catch( ArithmeticException ae ){
        return false;
    }
}

Pure enterprise at its best!

Anyway, the core of our task was doing FizzBuzz as a C# console application. The added twist was that there should be unit tests that fully covered the logic of which text to print. As an aside, we mentioned that it was not necessary to test the console client portion of the code. This was basically inserted to see whether the applicant would leave the 1-to-100 loop untested. Most did, and in my evaluation I did not hold it against them – but it was interesting to see who went the extra mile to make the loop logic testable as well.
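To make the distinction concrete, here is a minimal sketch (not a model answer!) of the kind of separation that makes both the text logic and the loop testable:

using System;
using System.Collections.Generic;

// The text logic and the 1-to-100 loop are both plain methods that unit
// tests can call directly; the console client just wires them together.
public static class FizzBuzz
{
    public static string TextFor(int number)
    {
        if (number % 15 == 0) return "FizzBuzz";
        if (number % 3 == 0) return "Fizz";
        if (number % 5 == 0) return "Buzz";
        return number.ToString();
    }

    public static IEnumerable<string> Sequence(int from, int to)
    {
        for (var i = from; i <= to; i++)
        {
            yield return TextFor(i);
        }
    }
}

// The console client, deliberately too thin to need tests of its own:
//
//     foreach (var line in FizzBuzz.Sequence(1, 100))
//     {
//         Console.WriteLine(line);
//     }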

During my time of evaluating solutions I don’t think that we ever actually got one that failed the basic “1, 2, Fizz, 4, Buzz” part of the task. We did, however, get some less-than-satisfactory ways of unit testing parts of the logic. For instance, one candidate only included a single unit test, which checked that the modulo operator returned the expected result. Sorry, you fail the test!

It was also possible to fail by writing too many tests, but this was generally preferable to writing too few. A worse way to fail was to write convoluted code with weird flow control statements. The basic FizzBuzz logic isn’t that complicated, so it should be possible to express it in a succinct way.

Now, this test was just a small part of the recruitment process, and other criteria weighed far heavier, but sometimes terrible performance on the FizzBuzz test made us skip a candidate. This may very well mean that we missed out on good people, since the test is far from a perfect indicator of future performance. And unfortunately I never had the chance to discuss the solutions with the developers, talking about how they had approached the problem, how they reasoned when writing the code, and so forth. But overall I think it worked out pretty well, and since no two solutions looked the same, it was always interesting to investigate, evaluate and learn.

Finally, our task did not use the words “fizz” and “buzz”, thereby decreasing the potential for finding a solution using Google. No cheating!

I have a few private repositories at GitHub, and I was looking to set up a continuous integration/build solution for one of them. If it had been a public repository I would probably have looked at Travis CI or AppVeyor, which are free for open source projects. However, they require you to fork out a bit of cash for private repos. I usually don’t mind paying for software, especially not good software, but since I already have a server running at home, I figured that TeamCity could do the job nicely. It is free if you have fewer than 20 build configurations, it has a lot of great features, and I am quite familiar with it after using it at work for the past year.

So the basic need I had was for TeamCity to be able to access and fetch the code for a private GitHub repo. Now, there are a few different ways of setting up access to a private GitHub repo, and having investigated the different available options I settled on using what GitHub calls a deploy key. In short, this is an SSH key that gives full access to the repository.

This guide will assume that you have a repository in GitHub, and that you have a TeamCity project set up.

Generating the SSH key

So let’s start by creating the SSH key that we will use. The simplest way I could think of was to use the PuTTYgen tool. If you don’t have it, you can find it at the PuTTY download page, or – if you are running Windows – install PuTTY through Chocolatey.

Once you’ve generated the key, copy all the text from the “Public key for pasting into OpenSSH authorized_keys file” onto the clipboard. This will go into your GitHub project settings. We’ll need the private part of the key later, so save it to disk by clicking on “Save private key”.

Setting up your GitHub project

Next, go to the Settings page for your GitHub repository. In the menu on the left, you’ll find the Deploy Keys entry. Go there and click on the “Add deploy key” button. Enter a title for your deploy key and then paste the contents of your clipboard into the “Key” part of the form.

add-deploy-key

Click on “Add key”. There, you’ve just added a deploy key to your project in GitHub. Now for TeamCity…

Setting up TeamCity

TeamCity will be using the private part of the SSH key, the one that we saved to disk earlier on. First, go to your project settings page. Go to the SSH Keys menu item, and click on “Upload SSH key”. You can give the key a title, or just select the file and use the filename.

upload-ssh-key

Click “Save” and it will be added to your project. However, there’s one more thing you have to do.

Whether you’ve already set up the VCS root for the TeamCity project or not, what you need to configure in the VCS root is under the Authentication Settings section. For Authentication method, choose “Uploaded Key”. The Username must be set to “git”, and in the Uploaded Key dropdown you simply select the SSH key that you just uploaded.

teamcity-authentication-settings

We’re done! To try it out, scroll to the bottom and click on “Test connection”. Hopefully, you’ll see this:

teamcity-test-connection

Success!

If not, make sure to double-check all the settings. Also, bear in mind that the SSH key allows full access to the GitHub repository, so make sure that you keep it in a safe place.

TeamCity can now get the source code from GitHub for your project. Now there’s only the matter of writing the software that TeamCity will continuously integrate – and perhaps deploy – for you! But that’s the easy part, right?

Making small commits often is usually preferable to making massive once-a-day commits – especially if you want someone to review your changes. For me, my stress levels rise with the number of files that have been touched since the last commit. When commits grow too big I’ve been known to stash the code and redo the changes, breaking up what would have been a large commit into a number of smaller ones. Yes, sometimes large commits are inevitable. They happen. Things could be worse.

By worse, I mean like the two types of commits that have been bothering me on occasion lately.

The first is the one where the commit strays off topic, delving into multiple features at once. For example, you find a commit with the following description:

Added ATOM parsing to RssFeedReader
Validation of ExpirationDate for user account

OK, yeah, those two changes seem totally related. Or maybe they should have been two separate commits? Imagine that we wanted to merge one of these changes to another branch. The Single Responsibility Principle probably applies to commits as well.

However, there is another type that actually bothers me more: The overzealous drive-by code cleaning.

Imagine that you’re looking at the history of the UserAccount.cs file. One of the changes has the following commit message:

Added ATOM parsing to RssFeedReader

Why would RSS parsing affect the UserAccount class? You’re sitting there, scratching your head, wondering why a completely unrelated file has been touched. Upon closer inspection you discover what changed. One line. One unused using statement was removed.

Indeed, that is quite unrelated to adding ATOM parsing to an RSS feed reader. So not only has the history of the UserAccount.cs file been distorted, it was done for pretty much no business value whatsoever.

Now, maybe I’m nitpicking. By all means, clean up the code. But try to keep each commit focused and coherent, instead of going off fixing “problems” in unrelated files. Small fixes like adjusting spacing and removing unused using statements should really only be done if you already had another reason to fiddle with the file. Alternatively, do all the tiny fixes together in one go without changing anything else. Then you can check the changes in with a suitable comment like “Removing unused using statements”. Accurate, succinct and to the point. Clean code. Even cleaner source control history.

Only Sixty Percent Finished

October 31, 2013

In the days before the Internet, computer game reviews were written in dead-tree magazines. Naturally, magazines had to go to press a few weeks or so before they would appear in shops. So if a game was to be released for Christmas to cash in on season sales, the game had to be reviewed in magazines that were in the shops earlier in December. This may very well have meant that the game had to be sent to magazines to be reviewed as early as October or November. Game development, like most other sorts of software development, tends to go on until the very last moment. So what happened if a game wasn’t quite finished when it was time for reviews? Simple: hand the magazines an incomplete version!

Amiga Power reviewed Super Stardust in October 1994, long before any other magazines got to try it. Four yellow headers appear throughout the review. When rearranged, they spell out:

only-sixty-percent-finished

Quite why they left such an obvious clue to the fact that the reviewed game contained only three of its five levels is a good question; perhaps it was to tease the other magazines, since Amiga Power had beaten them to a review by several months.

Sensible World of Soccer fared similarly. Amiga Magazine Rack – a website dedicated to scans of old Amiga magazines, and general keeper of knowledge – tells the story:

The biggest reader backlash in Amiga Power’s history was due to reviewing the unfinished game Sensible World Of Soccer in AP44. It was awarded 95% and declared “The best Amiga game ever.”

A flood of complaints rolled in regarding bugs in the game. AP came clean regarding the review and invited Sensible Software to address the complaints. Chris Chapman and Jon Hare answered the criticisms, which Stuart Campbell in his role as Sensible’s Development manager assembled into an amusing column called “Swiz” in AP48 on pages 24-26.

It was unfortunate that the answers were made funny as they left Sensible Software looking arrogant, with a majority of the responses along the lines of “we had to rush it out to cash in on the Christmas market” or abusing the original reader.

Working on a project related to the early ’90s racing game Formula One Grand Prix led me to read a few old reviews of the Amiga version on Amiga Magazine Rack for nostalgia’s sake. However, looking at the screenshots featured in the reviews, I noticed something out of place. When released, the game featured cars and tracks from the 1991 F1 season, but it was obvious from some screenshots that the version of the game that the magazines had reviewed actually included cars from the 1990 season. Also, some reviews featured images with a cockpit that was quite different from the one in the released version.

Since the dates of the reviews range from October 1991 to March 1992, and the game was released in January 1992, it seems safe to conclude that some magazines received versions that were considerably less complete than others.

I looked at nine different reviews (the ones that had scanned pages on Amiga Magazine Rack). One review was from October 1991, four of them from November, one from December, two from January 1992 and one from March. Of these nine reviews, only one featured the cars from the version that was actually released. Not surprisingly, it’s the March review in Amiga Action, appearing two months later than any other review.

We can start off with a major give-away: The Amiga Power review from November 1991 even features the words “based on statistics from the 1990 season”. So that makes it pretty obvious that the game was originally supposed to contain the 1990 season, and that the reason they changed it was that the whole of 1991 had actually passed.

Let’s look at some oddities in detail…

First of all, the cockpits, shown below for comparison. The one on the right is from the released version, and it contains much more information: speed, number of cars, laps, which driver aids are active, etc. The one on the left – the one featured in many of the reviews – seems barren by comparison, with just lap times and oil and fuel lights. But it does include a gear lever!

f1gp-cockpits

Seeing as the one on the right is actually more useful, it’s good to know that when it came to cockpits they didn’t remove features as time wore on, but actually added them.

Moving on, the review in Amiga Action from December 1991 features a multitude of weirdness. It has the correct instruments, but the cars are from 1990. However, it has in-game screenshots featuring drivers Pierluige [sic] Martini and Eric van de poele [sic]. That Pierluigi Martini is spelled incorrectly is almost forgivable. However, Eric van de Poele did not even race in F1 in 1990. And more interestingly: the released version of the game did not feature the names of any real F1 drivers or teams. These were supplied on a piece of paper in the game box, for the user to enter on their own.

There are lots of other examples in the reviews of cars from 1990 instead of 1991.

f1gp-benettons

On the left is an in-game screenshot of a car from one of the reviews. In the middle is an actual Benetton car from 1990, and on the right is one from 1991. Which of these does the in-game car resemble the most?

The Amiga Power review also mentions a feature that didn’t make it into the game: “Sparks fly from the cars just like the ones you see in the racing on TV”. Eh, no, the released game had no such feature.

A couple of the reviews also mention a feature where there is a TV presenter who talks about your exploits after each race.

f1gp-commentator

Sounds like it could have been an interesting feature, a shame that it didn’t make it to the final game.

A final note of disorientation: The released game has the colours of at least three teams wrong. First of all, the Tyrrell cars were dark grey in real life, whereas the game has them painted dark blue. A team that actually was dark blue, Lamborghini’s Modena Team, features the grey-blue-white colours of the single-car Coloni entry. The Coloni, on the other hand, is all-yellow with blue wings. No team in 1991 had this colour scheme, but Coloni ran an all-yellow car (with black wings) in – you guessed it – 1990.

There are other things – both in the reviews and in the finished game – that are a bit wacky, but there has to be a limit to the madness (and this blog post). All in all, much confusion, but all the more merriment.

I have always been a big fan of computer books. However, it is usually not books about a specific technology or tool that I find the most interesting, but books about the process and psychology of development.

In September last year I spent a few weeks on the West Coast of the USA, and one of the highlights was a visit to Powell’s Books in Portland, Oregon. There I had the pleasure of getting my hands on a first edition copy of Gerald Weinberg’s excellent “The Psychology of Computer Programming” from 1971. I have owned the Silver Anniversary edition from 1996 for over ten years, but getting hold of a first edition was too good an opportunity to let go to waste.

psychology

This is the book that first featured the term “egoless programming”, the concept of separating the coder from the code and the notion that having your code criticised does not mean that you are being criticised as a person. The book contains a lot of humorous anecdotes that bring the lessons to life, such as the one regarding “egoless programming” where the programmer Bill G. (yes, an amusing coincidence) feels that his code is ready for review, and asks his colleague Marilyn B. to review it.

In this particular instance, Bill had been having one of his “bad programming days.” As Marilyn worked and worked over the code – as she found one error after another – he became more and more amused, rather than more and more defensive as he might have done had he been trained as so many of our programmers are. Finally, he emerged from their conference announcing to the world the startling fact that Marilyn had been able to find seventeen bugs in only thirteen statements. He insisted on showing everyone who would listen how this had been possible. In fact, since the very exercise had proved to him that this was not his day for coding, he simply spent the rest of the day telling and retelling the episode in all its hilarious details.

Another thing about the book that stands out for me is its role as a historical document of how development was done “back in the days”, way before my time. The days of Fortran, COBOL, interactive terminals, keypunch operators, print-outs, etc. In the Silver Anniversary edition, which features comments on each chapter 25 years later, Weinberg states his envy of current – i.e. 1996 – tools and how they make him drool (even jokingly using the term “Drool Tools”). Today, another 17 years later, I find myself drooling over today’s tools compared to what we had in 1996. I don’t even want to think about 1971.

A previous owner has written his name inside the front cover; I believe it says Jim Campbell. And as a bonus, the book also included a “While you were out” card from OSECO, 3505 Elston Ave, Chicago. This is itself a fascinating piece of history; I suppose a previous owner of the book used it as a bookmark. Perhaps it was Jim.

while-you-were-out

I look forward to a Gold Edition of this book in 2021!

Introducing Patina

July 16, 2013

Patina is a tool for finding occurrences of byte patterns from one file within the data of another.

For instance, let’s say you have two files. The first file contains this:

12345

And the second file contains:

0123456789

Then a match is found for the sequence “12345”, since the second file contains it.

OK, so let’s consider another scenario. The first file contains:

12345

And the second file contains:

012346789

Notice the missing “5” in the second file. Obviously, a match will not be found for the full sequence “12345”. So what will Patina do? Well, it will find a match for “1234”.

Patina starts with the longest possible sequence from the first file and looks for it in the second file. So when no match is found for “12345”, the next sequence to check is the shorter “1234”, then “2345”, then the shorter still “123”, “234”, “345”, and so on.

However, since Patina has already found a match for “1234”, it will not also find a match for “123” or “234”. The purpose is to get the longest possible matches and ignore any smaller matches within the found match.
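To illustrate the search order, here is a rough sketch of the matching logic in C#. It is not the actual Patina implementation (that is on GitHub), just the idea boiled down:

using System;
using System.Collections.Generic;
using System.Linq;

// A rough sketch of the longest-match-first search order described above.
// Not the actual Patina implementation; see the GitHub repo for that.
public static class LongestMatchSketch
{
    public static List<(int Position, int Length)> FindMatches(byte[] first, byte[] second)
    {
        var matches = new List<(int Position, int Length)>();

        // Try the longest sequences first, then progressively shorter windows.
        for (var length = first.Length; length >= 1; length--)
        {
            for (var start = 0; start <= first.Length - length; start++)
            {
                // Skip ranges fully covered by an already-found longer match,
                // e.g. "123" and "234" inside an already-found "1234".
                var covered = matches.Any(m =>
                    start >= m.Position && start + length <= m.Position + m.Length);
                if (covered)
                    continue;

                if (IndexOf(second, first, start, length) >= 0)
                    matches.Add((start, length));
            }
        }

        return matches;
    }

    // Naive byte sequence search; good enough for a sketch.
    private static int IndexOf(byte[] haystack, byte[] needle, int needleStart, int needleLength)
    {
        for (var i = 0; i <= haystack.Length - needleLength; i++)
        {
            var matched = 0;
            while (matched < needleLength &&
                   haystack[i + matched] == needle[needleStart + matched])
                matched++;

            if (matched == needleLength)
                return i;
        }
        return -1;
    }
}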

Any matches that are found will just be output to the screen, in a format looking like this:

Data at position 0 (16 bytes) found in 2 place(s).

Of course, it’s pretty useless to just print the output to the screen, but I’ll get around to dumping it as XML or JSON eventually.

The source code is on GitHub: https://github.com/codemeyer/Patina

Since reading Phil Haack’s blog post about using a Fitbit step counter, I felt the need to try one myself. I always enjoy adding a bit of “geek” to any element of my life, and measuring the steps I take, the number of calories I burn and the number of stairs I climb seemed like just the thing. Especially since I spend my entire working day sitting at a desk. After a few months of procrastinating I finally got a Fitbit One at the beginning of April.

As mentioned, the Fitbit One measures the number of steps I take, and how many flights of stairs I climb. This data is then presented in a pleasing manner, either in the app (for iPhone or Android) or on their web site. You get colorful graphs such as the one below, which in this case displays the steps I’ve taken on a relatively active day this summer.

fitbit-graph

Recently, I extended my Fitbit family by purchasing a Fitbit Aria scale. It measures my weight and body fat as a percentage and then uploads the data to Fitbit via Wi-Fi. Even if the body fat percentage is less than scientifically accurate, I figure that as long as the discrepancy in its measurements stays constant, I will get an indication of the general “trend”, as it were.

Currently I’m averaging somewhere around 50% more steps per day than when I started measuring three months ago, so it’s obviously working! And since I recently convinced my wife to get one, I have added a new dimension of competitiveness to it all. It’s on now!