The TortoiseHg Commit screen: A Love Story

This week I spent an inordinate amount of time trying to get TortoiseHg – the Tortoise-branded interface to the Mercurial version control system – compiling and running on my Mac Mini so I could finally replace git as my VCS. My next post will be a howto for getting TortoiseHg running on Snow Leopard, but first: a reason.


I’ve tried a lot of version control systems in my 8 years as a programmer: none, CVS, SVN, SourceSafe, Vault, Git and Mercurial.  I’ve noticed over time that they’ve gotten less and less…hmm, how do I put this…annoying. Every source control system gets in the way somehow, that’s the trade-off: you have to break flow to commit stuff, but if you stuff up there’s a backup around. And that break of flow has (mostly) gotten less annoying as I’ve moved to new and improved systems. CVS was command-line, SVN had a GUI, Vault had a nice inline diff, Git is…okay, git is command line again. Technically it comes with a GUI, but in the same way you’d say that Halo: Reach comes with annoying douchebags online – it’s more of an annoyance than a feature. But the TortoiseHg commit screen is the best thing I’ve ever had for a versioning system, and one that finally fits into my workflow. Sorry, other version control commit windows: I have a NEW girlfriend now!

Like the polish on an indie game, it’s the small things that add up to make a big difference with the commit screen. The changed files are all shown on one side of the screen, and an inline diff is shown on the other, making it very quick to see what’s changed when I’m writing comments for the commit. I can easily switch that to ‘hunk selection’ and choose to commit only certain parts of a changed file, leaving the rest for the next commit. This is GOLD when I feel like I’ve missed a commit and want to make separate commits for the separate pieces of functionality I’ve just implemented. There’s easy access to any of the other tools from the menus (repo explorer, push and pull synchronisation etc.), and lots of ways to filter the list of changed files. And then I can right-click on a file and automatically add it to the .hgignore file, see its history, copy it, rename it, edit it. I truly love a commit screen that lets me make the little changes I need to get a commit right without having to go back into my project, stop whatever I was in the middle of and make the changes there.

But the most important thing is that the screen stays open. Nothing annoys me more than committing a file or two, but not all of them, and then having to go back through TortoiseSVN’s right-click menu or git’s command line to get the commit window back. And to commit, there’s a nice big button, but even better, Ctrl+Enter straight after typing a comment will do the commit for me.

Thanks to TortoiseHg, my version control really feels like a part of my workflow now. It’s not perfect (an ability to select more than one file at once is sorely missing), but it’s better than anything I’ve used before.

TortoiseHg commit window, you complete me!

Now get back in the kitchen and make me some pie.

Warming Up

I hung out with Chevy Ray of FlashPunk fame the other night, and he shared with me some programmer wisdom. Which I now share with you: warm up!!


I always make a point of spending enough time warming up before physical activities. Whether it’s going for a run, dancing, rock-climbing…anything physical, I always make sure to spend the first few minutes doing some light but similar exercises, stretches etc. It literally serves to warm the joints and muscles, which become more flexible and powerful a few degrees above body temperature, but it also serves to put me into a more self-forgiving frame of mind. I suppose it’s because warming up really drives home the point that I’m human, that I’m not perfect, and that it can take me a little while to get into the groove of something before I can do my best work.


What Chevy pointed out to me was that warm-ups can be made for mental activities as well. Some of the best teachers I know get their students into the right frame for class by throwing around some open brainstorming or opinion questions: mentally warming their students up for the subject at hand. And it’s just as useful to put this idea into practice for the dark art of computer programming. I don’t know about you, but being more aware that I’m human, not perfect, and need to get into the groove of something goes a long way towards helping out that pesky programmer’s block :)


So today as I start my programming, I’ll be taking Chevy’s advice: start on a few small things – change some colours around or tweak some variables, write a piece of code you’ve already written before, change some formatting or variable names. Just to warm up. Don’t get buried in changing little things; just get the mind thinking about programming before moving on to what needs to be done today.

They’re everywhere…

I’m pretty new to iPhone development.

I mean, I’ve only been using Objective-C as a language since January this year. Almost a decade of C++ experience before that certainly helped me pick up a few concepts, but I’m not about to go parading myself around as a guru anytime soon. In fact, I still end up turning to Google on a daily basis for insight on whatever issue I’m having with Cocoa Touch that hour.

Something I can’t help but take notice of is the number of accepted solutions out there that are bad. Not wrong. Almost every time I find a solution, it does indeed solve the problem at hand. But not elegantly.  Not efficiently. Not well written, not well named, not well tested. And really, just not good code. The examples that spring to mind include C++ code that marked int and float parameters as const in a member function (these integral types are copied onto the stack for the function call – they’re the same size as a pointer anyway.  Marking them const isn’t const-nazi power, it’s just redundant) and some Objective-C code that didn’t bother naming the second parameter of a method (Obj-C has a quirky style of naming every parameter after the first one, leading to highly descriptive method names like

middle:@"is" suffix:@"SPARTA!"

Not using this convention shows a deep misunderstanding of the language).
Any new or learning programmer that finds a solution of this sort and then tries to copy-paste said code into their project will be met with a rather unpleasant surprise. Their code instantly becomes less consistent, less standardised and less maintainable. Let’s be honest though – they deserve it.
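Back to that const example for a second, because the redundancy is easy to demonstrate (a contrived snippet of my own, not the offending code): top-level const on a by-value parameter doesn’t even change the function’s type.

```cpp
#include <type_traits>

// 'const' on a by-value int parameter only stops the function body from
// modifying its local copy; callers can't tell the difference, and the two
// declarations below declare the very same function.
void setVolume(const int level);
void setVolume(int level) { (void)level; }

// The compiler agrees: the parameter's top-level const is stripped from
// the function type itself.
static_assert(std::is_same<void(const int), void(int)>::value,
              "top-level const on a value parameter is ignored");
```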

Yes, I bet you thought this post would be a rant about how “solutions never work when I copy-paste them into my DetailViewController.m”. Not so, and shame on you sir for thinking so! For you see, I like it that Google’s proffered solutions don’t work like cookie-cutter magic. For me, as I expect it is with most people, if all I do is copy and paste then I don’t learn anything. If the code works without me having to do anything, then the next time I’m faced with a similar challenge my first thought isn’t going to be “I’ve solved this before”, it’s going to be “where was that website that solved this for me?”. And frankly, those people I’ve seen that do try to copy-paste the code and then comment asking for it to be slightly modified to suit their needs usually deserve whatever issues they bring upon themselves. And maybe it will eventually help them: I know from my years of programming that one of the best motivators for writing good code is having to maintain bad code in another system.

So bring on the bad solutions – seriously. As a blogger I’m usually tempted to try and make the code I release here robust, reusable and free of coupling issues. I don’t plan to stop that – I like to think that keeping these things in mind is indicative of where I’m at as a programmer; these are the worries I’ve earned and I’m not going to drop them so that someone else can learn from mistakes I shouldn’t have made. But I’m not going to spend extra time making that code as pretty and useful as possible: it’s up to the person who wants to use it to get in there and change every line and variable name they need to – and learn what the code actually does at the same time.

Distributed Version Control showdown

To quickly catch up those people who don’t keep up with the latest advances in programming technology: Distributed Version Control Systems (DVCS) are the latest craze, version control where every working copy is a repository in its own right. When I first started learning about DVCS that explanation didn’t quite convey the point of it, so I’ll put it this way: you can commit as often as you want without having to be connected to a server, and everyone has a copy of the data, so nothing is lost if the aircon stops working in the server room and the hard disks melt. These are good things :)

There are two major frontrunners in Distributed Versioning: git, lorded over by Linux creator Linus Torvalds to quell his hatred of SVN (and, according to him, give his name to another piece of software); and Mercurial, overseen by Matt Mackall as a way of thumbing his nose at the devs of the previously free BitKeeper.
Both are free and open source, originate from Linux platforms, and are traditionally command-line tools. Mercurial is the poor bastard that will take over from SVN once we have time to upgrade, while I use git quite a bit for my personal shi…projects, thanks to the pretty cool ProjectLocker.

For this article, I’m going to be comparing command-line hg & TortoiseHg – the Tortoise-branded GUI on top of Mercurial, with command-line git & the Git GUI included with msysgit on Windows. There is a Tortoise GUI for git as well, but to be frank it’s atrocious and I refuse to use it. Which I suppose leads me to the first point to compare: Interface.


On the command line, both are pretty much the same. They have almost identical commands for day-to-day use (status, log, commit, push, pull, update etc.). They both have quite good help on the usage of each command. Mercurial maybe gets a slight edge because it only requires typing two characters (hg) instead of three (git) before each command.

Graphically, git is a loser. It doesn’t feel particularly fair, because the msysgit GUI is supposed to be a usable fallback, not the preferred interface for git. Without any better options in sight though, I have to pick on it: the interface is clunky, there are keyboard shortcuts listed in the menu that just don’t work, and the hunk selection is really badly done. This last point wouldn’t be an issue, except for the fact that git has a ‘staging area’ that seems to be mainly for doing this kind of work. More on this later, though. The repository GUI is decent, but doesn’t seem to offer any decent interface for merging files, which is what I would mainly use it for.

The Mercurial GUI is much more intuitive. The commit dialog includes quite a good way to commit individual ‘hunks’ of code instead of a whole file, as well as being somewhat easier than git for adding new files, renamed files etc. It’s not without its shortfalls though: there’s no way to check/uncheck multiple files for a commit in a single batch – I have to go through and tick/untick every one. The Branch functionality confused me somewhat too, but once I had the hang of it I found it made sense. The repository GUI is quite good, and includes an interface for merging heads by updating to one head, then right-clicking on the second head. Both screens have useful links to the synchronisation GUI for pushing and pulling changes with other repositories.

Result: Git – Slightly burned French Fries floating in a Thickshake, Mercurial – A tasty chicken burger, with occasional gristle and annoyingly small seeds on the bun.

General Usage

I use a lot more features of Mercurial than git on a daily basis, because I work with multiple users in hg, but only by myself in git. So for now, I’ll limit ‘general usage’ to the edit-commit-pull-push cycle. There’s a fundamental difference between git and Mercurial when it comes to commits: git has a ‘staging area’ between the working copy and the repository where the commit can be set up – hunks or lines added/removed – and then finally committed from. In theory it seems like a neat idea – in practice it ends up being an additional command to have to type/click whenever I’m adding new files in a commit. If I make any mistakes, git will allow the last commit to be ‘amended’ with any new changes, which is useful for adding or changing files, but not for taking files back out if I’d accidentally added some. Pulling and pushing to synchronised servers works well, although the need to explicitly set the default remote by editing a text file is a turn-off.
Mercurial has no staging area, and instead commits directly to the repository. It’s quite possible and easy to select which parts of a file to commit in a particular change though, and it keeps me to a single command for everything. Any mistakes can be handled by ‘undoing the last commit’, which throws all the committed files back into the working directory – effectively emulating git’s behaviour while also making it easy to take files out of the commit. Pulling and pushing to other repositories is on par with git, but it’s a nice addition to have a GUI for setting this up.

Result: Mercurial – Teriyaki Chicken Sushi with Soy Sauce and Pickled Ginger, Git – Unexpectedly slimy Unagi with a tasty, if brief, Mochi dessert.

Advanced Use

Undoing/redoing multiple changesets, merging unrelated repositories, removing revisions etc. I’ll have to cover this one mainly in theory, as I’ve only done a little of it. Mercurial comes with a ‘Mercurial Queues’ extension to allow most of this to happen, but it’s not all that intuitive and the GUI for using it is somewhat limited. With TortoiseHg it seems to work well for stripping revisions, but more complex work than that requires quite a bit of faffing about.
Git, on the other hand, has the rebase command. Which I’ve never used. But judging purely by the forum posts and tutorials I’ve read, it seems to be the better of the two. The fact that someone took the effort to recreate the command as a Mercurial extension seems to hint at that too.
Result: Git – Three layers of jelly with red beans, Mercurial – Delicious, floury Daifuku.

Branching and Merging

The very point of DVCS (other than not getting in trouble for Friday-night ninja-commits that don’t actually compile) is to give many more ways to deal with and resolve divergent changesets. In other words: branching and merging. I can sum this up in a single sentence: git does it more securely, Mercurial does it more easily. As a programmer in a small team, easily is more important than securely for me – two programmers giving branches the same name is not much of an issue in a small team. Mercurial also has a feature called named branches that has no equivalent in git – letting users see which branch a changeset was made on, for any commit in the set.
Result: Mercurial – Muesli flakes with fresh raisins and strawberry yoghurt, Git – Coco Pops and Milk. And Diabetes.

The Verdict

Overall these are both really useful version control systems, each with their own advantages and disadvantages. For me though, Mercurial is the more delicious of the two: it combines a powerful command-line program with a GUI robust enough that I don’t need to use the command line most of the time.

Agile Musings

I’ve been writing my own code using Test Driven Development (TDD) for about 6 months now, and at the same time have found myself professionally working for two different non-games firms.  In this time I’ve had the opportunity to try using agile methodologies – and unit testing – in a few different environments: business through to casual.  I want to share some thoughts and musings from this time.

Firstly, tests are hard.  Plenty of people have covered this point in the blogosphere and beyond, but until I actually tried it just never truly sunk in how hard they are.  It’s one thing to be “informed” that “agile is difficult”, but it’s an altogether different experience when I’m staring blankly at my screen with absolutely no idea how to write a test for this functionality I want.  And those moments happen.  A lot.  And they are truly a struggle to get through, but I learn very important lessons once I figure them out.  And yes, it takes dedication and effort to figure them out.

Secondly, getting tests onto legacy code is even harder.  I have a secret obsession with Michael Feathers’ fantastic book Working Effectively with Legacy Code, which gives plentiful ideas for how to add tests to code that was never designed with tests in mind.  Armed with these suggestions, some experience writing brand new code covered by tests, and the right environment, one can get tests on just about anything.  In a casual environment, i.e. when I’m my own boss, I find it pretty easy to see the benefit of refactoring old code to make it testable (I’m easily convinced though) :).  Things get a bit iffy when the work is for someone else, though.  In my experience there are often factors preventing the system going under test that have absolutely nothing to do with the code.  People can be lazy, or busy, or scared of tests.  Unless the whole team makes a commitment to getting agile, it’s probably not a good idea to spend hours finding seams and getting code under test.
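To make “finding seams” concrete – this is a sketch with invented names, not an excerpt from the book – the classic move is to extract an interface at the point where the legacy code touches something hard to control, so a test can substitute a fake:

```cpp
#include <ctime>
#include <string>

// A seam: the legacy logic used to read the system clock directly.
// Extracting this interface lets tests control time. (Names are illustrative.)
class Clock {
public:
    virtual ~Clock() {}
    virtual int Hour() const = 0;
};

// Production implementation, still reading the real clock.
class SystemClock : public Clock {
public:
    int Hour() const {
        std::time_t now = std::time(0);
        return std::localtime(&now)->tm_hour;
    }
};

// Test double: completely under the test's control.
class FakeClock : public Clock {
public:
    explicit FakeClock(int hour) : hour_(hour) {}
    int Hour() const { return hour_; }
private:
    int hour_;
};

// The legacy logic, now testable because time comes in through the seam.
std::string Greeting(const Clock& clock) {
    return clock.Hour() < 12 ? "Good morning" : "Good afternoon";
}
```

A test can now pin the behaviour down without waiting for the afternoon: pass in a FakeClock and assert on the result.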

Finally, it’s worth it.  It’s damn worth it.  I’ve never written such beautiful or simple code as I do when I write tests first.  But code aesthetics is a nothing phrase to most managers and businessmen.  Luckily, along with writing nice code I find I write code that catches bugs more easily (at this stage each bug is usually a situation where I didn’t think to put a test case in), code that’s easier to maintain and understand (it’s easy to tell what something does and how it works when a test is written for it) and, my favourite, it increases my productivity (I make it a point to end each of my coding sessions on a failing test.  It’s lovely to just hit compile and see exactly what the next thing I need to do is.  Straight back into it!)

Half the battle is getting it working

Unit testing in Xcode.  Using Git with ProjectLocker (or at all).  Using Eclipse for C++ development.  These and many other things had been lingering on my todo list for far too long before they actually got done.  And basically this is because I tried them once, and it wasn’t a straightforward path to get them working.

When adopting a new tool, or trying out a new way of doing things, half the battle can turn out to be actually getting it working the way it’s supposed to.  Sometimes, such as when changing to Scrum or XP, learning 3D modelling, or trying to use any of Adobe’s top-shelf applications, this is because it’s hard to learn.  Old ideas or techniques or heuristics need to be thrown out and I have to try and learn from step one in a strange new world that looks eerily like the same computer I was just happily using Visual Studio on.

Some other times, however, it’s not so much a challenge as it is a bare-knuckles bar brawl with the ghost in the machine.  Unit testing with Xcode, for example, threw up a random “unknown error” message that Google was powerless against.  It could only be fixed by completely removing, reinstalling and then updating Xcode to the Snow Leopard-exclusive version.  Eclipse seemed to gag on its own shirt shortly after the installation procedure finished, and a tutorial that was almost 3 versions behind didn’t help.  Eventually a friendly guru helped me to set it up, and I now have a half-dozen *nix-based command-line tools installed on my system that I’m only slowly learning about, one at a time.

Git is a similar story.  Like Eclipse, it seems like a damn good idea in theory.  In practice, it requires setting up security keys, which requires ssh, which requires installing yet another half-dozen unknown *nix-based command-line tools into my Windows box, with blurred lines regarding where the Windows prompt ends and the *nix prompt begins.  In the end – and I thought I’d never say this – thank god I had a Mac.  With everything already set up in its distant-cousin-to-Unix OS, installing and setting up git finally became a “follow the tutorial” affair.  Which was all prompted by wanting the latest version of the fantastic AppSales Mobile.

Anyway, rambling aside, my message for this week is: sometimes trying something new is hard because it takes a refocus, and sometimes it’s just not as easy to set up as it should be.  If anyone reads this thing, feel free to leave your own comments about installs or set-ups that were way harder than they needed to be, and how you eventually conquered them.

Class Act Post-mortem

As promised, here’s the breakdown of what went right and what went wrong with Class Act: the game made by myself and three QANTM graduates during Game Jam Sydney.
Note that the entire game was conceptualised, designed and created over a very short (48-hour) time-frame; this alone makes it one of the most impressive projects I’ve been a part of. A lot of what I mention will be specific to how we tracked and used the time given to us, and may not be applicable to projects with less strictly enforced timeframes.

What went right:

The most fun section to answer!

  1. We finished it! Not every team involved in the jam managed to achieve that.
  2. We adopted an iterative development strategy. As the project came together we made an effort to force ourselves to find points where we could stop our work, play the latest version, then go into a separate room and decide what was missing and what was needed to make it a better, more complete game. It was sort of like planning sprints in Scrum, but without any formal management framework.
  3. Art. Our artist was experienced in 3D modelling and had never worked with pixel art or sprites at all. Which makes the graphics that he produced even more incredible. They were expertly crafted, pixel perfect, and had exactly the look and feel we were going for. Our artist single-handedly gave our game an 80’s reference, which was one of the requirements.

What went wrong:

Things to learn from

  1. Bad choice of tools. We ended up using what the majority of us were used to, which was Allegro: a 2D graphics library not far off from GDI. It did blitting and basic image loading and nothing else. Because of this we wasted a lot of time getting something on screen, and an epic amount of time writing our own movement, collision and animation code. We were trapped doing 2D because we weren’t all familiar with a single 3D framework and despite our artist doing some amazing pixel art, he would have been much happier making models. Because of the time taken to work with Allegro, we never even had time to polish.
  2. No polish. Yeah. Lack of time or code design for creating gameplay variables that could be tweaked to improve the gameplay. Little things like a visible view area for the teacher, or tweaking the reaction times of teachers/students, would have upped the fun level if we’d had the time or the ability to tweak them in-game and find the perfect values. Bringing along a reusable console class providing all that functionality would have made a lot of difference to the end product.
  3. Time management. For all of us this was our first game jam event, and most of us had issues with managing our sleep and our work. Each of us had different issues: for some the sleep was ineffective, for others there wasn’t enough and they were fading during the last minute cram at the end. I don’t think it’s a common thing for many developers to know the limits of their bodies, or be familiar with their sleep cycles etc.; it’s the kind of knowledge that proves useful in a weekend game challenge though.
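Speaking of point 2, the “reusable console class” needn’t be complicated. At minimum, something like this hypothetical tweak-variable registry (all names invented for illustration) would have let us hunt for the perfect values in-game:

```cpp
#include <map>
#include <string>

// Hypothetical tweak-variable registry: gameplay code reads named floats,
// and a debug console can overwrite them at runtime to tune the game live.
class TweakVars {
public:
    // Called by the in-game console, e.g. "set teacher_view_radius 120".
    void Set(const std::string& name, float value) { values_[name] = value; }

    // Gameplay code always goes through Get, so every value is tunable.
    float Get(const std::string& name, float fallback) const {
        std::map<std::string, float>::const_iterator it = values_.find(name);
        return it != values_.end() ? it->second : fallback;
    }

private:
    std::map<std::string, float> values_;
};
```

The teacher’s view area then becomes `vars.Get("teacher_view_radius", 100.0f)` instead of a hard-coded constant, and the console’s set command just calls `Set`.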

There was lots that went right, and lots of stuff to learn from, and I look forward to putting it all to good use as soon as the next Game Jam is announced!

For those interested in having a play, the files for Class Act can be downloaded from the Global Game Jam website here

Setting up cxxtest in Visual Studio 2008

Having decided on our testing system to use – cxxtest was the winner in the end – the next step was to set up a working template project structure so that new projects with tests included could be created quickly.

The examples for setting up cxxtest for Visual Studio included with the suite recommend using 3 different projects to get the tests running: one project to generate the tests, one to compile them, and one to run them.
This seemed like an inelegant solution to me, so I set about creating a better one that would hopefully be more cross-platform compatible (as our build server will be an OSX machine).

The first step is to relegate all the application code (except the main function) to a static library.  I’ve been told this is good practice for writing code under test anyway – separate the code that can be tested from the code that can’t by placing all the tested code into a separate library.  So far it sounds ideal, as there’s certainly code in the game library that I can’t put under test anyway (e.g. the Run() function).  It’s easy enough to create an empty static library project in Visual Studio.

The next step is to add the tests in.  Personally, when a project is small I like to keep the tests in the same project as the classes being tested.  That way the classes (ClassName.h, ClassName.cpp) are right next to the ClassName_Tests.h file.  So I include the test files in the static library project.  In a larger project, or if your personal preference is to keep the tests separate, it’s possible to put the test files into the Tests project instead.
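As a concrete (if made-up) example of that naming scheme, a ClassName_Tests.h sitting next to the class might look like this – in cxxtest, tests are just public methods whose names start with “test” on a class derived from CxxTest::TestSuite (ClassName, Count and Add are placeholders, not real code from the project):

```cpp
// ClassName_Tests.h -- picked up by the *_Tests.h wildcard when the
// test runner is generated.
#include <cxxtest/TestSuite.h>
#include "ClassName.h"

class ClassNameTests : public CxxTest::TestSuite
{
public:
    void testStartsEmpty()
    {
        ClassName object;
        TS_ASSERT_EQUALS(object.Count(), 0);
    }

    void testAddIncrementsCount()
    {
        ClassName object;
        object.Add(1);
        TS_ASSERT_EQUALS(object.Count(), 1);
    }
};
```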

Which brings us to the third step: a project to generate, compile and run the tests.  The project file itself starts as an empty console application, including the appropriate cxxtest directories.  Generating the tests is done using the testgen Python or Perl script included with cxxtest.  I run this as a command under pre-build events, easily translatable into a makefile when necessary:

python C:\cxxtest\cxxtestgen.py --runner=ParenPrinter --output=TestRunner.cpp ../*_Tests.h

I prefer to store the tests project in a subfolder (Tests) away from my main source code; you’ll need to modify the commands to fit your own directory structure.  The really cool bit is that you can use the wildcard to include all the headers ending with _Tests.h in your test runner.  Note: this essentially puts the code of every test into TestRunner.cpp.  It makes it quite a large file, but you should never have to modify it because it’s auto-generated.

Once the TestRunner.cpp has been generated, compiling the tests is pretty straightforward.  There’s no need to include your project’s source directory as an additional include, because cxxtest takes the relative location of the include files into account when generating the runner.

Finally, the tests are run as a Post-Build event with the command line

"$(TargetPath)"

This macro points directly to the TestRunner executable created by the compile step.

And that’s a simple but effective template for including and automatically generating and running cxxtest-based tests in a Visual Studio 2008 project.  You may also want to include a project that implements its own main() function and then instantiates and calls the code you create in the static library.  This App project is simple to create, but needs to ensure that it includes and links to the source and output of the static library.  Also, you need to make sure you set all your projects’ dependencies up so that they compile in the right order.


For those who want to have a poke around with the template I’ve created, you can download it here: (includes the App project and an easily renameable static lib project)

Choosing a testing system

The first step in starting to create software using Test-Driven Development is deciding how you want to write the tests.  Having decided that I wanted to give TDD a good ol’ fashioned college try for my next projects (still somewhat under wraps), I found myself faced with a plethora of testing suites for C++ that were – no offense to the authors and the effort and massive amount of time that obviously went into them all – not very good.

Being used to the simplicity of writing tests in C# using NUnit, I think I was a bit spoiled.  Having to derive classes from a test class, declare tests in a header then implement them in a source file, manually add tests to a runner, and implement my own main() function to create and run a test suite – and almost every testing system for C++ I looked at required me to do at least one of these – seemed like a lot more effort and room for mistakes than I wanted in my testing suite.

Noel Llopis gave an excellent run-down of available testing suites for C++ on his blog a while back – it’s a fantastic read and I recommend you look at it if you want a full discussion of the available options.  Having read the article myself I ended up needing to decide between only two suites: cxxtest, and UnitTest++.  The former ended up being the victor of the available suites in Noel’s blog – the latter was written by Noel to fill the absence of a suite he actually wanted to use.  Both implement the kind of easy creation of tests that I’m looking for.

They have their differences though: cxxtest uses Python (or Perl) to create the actual test code, parsing the header files defining the tests to create the implementation of the tests themselves.  This makes the tests easy to write (just a header file), but it requires having Python or Perl installed on the system and setting the project or makefile up to generate and run the tests.
UnitTest++ on the other hand uses macros: the tests are defined in a source file using the TEST(testName) macro, which contains enough magic to turn the test into legal C++ and automatically register it.  Organising tests into fixtures is a little different, as it’s designed to make it easy to create reusable variables without requiring accessors on them, which overall makes them quicker to use than the member variables in cxxtest.
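To make the difference concrete, here’s roughly what the same trivial test looks like in each – sketched from memory, with a made-up Stack class under test, so headers and details may differ between versions:

```cpp
// cxxtest: tests are methods in a header file; cxxtestgen parses this
// file and writes the real runner code for you.
#include <cxxtest/TestSuite.h>

class StackTests : public CxxTest::TestSuite
{
public:
    void testStartsEmpty()
    {
        Stack stack;                    // 'Stack' is a made-up class
        TS_ASSERT_EQUALS(stack.Count(), 0);
    }
};

// UnitTest++: the TEST macro alone defines AND registers the test.
#include <UnitTest++.h>

TEST(StackStartsEmpty)
{
    Stack stack;
    CHECK_EQUAL(0, stack.Count());
}
```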

In the end I went with cxxtest, despite the advantages offered by UnitTest++.  My main reasoning is that cxxtest has been seen working on an OSX build server (like ours will be), and as far as I know UnitTest++ may not have been.  If I’m proved wrong, I may end up switching.  But for now, we’re going with cxxtest.

Geosphere generation

In one of the projects I’ve been working on recently, I needed to programmatically generate a geosphere primitive.  I had some old almost-complete code for this purpose, but as I dug it up I discovered – like many of us do – that 3 years ago my coding style was horrible, and the code itself is cryptic.

So instead of modifying my own code, I went online to try to find the solution I would have based the code upon (and then not placed in the comments *smack*).  But I had little luck: all my Google searches for geosphere algorithm, geodesic sphere generation and the like didn’t find me what I wanted.  So I eventually decrypted my own code, and I’ll post it here for fellow lost travellers.  I believe most of this comes from Paul Bourke, but I’m not sure of the origin of the icosahedron code.

The basic idea of a geosphere is quite simple: take a 3D primitive, subdivide each of its faces over a number of repetitions to increase the resolution, then push every vertex out to the radius of the sphere you’re trying to approximate.  Almost any 3D primitive is good for the job, but not all will ensure the resulting faces are of equal size.  I believe the best two primitives to use are the Octahedron (8-sided) and the Icosahedron (20-sided).  While they won’t result in exactly equal faces, the difference is so small it’s never really noticeable.  So, on to the code:
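(A quick aside before the real thing: the subdivide-and-project loop is compact enough to sketch self-contained.  This is C++ with invented names and an octahedron base, purely to illustrate the algorithm – it is not the code from my old project.)

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };

// Push a vertex out (or in) to the radius of the sphere being approximated.
static Vec3 project(Vec3 v, float radius) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{ v.x / len * radius, v.y / len * radius, v.z / len * radius };
}

static Vec3 midpoint(Vec3 p, Vec3 q) {
    return Vec3{ (p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f, (p.z + q.z) * 0.5f };
}

// One pass: split every triangle into four via its edge midpoints, then
// project all resulting vertices onto the sphere.
static std::vector<Tri> subdivide(const std::vector<Tri>& faces, float radius) {
    std::vector<Tri> out;
    for (const Tri& t : faces) {
        Vec3 a = project(t.a, radius), b = project(t.b, radius), c = project(t.c, radius);
        Vec3 ab = project(midpoint(t.a, t.b), radius);
        Vec3 bc = project(midpoint(t.b, t.c), radius);
        Vec3 ca = project(midpoint(t.c, t.a), radius);
        out.push_back(Tri{ a, ab, ca });
        out.push_back(Tri{ ab, b, bc });
        out.push_back(Tri{ ca, bc, c });
        out.push_back(Tri{ ab, bc, ca });
    }
    return out;
}

std::vector<Tri> geosphere(float radius, int depth) {
    // Octahedron base: six axis-aligned unit vertices give eight equal faces.
    Vec3 px{ 1, 0, 0 }, nx{ -1, 0, 0 }, py{ 0, 1, 0 },
         ny{ 0, -1, 0 }, pz{ 0, 0, 1 }, nz{ 0, 0, -1 };
    std::vector<Tri> faces = {
        Tri{ py, pz, px }, Tri{ py, px, nz }, Tri{ py, nz, nx }, Tri{ py, nx, pz },
        Tri{ ny, px, pz }, Tri{ ny, nz, px }, Tri{ ny, nx, nz }, Tri{ ny, pz, nx },
    };
    for (int i = 0; i < depth; ++i)
        faces = subdivide(faces, radius);
    return faces;
}
```

Three passes turn the octahedron’s 8 faces into 8 × 4³ = 512 triangles, every vertex sitting on the sphere.  And now the actual code, decrypted from the old project: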

public Geosphere(float radius, Vector3 position, int depth)
{
    Position = position; //store the centre for later use
    GenerateIcosohedron();
    SubdivideToDepth(depth);
    PushVerticesOutToRadius(radius);
}

This is all pretty straightforward.  Let’s look closer at each method:

private void GenerateIcosohedron()
{
    //Calculate variables for algorithm
    float piOnFive = (float)Math.PI / 5.0f;
    float piOnTen = (float)Math.PI / 10.0f;

    float unitRadius = 1.0f;

    float insideExtent = (float)Math.Cos(piOnFive);
    float sideLength = 2 * (float)Math.Sin(piOnFive);
    float Cx = (float)Math.Cos(piOnTen);
    float Cz = (float)Math.Sin(piOnTen);
    float H1 = (float)Math.Sqrt(sideLength * sideLength - unitRadius);
    float H2 = (float)Math.Sqrt((insideExtent + unitRadius) * (insideExtent + unitRadius) - insideExtent * insideExtent);
    float Y2 = 0.5f * (H2 - H1);
    float Y1 = Y2 + H1;
    float r = unitRadius;
    float s = sideLength;
    float h = insideExtent;

    //create the icosahedron
    Vertices = new List<Vector3>();
    Vertices.Add(new Vector3(0, Y1, 0)); //a
    Vertices.Add(new Vector3(0, Y2, r)); //b
    Vertices.Add(new Vector3(Cx, Y2, Cz)); //c
    Vertices.Add(new Vector3(0.5f * s, Y2, -h)); //d
    Vertices.Add(new Vector3(-0.5f * s, Y2, -h)); //e
    Vertices.Add(new Vector3(-Cx, Y2, Cz)); //f
    Vertices.Add(new Vector3(0, -Y2, -r)); //g
    Vertices.Add(new Vector3(-Cx, -Y2, -Cz)); //h
    Vertices.Add(new Vector3(-0.5f * s, -Y2, h)); //i
    Vertices.Add(new Vector3(0.5f * s, -Y2, h)); //j
    Vertices.Add(new Vector3(Cx, -Y2, -Cz)); //k
    Vertices.Add(new Vector3(0, -Y1, 0)); //l

    //create the indices list
    _IndexTriangles = new List<IndexTriangle>();
    _IndexTriangles.Add(new IndexTriangle(0, 1, 2));
    _IndexTriangles.Add(new IndexTriangle(0, 2, 3));
    _IndexTriangles.Add(new IndexTriangle(0, 3, 4));
    _IndexTriangles.Add(new IndexTriangle(0, 4, 5));
    _IndexTriangles.Add(new IndexTriangle(0, 5, 1));
    _IndexTriangles.Add(new IndexTriangle(1, 8, 9));
    _IndexTriangles.Add(new IndexTriangle(9, 2, 1));
    _IndexTriangles.Add(new IndexTriangle(2, 9, 10));
    _IndexTriangles.Add(new IndexTriangle(10, 3, 2));
    _IndexTriangles.Add(new IndexTriangle(3, 10, 6));
    _IndexTriangles.Add(new IndexTriangle(6, 4, 3));
    _IndexTriangles.Add(new IndexTriangle(4, 6, 7));
    _IndexTriangles.Add(new IndexTriangle(7, 5, 4));
    _IndexTriangles.Add(new IndexTriangle(5, 7, 8));
    _IndexTriangles.Add(new IndexTriangle(8, 1, 5));
    _IndexTriangles.Add(new IndexTriangle(11, 6, 10));
    _IndexTriangles.Add(new IndexTriangle(11, 10, 9));
    _IndexTriangles.Add(new IndexTriangle(11, 9, 8));
    _IndexTriangles.Add(new IndexTriangle(11, 8, 7));
    _IndexTriangles.Add(new IndexTriangle(11, 7, 6));
}

I wish I knew where I got this code from.  I believe the vertex generation stuff is from Paul Bourke, while I calculated the indices myself.  Note that I'm using an IndexTriangle struct to store the indices, which makes it easier to subdivide the triangles in the next step.  I'm also assuming the icosahedron has a unit radius (1.0f), because I'll be pushing the vertices out to the real radius later on.

private struct IndexTriangle
{
    public int a;
    public int b;
    public int c;

    public IndexTriangle(int a, int b, int c)
    {
        this.a = a;
        this.b = b;
        this.c = c;
    }
}

The next step is to subdivide each triangle and repeat to a desired depth.  The subdivision I’m using splits each triangle into 4 smaller ones like so:


private void SubdivideToDepth(int depth)
{
    for (int i = 0; i < depth; ++i)
    {
        List<IndexTriangle> newIndexTriangles = new List<IndexTriangle>();
        foreach (IndexTriangle indexTriangle in _IndexTriangles)
        {
            //Midpoints of the triangle's three edges
            Vector3 newVectorOne = Vector3.Lerp(Vertices[indexTriangle.a], Vertices[indexTriangle.b], 0.5f);
            Vector3 newVectorTwo = Vector3.Lerp(Vertices[indexTriangle.b], Vertices[indexTriangle.c], 0.5f);
            Vector3 newVectorThree = Vector3.Lerp(Vertices[indexTriangle.c], Vertices[indexTriangle.a], 0.5f);

            //Add each midpoint once; IndexOf below finds it again if a
            //neighbouring triangle sharing the edge has already added it
            if (!Vertices.Contains(newVectorOne)) Vertices.Add(newVectorOne);
            if (!Vertices.Contains(newVectorTwo)) Vertices.Add(newVectorTwo);
            if (!Vertices.Contains(newVectorThree)) Vertices.Add(newVectorThree);

            //Corner triangle at vertex a (reuses the original a index)
            IndexTriangle newTriOne = indexTriangle;
            newTriOne.b = Vertices.IndexOf(newVectorOne);
            newTriOne.c = Vertices.IndexOf(newVectorThree);
            newIndexTriangles.Add(newTriOne);
            //Corner triangles at vertices b and c, then the centre triangle
            newIndexTriangles.Add(new IndexTriangle(Vertices.IndexOf(newVectorOne),
                indexTriangle.b, Vertices.IndexOf(newVectorTwo)));
            newIndexTriangles.Add(new IndexTriangle(Vertices.IndexOf(newVectorTwo),
                indexTriangle.c, Vertices.IndexOf(newVectorThree)));
            newIndexTriangles.Add(new IndexTriangle(Vertices.IndexOf(newVectorOne),
                Vertices.IndexOf(newVectorTwo), Vertices.IndexOf(newVectorThree)));
        }
        _IndexTriangles = newIndexTriangles;
    }
}

And finally, push all the created vertices out to the radius, so we end up with the approximation of a sphere:

private void PushVerticesOutToRadius(float radius)
{
    List<Vector3> newVertices = new List<Vector3>();
    foreach (Vector3 vertex in Vertices)
    {
        //Current distance of the vertex from the origin
        float rootrad = (float)Math.Sqrt(vertex.X * vertex.X +
                                         vertex.Y * vertex.Y +
                                         vertex.Z * vertex.Z);
        //Scale the vertex so it sits exactly on the sphere's surface
        newVertices.Add(new Vector3(vertex.X * (radius / rootrad),
                                    vertex.Y * (radius / rootrad),
                                    vertex.Z * (radius / rootrad)));
    }
    Vertices = newVertices;
}

The code is looking pretty ugly posted up at the moment; hopefully I'll have time to tidy it up tonight.