I haven’t updated here for a while – turns out working on two games has been eating up a huge amount of my time (go figure), but there’s something I’d like to talk about.

Recently a certain level of animosity towards women has been made apparent in the gaming world. There was an entirely unnecessary rage against BioWare writer Jennifer Hepler, an attention-seeking bully on the fighting game reality show Cross Assault, and now a torrent of what I can only describe as beyond vile harassment heaped upon Anita Sarkeesian of Feminist Frequency for proposing a series of videos researching the ways in which women have traditionally been portrayed in video games. Before I say any more: these videos are a really, really good idea and I can’t stress enough that you should donate to the Kickstarter to make them happen. Done? Okay, let’s get started.

 

First, an apology. This behaviour is happening in my industry, and coming from my audience. It is not okay, it is despicable, and I’m ashamed I didn’t know how bad the problem was until now. Anita’s reaction impresses me and sets a fantastic example of how to deal with this sort of behaviour. Having been subject to bullying and harassment myself – as I’m almost certain many people both working in the games industry and enjoying games today have been at some point in their lives – I’m well aware that the last thing I want to do is tell someone else that it’s been happening. It’s an intense feeling of powerlessness, and it often feels like telling someone else about it would somehow make it concrete and intolerable. However, that is and always will be the solution to this behaviour: make it public, ideally while it’s happening. Anita has done that, and made many people (myself included) aware of what a huge issue we’re dealing with here. Now that we know, we can help.

 

Next I want to offer something of an explanation. If we’re going to find a way to deal with this perception in our industry, it’s important to understand where it comes from. I’ve heard a lot of suggestions that it’s somehow related to the growth of games in popularity, i.e. “now the arseholes are playing games too”. In small part this is true – but there’s a pre-existing culture that has attracted said arseholes, and I think that comes from somewhere else: games were made by nerds, and nerds don’t have skills with women. Yes, this is a massive generalisation. Yes, in large part they are still made by nerds. I’m a nerd myself, and although I’m quite comfortable these days with my ability to attract women, that wasn’t always the case. I’ve had a long and frustrating history with women during my formative years: heartless rejection, unrequited love and just plain inexperience played a starring role in turning my interests towards computers, programming languages and virtual worlds. My unpleasant experiences weren’t limited to the fairer sex of course, but at 16 that did tend to be where my mind focused. All of these experiences ended up pushing me to where I am today – where my friends, job and ladies make me feel like the happiest duck in the world – but I can’t deny that I went through a period where my perspective had a flavour of misogyny, and whether I wanted it to or not I expect it would have come through in the things I made.

See, to me misogyny means any level of poor representation of women. Boob-heavy characters in Soul Calibur, or that new Catwoman comic cover, are certainly examples. But a complete lack of representation counts as pretty poor on the scale too – something I notice a lot in early games. Sure, we were dealing in pixels; our characters were spaceships or hedgehogs because they were easier to believe when moved and animated so unrealistically. We were dealing in metaphors because we couldn’t get high-def enough to try and simulate something real. I get that. But all our metaphors either lacked women or had them as objects or, worse, dicking the player around (Princess Peach continuously screwing over Mario, anyone?). Our metaphors said either “women don’t matter” or “women will hurt you”. And given the age that many developers were when making these games, given my own experiences around that age and given how similar those experiences are to those of so many other nerds at that stage of their lives, I understand where those messages come from. I don’t think they were good, but it wasn’t about sending a message – it was about finding our place and our people (nerds are some of the most social people I know when you get them around other nerds!). But these messages formed part of the foundation on which the games industry has been built.

Many of us nerds have since met wonderful women who changed our lives, men who showed us how to really be a man, or shared other experiences to become more self-aware and to change our perspectives on these things. We realise the errors of our ways and move on, happy in our new maturity and assuming everyone else is doing the same. Until someone like Anita speaks up and it suddenly becomes apparent that some people have been carrying the wrong torch for a very long distance now. E3 this year was a big hint along those lines too – with innovation turning into a mainstay of the now flourishing indie community it became apparent that the vast majority of upcoming AAA titles are about men shooting other men, or punching sexy nuns in the face. These are our mass-marketed games. It’s worrying, and we need to do something about it.

 

And this is where I’d like to talk about indie games, because this is where I think the solution comes from. This is where those wonderful nerds like myself who turned our experiences around, or new and incredible nerds who somehow grew up unaffected, can work their magic. What magic, you ask? Why, we get to revisit those metaphors. A retro-feeling pixel-art platformer that teaches players to view their problem from different perspectives? Interesting! A classic-style sudden-death platformer where the main character metaphorically needs the girl (and her bandages) as his motivation for saving her? Curious! How about this: a re-imagining of Mario where time is inconstant, questions are asked that may not have answers, the same solution doesn’t always apply to the same problem, and finally the player is left wondering whether they saved the princess or were the monster all along?

The metaphors that abound in many indie games today are much more mature than they ever were in early games. The views are healthier, more open and far more self-aware; and the greatest part is that once again we aren’t doing it intentionally. Just as I imagine many misogynistic views accidentally came through in early games, healthy and mature views are accidentally coming through in today’s indie games. Certainly I don’t pretend this generalises over all indie games – even the studio I work for doesn’t make games that I would describe as having a good representation of women – but I optimistically see it as an emerging trend that will, as steadily larger audiences flock towards the innovation in the indie games scene, become far more common and widespread throughout our audiences as well.

 

As an industry, we may have accidentally given our audiences the wrong message about women. For that, for the games industry, and for the people who received those messages, I sincerely apologise. Indie games are where we have a chance to do it again, and give a more mature message. And maybe, just maybe, it’s where the people who are part of this tide of hatred can learn some other ways to deal with their own lives and experiences.

 | Posted by | Categories: Blog |

Dropbox + Mercurial

9 January 2012

This is a summary of my personal version control system that I told a friend I’d write up a while ago. It’s basically a mutant combination of Mercurial and Dropbox. Each of these systems is somewhat specialised to be very good at approximately half of what I really want out of a versioning system, so the combination works out fairly well. There are some issues though, which I cover towards the end of the post.

 

What I like to have stored in version control pretty much comes down to this: it should be possible to build or run the game straight after checking out. So I like to version all of my source code and projects, along with all the required game assets, the latest build, and – if an artist is working on the project – any of the files they need to have on hand as well.

 

Mercurial is very close to my ideal version control. It’s distributed, which means I can check in changes with or without connection to a server, and easily branch and merge when I want to try out new things. Mercurial deals incredibly well with text files, and is able to deal with merges even when lines get moved or altered. This makes it perfect for versioning source code. Unfortunately it deals pretty badly with binary files like textures, compiled libraries, or executables.

 

Dropbox on the other hand, is ideally suited for binary files. It doesn’t track chunks of changes like Mercurial does, but instead just stores the last few versions of a specific file in full. It will keep track of any changed versions of a file for up to 30 days, although there’s an add-on you can buy which will keep track of them forever. For things like textures, libraries, executables or any other binary file, this is pretty much perfect. So to create the kind of versioning system I want, I create a Mercurial repository inside a Dropbox folder, giving me the best of both worlds. Dropbox will automatically synchronise both the binaries and the repository itself.

 

I imagine this would work perfectly for a single-user project, or even a single-programmer + artist project. I always prefer to have a continuous integration system running as well though. So that means keeping a separate working repository of unsafe/uncompiling code – where I do most of my work – and a repository of checked-in code – which is the code used to create the latest build. This is easy enough: use the Dropbox folder as a remote repository and clone it to a separate folder on my local hard drive. Now I can check in half-working code as much as I want, and when I’m ready to trigger an automated build I push my changes into the Dropbox folder. The Dropbox folder on my working machine gets synchronised with the Dropbox folder on the build machine, the build machine notices changes in the repo and triggers a build. I keep binary files in certain subfolders of the Dropbox folder and set Mercurial up to ignore them. Then in my working folder I create symlinks to the Dropbox subfolders, letting me access them in build scripts etc. as though they were actually subfolders in my working copy.
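To make the layout concrete, here’s a minimal sketch in shell. All paths are invented stand-ins for the real Dropbox and working folders, and the actual hg commands are left commented out so the sketch doesn’t depend on Mercurial being installed:

```shell
#!/bin/sh
# Sketch of the layout described above. Paths are illustrative only.
DROPBOX=/tmp/Dropbox/MyGame   # stand-in for the real ~/Dropbox/MyGame
WORK=/tmp/work/MyGame         # working copy on the local disk

mkdir -p "$DROPBOX/assets" "$DROPBOX/bin" "$WORK"

# Keep the Dropbox-versioned binary folders out of Mercurial
cat > "$DROPBOX/.hgignore" <<'EOF'
syntax: glob
assets/**
bin/**
EOF

# hg init "$DROPBOX"              # the repo that lives inside Dropbox
# hg clone "$DROPBOX" "$WORK"     # the working repo on the local disk

# Symlink the binary folders so build scripts can treat them as local
rm -f "$WORK/assets" "$WORK/bin"
ln -s "$DROPBOX/assets" "$WORK/assets"
ln -s "$DROPBOX/bin" "$WORK/bin"
echo "layout ready"
```

The same idea works on Windows with mklink instead of ln.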

My working copy ends up looking like this: [screenshot: Working folder]

While the Dropbox folder looks like this: [screenshot: Dropbox folder]

 

For simplicity, I wrote build scripts (on both Windows and OSX) to create the symlinks as needed and to build, run tests and copy the latest executable into the bin folder. You can find some example scripts here.

 

Overall, it’s a decent system. The only issue I have with it is that sometimes I find builds getting triggered prematurely. In order to trigger a build, the following chain of syncs needs to happen:

Working folder —(hg)—> My Dropbox folder —(dropbox)—> Build Dropbox Folder

This means the Mercurial repo is actually being synced by Dropbox, which can result in the repo thinking it’s changed and triggering a build while the Dropbox sync is only half completed. Mercurial does not deal well with half-modified repositories. It’s generally not a massive issue, and usually just results in a single build failing. I minimise it by checking the repo for modifications less often, but if it’s too problematic it would be possible to store the hg repo on a server like BitBucket and update the Dropbox repo every build.
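One way to reduce those premature triggers, sketched below with an invented path and quiet period: fingerprint the repo twice a few seconds apart, and only build once the two fingerprints match – i.e. once Dropbox has stopped touching files.

```shell
#!/bin/sh
# Debounce sketch for the build machine: only build if the repo has been
# quiet for QUIET_SECS seconds, so a half-finished Dropbox sync doesn't
# trigger a build against a half-modified repository. Path is made up.
REPO="${1:-/tmp/Dropbox/MyGame}"
QUIET_SECS="${2:-2}"

fingerprint() {
    # File names and sizes, hashed into a single line
    find "$REPO" -type f -exec wc -c {} + 2>/dev/null | cksum
}

before=$(fingerprint)
sleep "$QUIET_SECS"
after=$(fingerprint)

if [ "$before" = "$after" ]; then
    echo "repo quiet: safe to build"
else
    echo "repo still syncing: skipping this build"
fi
```

In a real setup this check would wrap whatever command kicks off the continuous integration build.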

 

Hope this post is helpful for someone.


About Time

6 September 2011

Time manipulation is a common theme in games. Ever since The Matrix popularised the slow-mo dive (and let’s be honest, John Woo had already beaten them to that idea), games abound that slow down, pause, speed up or otherwise modify time within the game.

 

The main ones that come to mind for me are those like Max Payne or Braid, which use time as a specific game mechanic. But almost every other game – especially action games – controls time to accentuate actions: after a crippling punch to an enemy in Deus Ex: Human Revolution, after the killing stab in Assassin’s Creed, after a Spider-sense avoidance move in…well…any Spider-Man game ever. Slowing the action down just for a moment can give the player a real sense of the damage they’re causing, or the misfortune they’ve just avoided. It gives them an extended moment to savour some part of the gameplay. Time manipulation used as a spice like this is something I’m very much a fan of, and it really creates a feel of sublime polish in a game.

 

On the other hand, there’s a tendency to overuse time control like this. Especially in art-centric games – like any made by the fantastic studio I’m working for – there is often a desire to “slow down, truck in, and play a cool animation”. It’s easy to think that a slow-down will be more enjoyable if the player can get a better and longer view of the action. And when the animations to be shown are as awesome as the ones the artists at Klei tend to come up with, it’s tempting to want to get a better look at them.

 

But in my opinion, this no longer pays any regard to the fact that the game is interactive. It always annoys me when cutscenes take away control of my character, but their change of view and fullscreen nature at least give me the hint that I can use this time to mash the keys trying to find the skip button. But when I can see my avatar and I can’t move him, it frustrates me beyond all belief. And yes, being stuck in a slow-motion, zoomed-in animated sequence has taken away my control. Controlling time to show things off works best when used sparingly, and even then only very briefly. Deus Ex or AssCreed don’t slow down an entire animation – just certain bits, for almost imperceptible amounts of time, to accentuate actions.

 

However, the real reason I started writing this blog post is to praise another game that handles time control incredibly well: creating the ‘cool factor’ of big hits and slow-down without getting in the way of the player. That game is Batman: Arkham Asylum.

It’s easy to overlook at first, but during combat the slow-motion happens before any hits are landed. And it’s related to how close the goons are to hitting Batman first. If they haven’t even considered throwing a punch, then no slow-down happens at all and they get a full-speed knuckle sandwich. But if both the goon and Batman are just about to land a hit on each other at the same time, the game slows down until I’m expecting “Eye of the Tiger” to start playing. And as a player, that feels amazing!

 

The way that the slow-motion is affected not by my actions, or the animations that are playing, but by the context in which I perform my actions and animations, just feels right, and it allows players to craft wildly different experiences and playing styles. Watching one of my friends play, I realised he was somewhat more of a guerrilla fighter, throwing off a single hit before moving on to the next goon – his fights seemed so fast to me; under his guidance Batman flew from baddy to baddy, only slowing down occasionally to knock them out, or when the slow-motion would kick in by chance. What a difference from my own style: trying to get as many hits in on one guy before I was forced to split my attention to deal with an incoming punch, a style that gave me a lot of slow-motion moments where I would sit on the edge of my seat, hoping I had thrown the punch fast enough to stop the goon from hitting me. And when I realised how much variation the developers at Rocksteady had managed to add with such a simple but well-placed use of time manipulation, suffice to say I was impressed.

 

How do you use time in your games? Is it a garnish, an annoyance, a selling point, or a core gameplay element? What other games have time mechanics that you find interesting? Say something in the comments.


Vector initialisation

25 May 2011

Oftentimes while doing this crazy business we refer to as “programming”, I find myself wanting to create lists of stuff. Not just any lists of stuff though: lists that I can set in code that never need to change. A static array of stuff, if you will.

Luckily, I live in the day and age where C++ happens to be backwards-compatible with C, which has a way of setting an array to just such a list of stuff: static array initialisation.

int array[] = 
{
	1,
	2,
	3,
	4,
	56,
};

And that’s all well and good if what I want to use is an array of stuff. But what if what I really want to use is a solution grounded in the standard C++ libraries? What if I want a vector that starts out with certain contents that I can then manipulate as my twisted and puerile mind sees fit? What if I want to use C++ classes that are more than just structs or Plain Old Data Structures (PODS)? Why, then I would need a way to initialise a vector! What’s that? There isn’t one?

 

Okay, I speak too soon. C++ 0x has/will have one. I never know whether to speak in the future or present tense with that one, as it’s technically not really ratified yet but a lot of people are getting sick of waiting. Anyways, with C++ more-shiny-version the syntax is simple and delicious:

 

#include <initializer_list>

//declare a class that takes an initializer list in a constructor
class ShinyClass
{
public:
    ShinyClass(std::initializer_list<int> list) { /* store the list */ }
};

//use the list
int main()
{
    ShinyClass array = 
    {
        1,
        2,
        3,
        44,
        56,
    };
    return 0;
}

 

 

However, for those poor souls who, like myself, are stuck with the version of C++ that actually has an official standard and universal compiler support, this snippet is nothing more than a tease. Generally, the closest we can get is to first initialise an array and then copy the contents in to a vector like so:

const int array[] = {1, 2, 33, 46, 55};
const std::vector<int> vec(array, array + sizeof(array)/sizeof(array[0]));

This is quick, but it doesn’t really work with anything that isn’t a struct or a PODS, and doesn’t play all that nice with constructors. There’s a Boost alternative, Boost.Assign, that can be used for initializing vectors. Frankly though, Boost just adds so much compile time and bloat to anything that uses it that I’m not even going to show a snippet of it. Feel free to look it up yourself if you’re interested.

 

The final solution is one which I rolled myself after seeing a similar (but more evil) one on a Stack Overflow question.

#include <vector>

template <typename T>
class vector_init
{
public:
    vector_init(const T& val)
    {
        vec.push_back(val);
    }
    inline vector_init& operator()(const T& val)
    {
        vec.push_back(val);
        return *this;
    }
    inline std::vector<T> end()
    {
        return vec;
    }
private:
    std::vector<T> vec;
};

 

Use it like so:

std::vector<int> testVec = vector_init<int>(1)(2)(3)(4)(5).end();

It won’t relieve all your initialisation worries, but it will certainly help with passing pre-filled vectors into functions and with filling vectors with pre-calculated content. Enjoy!


Tricky Vector3

8 May 2011

Anyone who’s done any work in 3D is familiar with some sort of Vector class. I’m not talking about STL’s std::vector, which is used more as a list; no, I’m talking about specialised maths classes for the purpose of doing linear algebra: Vector2, Vector3, Vector4 and their matrix counterparts Matrix2x3, Matrix3x3 and Matrix4x4. Vector3 and Vector4 I tend to use for colours as well, where x, y, z, w maps to r, g, b, a. But I’ve always wanted my Vector classes to be able to use either of those sets of names, and to be able to use the Vector as an array of values.

i.e.:

Vector3 position;
position.X = 1.0f;

Vector3 color;
color.R = 1.0f;

Vector3 something;
something[0] = 1.0f;

are all valid code.

 

The way I’d been doing it before was using an anonymous union and an anonymous struct:

 

struct Vector3
{
    union
    {
        struct {float X; float Y; float Z;};
        struct {float R; float G; float B;};
        float array[3];
    };

    const float& operator[](size_t i) const { return array[i]; }
    float& operator[](size_t i) { return array[i]; }
};

And this did what I needed it to, but with an annoying side effect: I kept getting compiler warnings about my anonymous structs.

 

Anonymous structs are not a part of the C++ standard. The reason why this is so evades me, but it seems the popular compilers – Visual Studio and gcc – implemented extensions for supporting them a while ago. However, they still throw that warning at me, and I’m a “warnings as errors” kind of guy, so I determined to find a fix for this. Eventually.

 

Recently I stumbled upon a post on the GameDev forums which described how to do such a thing, and after staring at the code until it felt like my eyes were bleeding I came to understand it. So here is a standards-compliant Vector class:

 

#include <cstddef>

struct Vector3
{
private:
    typedef float Vector3::* const locations[3];
    static const locations v;

public:
    union { float X; float R; };
    union { float Y; float G; };
    union { float Z; float B; };

    const float& operator[](size_t i) const { return this->*v[i]; }
    float& operator[](size_t i) { return this->*v[i]; }
};

const Vector3::locations Vector3::v = { &Vector3::X, &Vector3::Y, &Vector3::Z };

 

Confused? I know I was. Here’s an explanation:

The Vector3::* type is a special type called a pointer-to-member. I find it useful to think of it as an offset into any object of type Vector3 (e.g. the value for v[0] is found a certain number of bytes from the start of the ‘this’ pointer – although that’s apparently not technically true). The pointer-to-member operator ->* is used to access the value of the pointer-to-member in a specific instance of the class.

So in the Vector3 class there’s a constant static list of 3 pointer-to-members that let us overload the [] operator and treat X, Y and Z as if they were an array. And the great thing is, there’s almost no memory overhead for it: a good compiler will recognise the constant locations and optimise the pointers away completely.

 

The anonymous unions allow you to refer to each component by either name (both members being the same type removes any weirdness that might happen from assigning to one and then reading the other), and since C++ doesn’t allow compilers to re-order members with the same access level – and in practice no padding is inserted between consecutive floats – it’s safe to treat &X as the start of a float array of length 3.

 

And that’s my new Vector3 class. For fun, here’s a templated version:

template <typename T>
struct Vector3_t
{
private:
    typedef T Vector3_t<T>::* const locations[3];
    static const locations loc;

public:
    union { T X; T R; };
    union { T Y; T G; };
    union { T Z; T B; };

    const T& operator[](size_t i) const { return this->*loc[i]; }
    T& operator[](size_t i) { return this->*loc[i]; }
};

template<typename T>
const typename Vector3_t<T>::locations Vector3_t<T>::loc = { &Vector3_t<T>::X, &Vector3_t<T>::Y, &Vector3_t<T>::Z };

 

typedef Vector3_t<float> Vector3;

See here for an explanation of that second typename in the array initialisation.


Logging is one of those things I don’t want to implement until I get an error somewhere and I’d rather not try to track it down with a debugger. This happens very quickly when dealing with XCode’s included and rather lacking debugger, or on Android’s command-line-only native code debugger.

So, for my own use and for the public domain: a simple logging macro.


#ifndef REMOVE_LOGGING
#ifdef ANDROID
#include <android/log.h>
#define LOG_DEBUG(message, ...) \
    do { __android_log_print(ANDROID_LOG_DEBUG, "Terrasweeper", "[%s:%d] " message, __FILE__, __LINE__, ##__VA_ARGS__); } while (0)
#else
#include <cstdio>
#define LOG_DEBUG(message, ...) \
    do { std::printf("[%s:%d] " message "\n", __FILE__, __LINE__, ##__VA_ARGS__); } while (0)
#endif
#else
//compiles out entirely when logging is disabled
#define LOG_DEBUG(message, ...) do { } while (0)
#endif


As you can see, it supports Windows, Unix and iPhone through stdout, and Android through its native logging, which you can read with the logcat tool.


These are the steps I took to get TortoiseHg working on OSX (Snow Leopard), using pygtk and the hgtk script included with the TortoiseHg installation.

  1. Download TortoiseHg. The Windows exe or msi files won’t work, so you’ll need to follow the links to download from source, or go straight here and get the latest version. Extract it somewhere memorable (I went with /tortoisehg/) and take note of the path to the hgtk script (for me it was /tortoisehg/hgtk)
  2. Make sure X11 and XCode are installed. If you’ve got a developer setup they probably are already, otherwise you can install X11 from the Snow Leopard disk and download XCode from Apple. X11 and the tools included with XCode are used to display the TortoiseHg GUIs. If you have XCode and the X11.app in Applications you’re good to go.
  3. Download and install MacPorts. The Snow Leopard package is what I used. MacPorts will let us install pygtk, which is needed to run the TortoiseHg GUIs from Python. It also turns out to be really helpful for installing everything else.
  4. Install pygtk. You’ll need to open up a Terminal window and use MacPorts for this:
    sudo port install py26-gtk
    There’ll be a bunch of dependencies that will download and build; this took a few hours on my Mac Mini, and a whole day on my virtual Mac.
  5. At this point, I had issues with Python. This may not be the case for you, but when I typed “python” at the command line it was launching the Apple version of Python that couldn’t import pygtk, instead of the MacPorts version. Try loading up Python and typing ‘import pygtk’ then pressing ENTER to see if it works. If there’s an error message you may need to change your Python version – I used python_select to do this, which I installed using MacPorts:
    sudo port install python_select
    and then
    python_select -l
    to see what versions you have. Pick the one that’s not Apple with e.g.
    sudo python_select python26
    (note the sudo: you may not have permissions if you try the command without it)
  6. Install mercurial. I’m putting this as step 6 because if you do it before you’ve selected the Python version you can end up installing it to the wrong Python. Just grab the OSX 10.6 version and install it from the website, or I’ve found using MacPorts works just fine too
    sudo port install mercurial
  7. Install iniparse. This is needed to get the settings working properly for tortoisehg. You can use the website link, or MacPorts again
    sudo port install py26-iniparse
  8. /path/to/hgtk should work now, but it will only display a list of options and then quit out. Create a symlink of hgtk in one of your PATH directories (I went with /usr/bin/ but use “echo $PATH” at the command line to find one that works for you)
    sudo ln -s /path/to/hgtk /usr/bin/
  9. Find yourself a Mercurial repository, navigate to it in the Terminal and type hgtk commit. If you get some warnings about RANDR just ignore them, it’ll still work. TADA!!! There’s our favourite commit window!

Okay, now just make sure your username is properly set up in both the global and repository settings, and you’re good to go.
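For reference, the username lives in a [ui] section – a hypothetical global ~/.hgrc is shown below, and the same section in a repository’s .hg/hgrc overrides it per-repo:

```ini
[ui]
username = Your Name <you@example.com>
```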

Now get back to work :)


This week I spent an inordinate amount of time trying to get TortoiseHg – the Tortoise-branded interface to the Mercurial version control system – compiling and running on my Mac Mini so I could finally replace git as my VCS. My next post will be a howto for getting TortoiseHg running on Snow Leopard OSX, but first: a reason.


I’ve tried a lot of version control systems in my 8 years as a programmer: none, CVS, SVN, SourceSafe, Vault, Git and Mercurial. I’ve noticed over time that they’ve gotten less and less…hmm, how do I put this…annoying. Every source control system gets in the way somehow; that’s the trade-off: you have to break flow to commit stuff, but if you stuff up there’s a backup around. And that break of flow has (mostly) gotten less annoying as I’ve moved to new and improved systems. CVS was command-line, SVN had a GUI, Vault had a nice inline diff, Git is…okay, git is command line again. Technically it comes with a GUI, but in the same way you’d say that Halo: Reach comes with annoying douchebags online – it’s more of an annoyance than a feature. But the TortoiseHg commit screen is the best thing I’ve ever had for a versioning system, and one that finally fits into my workflow. Sorry other version control commit windows: I have a NEW girlfriend now!


Like the polish on an indie game, it’s small things that add up to make a big difference with the commit screen. The changed files are all shown on one side of the screen, and an inline diff is shown on the other side, making it very quick to see what’s changed when I’m writing comments for the commit. I can easily switch that to ‘hunk selection’ and choose only to commit certain parts of the file that have changed, and then the rest in the next commit. This is GOLD when I feel like I’ve missed a commit and want to make different commits for the separate functionalities I’ve just implemented. There’s easy access to any of the other tools from the menus (repo explorer, push and pull synchronisation etc.), and lots of ways to filter the list of changed files. And then I can right-click on a file and automatically add it to the .hgignore file, see its history, copy it, rename it, edit it. I truly love a commit screen that lets me do the little changes I need to get that commit right without having to go back into my project, stop whatever I was in the middle of and make the changes there.


But the most important thing is that the screen stays open. Nothing annoys me more than committing a file or two, but not all of them, and then having to go back through TortoiseSVN’s right-click menu or git’s command line in order to get the commit window back. And to commit, there’s a nice big button but even better a Ctrl+Enter straight after typing a comment will do the commit for me.


Thanks to TortoiseHg, my version control really feels like a part of my workflow now. It’s not perfect (an ability to select more than one file at once is sorely missing), but it’s better than anything I’ve used before.

TortoiseHg commit window, you complete me!


Now get back in the kitchen and make me some pie.


Warming Up

23 October 2010

I hung out with Chevy Ray of FlashPunk fame the other night, and he shared with me some programmer wisdom. Which I now share with you: warm up!!

 

I always make it a point of mine to spend enough time warming up before physical activities. Whether it’s going for a run, dancing, rock-climbing…anything physical. I always make sure to spend the first few minutes doing some light but similar exercises, stretches etc. It literally serves to warm the joints and muscles, because they become more flexible and powerful a few degrees above body temperature, but it also serves to put me into a more self-forgiving mindframe. I suppose it’s because when I warm up it really drives home the point that I’m human, that I’m not perfect, and that it can take me a little while to get into the groove of something so that I can do my best work.

 

What Chevy pointed out to me was that warm-ups can be made for mental activities as well. Some of the best teachers I know get their students into the right frame of mind for class by throwing around some open brainstorming or opinion questions: mentally warming their students up for the subject at hand. It’s just as useful to put this idea into practice for the dark art of computer programming. And I don’t know about you, but being more aware that I’m human, not perfect, and need to get into the groove of something goes a long way towards helping with that pesky programmer’s block :)

 

So today as I start my programming, I’ll be taking Chevy’s advice: start on a few small things – change some colours around, tweak some variables, rewrite a piece of code I’ve written before, change some formatting or variable names. Just to warm up. Don’t get buried in changing little things; just get the mind thinking about programming before moving on to what needs to be done today.

I’ve been dealing with some issues with Unicode at the moment, and trawling the internets looking for answers has revealed to me just how many people don’t seem to comprehend what’s really going on in the string classes they’re using. I was one of them.

 

The first thing you need to read – before going any further – is Joel Spolsky’s blog post on this:

The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)

If you haven’t read that, go there now. Right now. I’m not kidding, stop reading this and go read that. Come back afterwards. No wait, don’t start reading more Joel Spolsky – come back! Oh, there you are.

 

Okay, so that was a simplified discussion about the history of text encodings and what it means to most developers. And it ended with “might as well use std::wstring, it’s native”. Well, here’s the kicker: not necessarily.

 

C++ is fantastic because it’s cross-platform. It can compile for Windows, Unix, OSX, iPhone, Android, DS, Playstation – you name it, if it supports native code there’s probably a C++ compiler for it. Unfortunately, not every C++ compiler treats everything the same. There are a lot of ‘holes’ in the specification that let the compiler decide what it wants to do with certain data types and how big it wants to make them (e.g. the standard only guarantees that an int is at least 16 bits; most compilers make it 32, and a compiler targeting a 64-bit processor is free to make it 64). There needs to be room for compilers to pick the sizes that work best on the native hardware and operating system, but this leads to issues when you want one piece of code that compiles on all platforms.

 

A std::string uses single-byte characters on every platform, which means the only way it can hold anything beyond ASCII is in a multi-byte encoding like UTF-8 – and none of its member functions know about that, so length() counts bytes, not characters. The ‘native’ alternative is std::wstring, but the size of its wchar_t character is another of those holes: on Windows it’s 2 bytes, which suits a UTF-16 or UCS-2 encoding and matches what the Win32 API expects, while on Unix and OSX it’s 4 bytes, which can lead to strings being memory hogs and requiring conversion to and from the strings used natively by the operating system.

 

There are basically three ways around this:

  • use the time-honoured tradition of selective #defines and macros to compile using std::string for some systems, std::wstring for others etc. The Microsoft header <tchar.h> does something similar (using char or wchar_t arrays instead of STL classes)
  • define your own basic_string<uint16_t> template or similar that always uses the same size for a character. String literals become harder to use then, and need a macro-hack to work properly. C++0x (if it ever gets ratified) will introduce new Unicode support to make this method much easier. Interoperability with other APIs or libraries can easily lead to issues though.
  • go ahead and do what Joel Spolsky says: just use std::wstring and allow it to be UTF-16/UCS-2 on one system and UTF-32 on another. So long as your game is running on a single system and not sending text between systems/servers etc., this should be fine.

Personally I prefer the second option. I use string literals a lot less than I use everything else, and when writing Objective-C I have to prefix my string literals with an @ anyway, so wrapping them in a macro isn’t an issue for me. Plus it makes it easier to move forward with the next version of C++. Until that gets released though, hopefully you now know enough about Unicode to decide how to go about supporting (or not supporting) it in your next project.

 | Posted by | Categories: Blog |