I play a lot of Android: Netrunner at work; other board games, too, but Netrunner is the one that’s sunk its teeth into me most deeply. I mostly play over lunch, but sometimes I play at other times, and occasionally those lunches get pretty long; this makes me wonder: is there any way I can justify this in terms of improving my work? (At least directly: it increases my happiness and gives me time to chat with coworkers about whatever’s on our minds, both of which have indirect benefits.)

I can’t honestly answer “yes” to that question; but I also don’t think it’s a coincidence that I and several of my coworkers are addicted to the game. We work at a company that’s building a big piece of distributed software; doing a good job of that requires systems thinking, and Netrunner, in its own way, does an excellent job of guiding you down a systems thinking path.


The game presents a rule set to you. Like all good games, that rule set’s parts interlock in ways that are both obvious and non-obvious. It presents you with some top-level goals, goals that are concrete enough that you can imagine a set of micro actions that will lead to those goals; but those goals are distant enough that you can’t predict the details of the route from the beginning, and you should be prepared for even the broad strokes of your prediction to be falsified by the cards you are dealt and by your opponent’s actions.

And, as you play the game more, you realize that there are many more interlocking possibilities than are apparent at first glance. Possibilities that are latent in the basic rules; possibilities that are in the cards that come in the core set; further possibilities that are either made possible or made more noticeable by the cards in the expansions; possibilities that unfold as you watch what your opponent does on the board.

The genius of Netrunner isn’t just in the possibilities in the game, however: it’s also in how the game actively nourishes your developing understanding of what’s possible. Right from the beginning, Netrunner gives you two layers of archetypes: the Corporation / Runner distinction, and the multiple factions present on each side. So you get encouraged to look a bit harder at one subset of the possibility space; but of course this is in no sense a permanent choice, since you’ll probably play the other side or a different faction in a different match. (And you’ll get to watch how your opponent plays their faction, too.) And this focusing continues with the expansions: aside from the general possibilities unlocked by new cards to play, each expansion gives you a couple of new identity cards to use with existing factions, giving you new ideas for an overall direction to explore in, and the expansions come in collections that generally present a broad theme.


So: Netrunner is pretty awesome, both in the game itself and in its didactic nature. And there are a lot of analogies one can make between Netrunner and programming: a large possibility space, lots of good routes forward depending on your taste and your context, there’s always more exploration possible, there’s a source of randomness waiting to surprise you, the journey itself is its own reward. (I totally agree with Nels when he said on his podcast that Netrunner games are great even when you lose! Though, don’t get me wrong, I do prefer it when my software works.)

Being a game, though, Netrunner’s rules are explicit: so you know the broad contours of the framework that you’re working within. This is fuzzier in programming: programming language semantics are generally well-defined (and, while compiler bugs exist, in practice they’re rare), but how your software will behave in the real world is not so clear. The details of how software will perform and how independent components will interact aren’t so clear (both of which are key aspects of dealing with distributed systems!), but more importantly: you’re writing software for a reason, and it’s hard to tell whether the software will satisfy your underlying goals until you’ve written it. And, for that matter, those underlying reasons themselves are fuzzy: one of the points of agile software development is that having working software is a huge help towards clarifying and making concrete the goals that you’re striving towards with that software.

Furthermore, software itself isn’t written in a vacuum: it’s written by people working together, with many different perspectives, strengths, and desires. The ground rules there are vaguer still; one of the things I’ve learned about myself over the last few years (thanks in part to board games!) is that I do like to have explicit rules to work with in that sort of context as well. Not unchangeable rules, not rules covering everything, but some sort of acknowledgment of the ground rules we’re playing by and where the boundaries of those ground rules are. (Cf. my post a couple of months back on benefit zero of retrospectives.)

The existence of ground rules is important enough to me that, if rules aren’t explicit, my brain will go off and try to make sense of the patterns it sees around itself, coming up with implicit rules to explain those patterns. Not that there’s anything unique about me in that: humans are hard-wired to try to make sense of social contexts, I think. And that, in turn, opens up a whole maze of problems: in particular, when different people are working under different notions of the ground rules, unhappiness can easily arise.

This also brings out behavior of my own that I’m not proud of. Because a lot of the rules that I come up with put me in a pretty cynical place; honestly, I don’t need much help with that, I’m quite cynical (and snarky) enough as is! Not that that cynicism isn’t justified in a lot of contexts: for example, I’ve seen enough evidence that the tech startup world is hostile to women that I don’t want to be a Pollyanna about that one. But still: it’s important to realize that, when I’m doing or thinking something based on my mental model of a situation, there’s a pretty significant (and generally not explicit) chain of assumptions underlying that model, and I generally haven’t tested those assumptions nearly enough to be personally confident that they or the resulting model are valid, let alone done the work to check whether or not other people agree. And, given that, smug snarky cynicism is probably not the best approach.


Sigh. It’s not like we can talk everything out, though, and it gets harder as groups get larger. I guess I’ll end by quoting the end of that “benefit zero of retrospectives” post:

there’s another benefit lurking in the assumptions that are prerequisites for a retrospective: the fact that you have a team at all, and that the team members’ thoughts are all worth listening to.
