
false equivalence and maintenance of privilege

August 27th, 2014

On Sunday, the New York Times wrote an article about Michael Brown saying, among other things:

Michael Brown, 18, due to be buried on Monday, was no angel, with public records and interviews with friends and family revealing both problems and promise in his young life. Shortly before his encounter with Officer Wilson, the police say he was caught on a security camera stealing a box of cigars, pushing the clerk of a convenience store into a display case. He lived in a community that had rough patches, and he dabbled in drugs and alcohol. He had taken to rapping in recent months, producing lyrics that were by turns contemplative and vulgar. He got into at least one scuffle with a neighbor.

Which is a vile paragraph for the Times to have published. I can only imagine that what was going through the author and editor’s heads was a desire to appear balanced: the next paragraph said

At the same time, he regularly flashed a broad smile that endeared those around him. He overcame early struggles in school to graduate on time. He was pointed toward a trade college and a career and, his parents hoped, toward a successful life.

But this sort of “balance” is singularly inappropriate: there’s a good reason why we use the term “eulogy”, coming from a phrase meaning “speak well”, to refer to writings about people who have just died. Situations where it’s appropriate to break that rule are few and far between; talking about a kid who was just the victim of police murder (followed by weeks of terrorism committed by multiple levels of police, no less) is not one of those exceptions.

This is a classic case of false equivalence: when writing a story about a politically touchy issue, the media likes to find two sides of the issue to present, and to present those sides without any sort of context that might cause one to evaluate one side more favorably than the other. It doesn’t matter if one of those sides is supported by essentially all experts on the subject while the other is only supported by loons or guns-for-hire; it doesn’t matter if one of those sides is engaging in behavior squarely within our political norms while the other side is doing historically unprecedented attacks on the very concept of majority rule; and, as here, it doesn’t matter if one of those sides is behaving with simple compassion while the other side is lacking even a shred of simple human decency. False equivalence demands both.


Our dominant non-right-wing media outlets are beyond hope on false equivalence in the political arena. In more personal stories, they generally aren’t, but of course we know what’s going on here. Michael Brown was black, and my country’s paper of record knows what sort of story it’s expected to tell in that scenario, what sort of images to invoke. I of course have no way of knowing for sure, but my assumption is that this choice of wording for the article wasn’t even an active choice by the author: that’s just what came out of their subconscious.

And, whether conscious or subconscious, that choice is very well grounded indeed. See, for example, this analysis of whom the Times labels as “no angel”: you basically either have to have done horrific crimes or be black. Or, for another example, see this comparison of the above story about Michael Brown with one on one of the Boston bombers; the Boston bomber story isn’t from the New York Times, but I don’t think that weakens the power of this comparison.

That’s part of the evil of privilege: it’s so pervasive that even our subconsciously constructed phrasing works to actively maintain it, to bolster it so strongly that just trying to get to a position where we can start looking behind that privilege is exhausting. But that’s only one part of that evil: there’s plenty of active maintenance of privilege out there, of people actively and intentionally using it to help themselves and to harm, even kill, others, as Ferguson has given us endless evidence of.


Last week, indie game developer Zoe Quinn was a target of focused and sustained harassment. Harassment which is still ongoing for her, and which has spread more broadly. That led to many reactions, including one from Kotaku that contained the following paragraph:

We’ve long been wary of the potential undue influence of corporate gaming on games reporting, and we’ve taken many actions to guard against it. The last week has been, if nothing else, a good warning to all of us about the pitfalls of cliquishness in the indie dev scene and among the reporters who cover it. We’ve absorbed those lessons and assure you that, moving ahead, we’ll err on the side of consistent transparency on that front, too.

Yes, the last week has shown one downside of “cliquishness in the indie dev scene”: if you’re friends with a bunch of other indie devs, and if 4channers hack one developer’s account, then you have to start being very careful about what links you click on in Skype messages. That downside, however, is not what Kotaku was talking about.

To be honest, I’m not entirely sure what Kotaku was talking about; that whole post of theirs made very little sense to me. But I suspect that it’s another example of the false equivalence trap that I talked about: this is a big story that some people would like to present as a controversy, so Kotaku felt that it needed to take both sides seriously, and use the controversy as a way to present themselves in a good light by doing some introspection.


Again, though, it’s not just false equivalence here: it’s privilege that’s shaping every part of the discussion. They didn’t do just any sort of introspection here: for example, they didn’t do introspection prompted by interactions between AAA game developers that provide direct funding to Kotaku and that have overwhelmingly male leadership, employees, and audiences, or introspection prompted by male editors hiring and publishing more and more men from their circle of friends. (I actually suspect Kotaku has been introspective about the latter in the past; admittedly, that’s the only reason why I visit their site at all.) Instead, they decided that the appropriate target for their introspection was their interactions with a platform that leads to donations of single-digit numbers of dollars to marginalized voices, with those donations leading to no significant gifts in return.

So here, too, we see privilege working at a subconscious level (at least I’d like to hope it’s subconscious!) so that just fighting to reach a level playing field is exhausting. Which would be bad enough if it were just subconscious, but in this battle as well, we see much more terrifying active attacks from people trying to maintain their dominance: trigger warning, but here’s a sample of what Anita Sarkeesian is seeing.


Fuck all of this. Can haz revolution plz?

apple tv business model

August 24th, 2014

It’s getting to Apple product announcement season, which means that there’s a decent chance that an appified version of the Apple TV will be announced. There’s a lot that’s obvious about it (it’ll run iOS, its hardware will presumably close the gap with iPhone / iPads), and there are some big questions (what’s the input method going to be, in particular), and, knowing Apple, there may be a complete surprise somewhere.

What I’m wondering, though, is: what’s the business model going to be for the device? The current version is priced at $99: basically, it’s priced like an accessory to your iPhone and iTunes. And I imagine the hardware is cheap enough that they can get a reasonable profit margin for the current hardware at that price, though who knows.

I would also imagine, however, that that profit margin will disappear (and possibly go negative) if its guts get noticeably ramped up. (Which will be necessary if they’re really trying to take a swing at the core game console market; I’m not 100% convinced that Apple is going to do that, but I think they probably will?) And I also imagine that there’s more to the profit margin than just selling price minus manufacturing cost. And, of course, it’s not at all given that a more powerful Apple TV will stay at the $99 price point, though if the only option is, say, $199, then that will reduce its effectiveness as an iPhone / Apple content ecosystem accessory.


I could be wrong about the margin disappearing if the guts get more powerful: Ben Thompson presents some figures based on an IHS report that suggest that, actually, an Apple TV based on current hardware would cost about $99 to make. Of course, Apple likes their profit margins, so I don’t think they’d actually sell it at $99; but something in the $149 – $199 region might work? And maybe they’ll keep around the current version for people who really just want an accessory? I’m not sure.

The other aspect of pricing is actually the one I’m curious about. While grocery shopping today, I was listening to John Gruber, among other things, blast traditional Windows PCs for the crapware that comes with them; that’s mercifully absent on most Apple devices, but the one big exception is the Apple TV. Not that the Apple TV comes with anything that’s as bad as Norton nagware, but still: the device is full of stuff that I don’t care about, that I would never install and don’t want on my screen. And I assume that Apple is not doing this out of the goodness of their heart (though admittedly, in the absence of an app store, it does help users to have some of this): I assume money changes hands.

But I also assume, based on Apple’s past behavior, that most or all of those third-party apps are going to stop coming by default with an appified version of Apple TV: people can download what they want. What I’m not sure about is if Apple cares about that. It does feel to me like, to a larger extent than on the iPhone / iPad, the most important apps on an Apple TV are going to be ones where Apple doesn’t have a natural connection to the revenue stream, because they’ll be free and accessible via non-Apple subscriptions, so the crapware money would be going away.

Maybe I’m wrong about that last sentence, though: because my understanding is that the Apple TV Netflix app does provide an option to subscribe to Netflix via your iTunes account? So, if that’s the case, maybe that will be the norm for media apps on the Apple TV: they’re accessible via third-party subscriptions, but Apple will require vendors to provide an option to subscribe via your iTunes account, and because of inertia, Apple will actually make a quite decent amount of money through that? That makes sense now that I type it out; I’m curious if Apple takes a 30% cut from subscriptions like that or if they take a smaller one. And it certainly beats having crapware preinstalled.


So, probably no big mystery here: the device will be cheaper to make than I initially thought, Apple will raise the price enough to get a decent profit margin, and the crapware fees will turn into more up-front subscription fees? I’m still curious how it will compete as a game machine: I’d imagine it will be significantly cheaper than either the Xbox One or the PlayStation 4, and I imagine that much of that is because of significantly lower performance. (How does the A7 GPU compare to modern PC GPUs?) Metal will help compared to previous iOS versions, but I assume that’s just bringing iOS up to parity with the console world in terms of architecture tax. And, hey, if lower GPU performance means that the latest shooters don’t work on the Apple TV, that’s perfectly fine with me…

There are potentially other ways in which iOS 8 will help the Apple TV: Muttering suggests that app extensions will offer some interesting controller possibilities, and Macworld raises HomeKit possibilities. That latter article in particular gives other reasons why Apple might be willing to keep the margins a bit lower on the Apple TV than on their other products: it could continue to evolve in its current role as a piece of plumbing that helps the ecosystem as a whole thrive.

And, of course, it’s entirely possible that we’ll have to wait until 2015 for the new Apple TV to materialize: this fall is clearly going to be a more-interesting-than-normal Apple product announcement season, but it seems like knowledgeable people are more confident that there’s going to be a big wearables announcement than a big Apple TV announcement? I certainly don’t know, and Apple is clearly capable of waiting until something is ready. I’m mostly just looking forward to replacing my phone, and I’m curious about Continuity (enabled, in my case, by Family Sharing).

ascension: rise of vigil

August 23rd, 2014

Some of the Ascension sequels I’ve enjoyed as much as the original; sadly, Rise of Vigil was not one of them. The new mechanic this time is a third currency, called “energy”: unlike the other two currencies, though, this one doesn’t go away until the end of the turn. Instead, many cards gain special effects if your energy level is above a certain threshold when you play or acquire them.

The energy-providing cards almost always come with card draw, so they don’t clog your deck. Also, the standard energy-providing card never shows up alone to purchase; instead, other cards will randomly show up with one or more energy cards under them. So this means that, sometimes, you have to decide whether to buy a card that would otherwise be suboptimal in order to get more energy, and because of the card draw, the energy itself is always a good thing.

Which could be okay: it encouraged me to buy cards I otherwise wouldn’t consider, and variation is always good. But the flip side is that I didn’t feel like I was building up a strategy in response to the cards available for purchase: instead, I was ending up with a random-ish hand in order to maximize energy. Or, to look at it another way: the limited card row meant that the game already had a mechanic encouraging me to mix it up; I didn’t find it helpful to have a second such mechanic.

Of course, I didn’t have to focus on energy, and indeed sometimes I didn’t. The thing is, though: some of the energy effects are crazy-powerful, so if you skip energy, you’re shutting out the possibility of the most powerful strategies, and those powerful effects and energy are both plentiful enough that you’ll probably lose in that situation. Energy effects can turn a cheap, bad card into a card that can acquire any hero for free; they can turn a powerful card that can defeat any monster into a card that can defeat all monsters.

There’s probably more balance than I’m giving the game credit for: I didn’t play it enough to get a super-solid feel of it. And part of that isn’t Rise of Vigil’s fault: I’ve got other ways to spend small chunks of time. Still: not my thing.

monument valley

August 15th, 2014

Earlier this summer, I stopped my playthrough of BioShock because, frankly, I was getting angry at the game. I didn’t want to spend my time going through grandiose, facilely unreflective morality plays: I wanted to play games that were more closely crafted. I’d seen screenshots of Monument Valley and heard good things about it: I was quite optimistic that I would feel a lot better playing through it than continuing through BioShock.

And I was right. I mean, I don’t know that I was right, because I didn’t run the experiment of continuing through BioShock, but I do know that spending a couple of hours on Monument Valley was a thoroughly enjoyable palate cleanser, just what I needed at the time. (And it would have been quite enjoyable even if my palate hadn’t needed cleansing.)

Not that I have much to say about Monument Valley: I partially blame my not mentioning it until now on my lack of energy this summer, but only partially. I think I have a post touching on puzzle games lurking in my head, and hopefully it will make it out soon, but it’s not out yet, and I suspect Monument Valley won’t be the most relevant game even to that post. So, I’ll just say: if you’ve seen screenshots from the game, you know whether or not you’ll like the visual aesthetics of the game, and I enjoyed the puzzles as well. More of this, please.

on “on scorched earth”

August 11th, 2014

Brendan’s recent post “on Scorched Earth” lamented that the Netrunner card Scorched Earth was “inelegant”. I can see where he’s coming from—I’m certainly not going to claim that Scorched Earth is a paragon of elegance—but I think he undersells the card. In particular, while I think his alternate proposals would all make for interesting cards, I think that Scorched Earth enriches Netrunner in a way that his proposed replacements wouldn’t.

There are two ways for the Corporation to win in Netrunner: by scoring seven agenda points, or by forcing the Runner to discard more cards than are in their hand. These aren’t parallel—the former is the primary way for the Corporation to win, and most of the interactions revolve around that mechanism—but they’re both important, because without the latter, the Runner could be a lot more careless. For example, the presence of Snare means that, if the Runner has fewer than three cards in hand then they should think twice before running on a server with an unadvanced card in it, or even running on HQ or R&D: if they hit a Snare, they’ll lose the game.

In go, there’s a concept called “honte”: this translates as “proper move”. When responding to a situation, you’ll have different ways to play, but frequently local pattern matching will mark one of them as proper. That doesn’t mean that the proper move is always the best move—sometimes the global situation suggests otherwise, and sometimes detailed reading of the local situation will reveal that the honte isn’t the best move even locally—but nine times out of ten, it’s the right thing to do. These proper moves sometimes look a little slow (especially for those of us who aren’t good enough at the game to appreciate the downsides of not playing the proper moves), but if you stick to them, you’ll generally end up with a solid position while your opponent’s risks mean their downfall.

In Netrunner (and indeed in most other games!), this concept of proper moves also appears. It’s more likely to appear in a negative sense in Netrunner than in go: as discussed above, Snare means that it’s generally not proper to make a run with fewer than three cards in your hand, for example, and the threat of tags (which Snare can also produce, it’s pretty vicious!) also means that in general it’s not proper to run on the last click of a turn, because if you pick up a tag, you’ll have no clicks left to clear it before the Corporation’s turn. So it’s not so much that certain moves are proper as that certain moves are improper; it boils down to a similar effect, though.


So, to sum: card damage is one route to winning, but that’s not its main role in Netrunner. It’s mostly there as a mild risk tax on the Runner’s actions (at least mild if the Runner doesn’t overweight loss aversion), and by playing proper moves, the Runner can almost always avoid losing for that reason. In particular, it’s almost impossible for the Corporation to create an active strategy to win via card damage.

Or rather, it would be almost impossible for the Corporation to do so without Scorched Earth. Because Scorched Earth is one of the few cards that lets the Corporation cause significant amounts of card damage during the Corporation’s turn. (In the core set, the only other such cards cause the Runner to lose just one card at a time; I haven’t exhaustively surveyed the expansions, but I think it was about a year before a second such damaging card showed up, with Punitive Counterstrike.) So, without Scorched Earth, the Corporation would have no active way to win by card damage; and while I do think it’s better for the Corporation’s winning strategies to be focused on scoring agendas, I also think it would be a shame if there weren’t any active card damage routes to victory at all.

And it’s not like a Scorched Earth win is easy to pull off. You need two of them to flatline the Runner (assuming they keep their hand properly full), so even assuming that the Corporation has three of them in their deck, the Corporation will expect to have to make it through most of their deck to have a chance of a Scorched Earth win. And to pull it off, you need the Runner to be tagged; but almost all of the ways to get tagged take place on the Runner’s turn, giving the Runner chances to clear their tags. (Or they can try to be careful and avoid getting tagged at all.) There are ways for the Corporation to tag the Runner during the Corporation turn (SEA Source, for example), and I’ve certainly won my fair share of games by playing SEA Source plus two Scorched Earths, but doing that is going to require the Corp to have noticeably more credits than the Runner (enough to make the SEA Source trace stick while having credits left over for the Scorched Earths), so the Runner can foil that plan by staying rich.

And, if that weren’t enough, there’s another way for the Runner to foil the plan: it’s not in the core set, but the very first Netrunner expansion introduced Plascrete Carapace. One Plascrete Carapace is enough to protect against a Scorched Earth, so once that card became available, the proper move for the Runner when deckbuilding was to include two Plascrete Carapaces in their deck (and they’re neutral cards, so anybody can do that): that’s enough to stack the odds significantly in the favor of the Runner in the Scorched Earth battle.
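The flatline arithmetic above is simple enough to sketch out. Here’s a minimal Python sketch, assuming the core-set card text as I remember it (Scorched Earth does 4 meat damage and is only playable if the Runner is tagged; the Runner is flatlined by taking more damage than they have cards in hand; and a Plascrete Carapace prevents up to 4 meat damage, which for this back-of-the-envelope calculation works like 4 extra cards in hand):

```python
# Rough sketch of the Scorched Earth flatline math.
# Assumes core-set card text: each Scorched Earth deals 4 meat damage,
# and the Runner flatlines when damage taken exceeds cards in hand.

SCORCHED_DAMAGE = 4

def flatlines(effective_hand_size: int, scorched_count: int) -> bool:
    """True if back-to-back Scorched Earths flatline the Runner."""
    return scorched_count * SCORCHED_DAMAGE > effective_hand_size

# A Runner at the normal maximum hand size of 5 survives one Scorched Earth...
assert not flatlines(5, scorched_count=1)
# ...but not two.
assert flatlines(5, scorched_count=2)
# A Plascrete Carapace's 4 points of meat-damage prevention act, for this
# purpose, like 4 extra cards in hand, so one copy foils the double Scorched.
assert not flatlines(5 + 4, scorched_count=2)
```

In other words: keep a full hand and you survive one Scorched Earth; add a Plascrete Carapace and even the classic double Scorched doesn’t get there.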


So, to sum: the Runner can foil Scorched Earth by keeping four cards in their hand at the end of their turn, by avoiding ending their turn tagged, and by keeping up with the Corp on economy; these are all proper moves anyways. If that’s not enough to make the Runner feel confident, then throw in a couple of Plascrete Carapaces. Also, odds are that it will take a while for the Corp to draw enough Scorched Earths plus tag generation to win that way even if the Runner doesn’t have Plascrete Carapaces, so this also encourages the Runner to keep up the pressure on the Corp, which makes for a more exciting game all around.

Or at least a mostly exciting game all around: in Netrunner as in go, you don’t always want to play the “right move”: putting Plascrete Carapaces in your deck just to protect against Scorched Earth is grating, and you can’t simultaneously put pressure on the Corp while stockpiling money and limiting your runs. There’s a flip side for the Corp, too: you always have to do some work to pull off Scorched Earth even if the Runner lets their guard down, because Weyland doesn’t have much tag generation (in fact, no identity other than NBN has a lot of tag generation), and the splash cost is so high that you’re going to use at least 8 and probably 12 of your 15 influence on Scorched Earth if you do go that route. So this means that very few Jinteki or HB Corp decks will include Scorched Earth at all, and even NBN and Weyland decks will frequently find it better to focus on something else.

And this is where things get interesting. Given those calculations, is it really worth it for the Runner to waste two deck slots on Plascrete Carapace? Or, going a step further: some Runners will build their deck so that, in the absence of Scorched Earth, nothing horrible will happen to them if they get tagged. So, while I said above that the “proper move” for the Runner is to avoid ending your turn tagged, you can also decide to play as the Runner in a way that embraces the possibilities of getting tagged, accumulating tags right and left. If you do that, you’re vulnerable to Scorched Earth (as well as other cards, e.g. the dreaded Psychographics / Project Beale combo), but the rewards can be huge:
it turns Account Siphon from a card that (at its best) takes three clicks to get an 11-credit Corp/Runner swing into a card that (at its best) takes one click to get a 15-credit Corp/Runner swing, which is enormous. Scorched Earth probably has a larger effect on the game in the way in which it puts a real bite into tag calculations than in the possibilities that it opens up for an affirmative strategy to win the game; and yes, I would call that an elegant design choice.
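For what it’s worth, those Account Siphon numbers check out; here’s a quick Python sketch, again assuming the core-set card text as I remember it (a successful Siphon forces the Corp to lose up to 5 credits and gives the Runner 10 credits plus 2 tags, and removing a tag costs the Runner 1 click and 2 credits):

```python
# Back-of-the-envelope Account Siphon math, comparing the "proper" line
# (clear both tags) with the tag-me line (keep them and accept the risk).
# Assumes core-set numbers: Corp loses up to 5 credits, Runner gains 10
# credits and 2 tags; removing a tag costs 1 click and 2 credits.

def siphon_swing(clear_tags: bool) -> tuple[int, int]:
    """Return (net Corp/Runner credit swing, clicks spent) for one Siphon."""
    runner_gain, corp_loss, tags = 10, 5, 2
    clicks = 1  # the run itself
    if clear_tags:
        runner_gain -= 2 * tags  # 2 credits to remove each tag
        clicks += tags           # 1 click to remove each tag
    return runner_gain + corp_loss, clicks

assert siphon_swing(clear_tags=True) == (11, 3)   # the proper move
assert siphon_swing(clear_tags=False) == (15, 1)  # the tag-me gamble
```

Clearing the tags gets you an 11-credit swing over three clicks; embracing the tags gets you a 15-credit swing in a single click, which is exactly why Scorched Earth’s bite on that calculation matters so much.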


Don’t get me wrong: I don’t feel like I’ve been clever if I win the game as NBN by using Scorched Earths. (Though if we’re looking at NBN inelegance, I’d cast my eye first at AstroScript instead.) But I’m glad it’s there to open up my possibility space as a Corporation when deckbuilding, and to strike fear in the heart of the Runner when they don’t know if I have Scorched Earths in my deck. (Or when they’ve caught sight of a Scorched Earth in my hand and are suddenly stepping much more gingerly, wondering when the inevitable second one will appear.)

And, on the Runner side, I need to embrace probability: make judicious bets, figure out which risks are the right ones to take given my current state of the game and what I’ve seen about my opponent’s hand. If they’re NBN, I’ll try to figure out if they’re focusing on Scorched Earth to complement their tagging; if they’re Weyland, I’ll be very nervous about any glimpse of tag creation that I see in their hand.

And then, every once in a while, I’ll build a Jinteki deck with a single Scorched and just enough tag generation to get it to stick right after the Runner has stumbled into some net damage. If that combo lands, it will cause the Runner to say many things about the situation, but I suspect that “inelegant” won’t be the first adjective that they’ll use.

energy and pain

July 31st, 2014

I haven’t written here much recently; and I haven’t been working on my recent programming project, either. That’s not a sign that there’s something else that’s grabbing me: I just haven’t had energy during the evenings for about two months now. So it’s been easiest just to read blog posts or watch TV or play games; I’m actually really glad I’ve been replaying the Phoenix Wright series, and I’m quite enjoying Miss Fisher’s Murder Mysteries as well, I just wish there were more evenings when I was doing something else.

For a lot of the time, it was simple tiredness. My allergies were out of control in the winter, but they got a lot better with the help of nasal rinsing. But then they came back; I’ve bumped up my Claritin intake again, and the allergies are mostly under control now, but not great. My current theory as to what was going on there was that the bathroom remodel kicked up a lot of dust, so I was hoping that it would get better once the bathroom remodel was over. (The remodel has turned out great, incidentally!) I’m not convinced my allergies are completely better now, though, so maybe there’s something else going on.


Or rather: maybe there’s something else going on with the allergies: there’s certainly something else going on in general. A couple of times over the last year my back spent a couple of weeks hurting; that happened again this summer (starting just before the construction work), and it just didn’t go away. Eventually, I went to see a doctor and started physical therapy; the therapist gave me some stretches, and it’s actually been a week since my back has hurt.

Which would be good, but my symptoms have moved: my leg started hurting, with a range of symptoms, symptoms that are triggered by a different set of actions (sometimes including just sitting down for a minute) than my back problems. That leg pain first appeared Saturday a week and a half ago, and as my back was getting better, my leg was getting worse; I went to the doctor last Thursday, and then that evening I had problems sleeping for the first time (noticeable ones: I woke up at 2:30am and couldn’t get back to sleep), so I asked Liesl to drive me to the doctor the next morning as well, and just sitting in the car was super painful. I couldn’t see my regular doctor on Friday, and I wasn’t too impressed by the doctor I did see, but I got some stronger pain relievers.

The good news is that my symptoms haven’t gotten any worse since then, and I’m pretty sure they’ve gotten a little better. (Though I don’t know for sure if I could sleep through the night without the drugs; also, while sleeping on my back is much less painful than sleeping on my side, it is really not something I enjoy.) But they are still very noticeable, and noticeable in ways that honestly kind of worry me: for example, I wanted to drive to the grocery store a couple of nights ago to pick up some replacement tomatoes; I only made it to the end of the driveway before deciding that, no, driving a car still really was not a smart idea for me. And walking home the last couple of days, I’ve noticed that the toes on my right foot are curling a little bit (and in retrospect I actually think I’ve seen signs of that for months), so there’s some odd physical behavior going on beyond just pain.

So right now I just don’t know what’s going on. It seems like I’ve got a pinched or inflamed nerve somewhere; presumably caused by the back issues, but I don’t know why these new symptoms have appeared, or what to make of the fact that they’re appearing as the back pain is disappearing. And having the symptoms mutate is getting tiring; and it’s also tiring trying to figure out how to arrange myself over the course of my day to deal with this. (How much standing, how much lying down, when sitting down works at all.)


Or, to come back to what I started with: how to avoid being in pain during the evening. It’s hard to write or program at my computer if I can’t sit down and feel like I’m relaxing at least a little bit. Working standing up is a possible way to deal with that; but if I’m spending much of the day at work working standing up, then I don’t really have the energy to do that at home. So it’s a lot easier to watch an episode of Miss Fisher and then lie down on a sofa reading blogs for a little while until it’s time to go to bed.

This isn’t awful or anything: more a reminder of my mortality. And I think the exercises my therapist is recommending really are helping: not only is my back not hurting, but my back feels like it’s moving well. But I hope that their suggestions plus the anti-inflammatories I’ve been taking for a week start having an effect on this pinched nerve soon.

brenda romero: jiro dreams of game design

July 13th, 2014

It’s months since GDC, and I’m still trying to unpack my feelings about Brenda Romero’s Jiro Dreams of Game Design talk. Or maybe not so much my feelings about it—it’s an excellent talk, no question—but my emotional reactions to it. Her talk confronts concepts that I care about (greatness, team structure, creation) in contexts that I care about (games, food), leaving me with immediate reactions to almost everything she said, but immediate reactions that were frequently in conflict, and with me quite sure that there’s a lot to think about beneath those immediate reactions.

I watched it again last night; I’m still not sure what I think, other than that I’m now glad I’ve seen it twice! But, trying to put together some thoughts:


She talks a lot about wanting to be great, and about the effort necessary for that. And this is where a lot of my insecurities with respect to the talk come in. Because, of course, there’s a part of me that wants to be great: who doesn’t want to be great? In the abstract, after all, it sounds, well, great. But, when it comes down to it: I am not behaving in a way that has led or will lead to me being great at anything.

Don’t get me wrong: I am egotistical enough to believe that I’m pretty damn good at some things, and even that I maintain a fairly high standard (relative to an appropriate baseline) at a fairly wide range of things. For example, I’ve largely made my living as a programmer for the last decade, and I’m pretty sure that I’m a noticeably better programmer than most professional programmers.

But I’m equally sure that, in an important sense, I’m not a truly great programmer. There’s nothing wrong with this, and for that matter my bar for greatness in that field may well be abnormally high: but there are significant ways in which I don’t meet that bar.

And her talk pointed at a few reasons why that might be. One is that I’m not quite obsessed enough. She talks about thinking about games from when she wakes up to when she goes to sleep; I think about programming quite a bit, including at odd hours, but it’s not the same sort of all-dominating passion that she projects. Another is that I don’t put in the hours; that’s a related concept but not at all an identical one, and I’ll come back to it below.

Also: I don’t feel creative enough. Now, I’m not sure if I think that’s actually necessary for greatness, and for that matter I’m not sure how much Romero thinks it’s necessary for greatness. But it feels to me (and this goes way back, it’s not just my most recent decade) that I’m abnormally good at quickly coming to grips with others’ ideas and using them in productive ways, but there’s a certain seed of novelty that I’m not particularly good at.

Or, to put that last paragraph another way: I can be a quite good craftsperson. And that’s important to me, and for that matter it’s important for greatness. I was about to write: but maybe something’s still missing there? Now that I type this out, though: being a great craftsperson isn’t a contradiction in terms, it’s just a quieter sort of greatness.

So, I guess, if I were going to be great, that’s the sort I would be! But I still would need more passion and to put in more hours.

Actually, rereading this section: I think there’s something wrong about my angle here. What’s important in this context isn’t people being great, it’s works being great. And Romero’s talk is about great works, not (or at least not just) great people. When she raises and rejects the Triad of Constraints, for example, she does so in the context of producing a great work. Hmm.

Teams, Control, and Responsibility

As is obvious from the talk’s title, Romero brings in food metaphors, metaphors from chefs and kitchens. But Jiro isn’t the only chef she talks about; in particular, she talks about Gordon Ramsay several times, and this was the part of the talk that I had the strongest negative emotional reaction to. Some quotes from that portion: “He had to get all these people to do what he wanted them to do”; “They screw up and he’s the one who’s going to get blamed”; “Screw it up? People remember YOU”; “Control your team or your team controls you”; “My standards, my rules, my kitchen”. (Those last two are Romero quoting Ramsay; I believe the others are her descriptions of what she saw.)

This is a mindset that I have zero interest in: I want nothing to do with command and control, and I want nothing to do with team structures consisting of one guiding light and other people whose job it is to implement that person’s directives. And there’s an undercurrent of fear mixed into that egotism that I think is unwarranted on both counts: I simply have no idea who the chef is in, I believe, any of my favorite restaurants. Admittedly, I do not generally patronize restaurants that have been awarded Michelin stars, but I’ve been to one or two, and I don’t think that would make a difference in my awareness of the chef’s name unless the chef decided to engage in self-promotion. For games, it is more common (but by no means universal) that I can name the lead designer of my favorite games, but even so: my focus is on whether the game is good; the designer is an afterthought.

So no, people won’t remember you, they’ll remember your work. And not your (singular) work but your (plural) work: the work that the team that you are part of produced. As I belatedly said above: great works are what’s important, great people are a secondary concept.

And yes, great works will (usually!) have a strong, coherent vision at their core. And yes, having that vision come from one person is one way to get there. But what’s important is that the vision is shared and made real by the team; and, as a programmer and in my prior life as a mathematician, I have a lot of experience working with visions that feel stunningly real because they’re a fundamental part of how the world works, or of how our shared conception of the world works. So we can all work together to understand what zeta functions really are, just as we can all work together to understand what simple design really is. And there are tools to let groups of people express and produce works of shared beauty; groups don’t have to invent that from scratch.

Romero does not, fortunately, spend all of her talk embracing the Ramsayan end of this spectrum: I don’t believe, for example, that she thinks that game designers should be dictating the details of how programmers write code to support the game’s vision. And, once I got past my revulsion at the command-and-control aspect of this message, there’s a part of her message that I liked rather more. For your team to produce something great, your team has to do great work, and that won’t happen if you don’t feel responsible for making it happen. In Romero’s narrative, the “you” is a single person in charge of the team, but she also talks about trusting and helping your coworkers to do great work; in my version, it’s everybody’s responsibility, but that most definitely does not mean letting greatness devolve into being nobody’s responsibility. Instead, we all need to work together to figure out what great work means, to do great work ourselves, and to help others to do great work.

Food, Games, and Software

Romero is a game designer, and she talks about chefs. I am neither; and, listening to the talk made me wonder if those two fields are related in a way that programming, or at least the sort of programming that I do, isn’t. Both of those fields are, in large part, about crafting experiences: in fact, she goes out of her way to talk about how the best restaurants (at least when looked at through a Michelin lens) spend time on the experience of dining there writ broadly, not just on the food. Everything is there because it has a reason to be there, everything is done with intent, with focus, with care and craft.

That last sentence is also characteristic of great programs. But it’s a characteristic that’s only visible from the point of view of somebody working on the program; writing a program that way has an effect on the experience of somebody using the program, but that effect is not direct.

Of course, programs have an experiential component as well, and this aspect of greatness makes sense in that context as well; and that leads to a form of greatness that is directly analogous to what Romero talks about in food and games. (Indeed, given that much of her work is on video games, she is talking in part exactly about this aspect of great software!) But, returning to the previous section on teams striving together for greatness: a cross-disciplinary team striving together for greatness is going to be focused on that experiential side of greatness instead of the internal side of greatness, because that experiential side is something they can all perceive and affect.

As a programmer, which do I care about more? I care about them both, of course, and they’re related. By writing great software as measured through the internal lens, I can affect its external greatness in a couple of ways. One is that well-crafted software is, in an important sense, unobtrusive to the user: it responds quickly instead of making the user wait, it is consistent instead of imposing a cognitive load, it doesn’t crash or have bugs. And another is that well-crafted software is responsive to the needs of people who are designing that experience: as somebody like Romero is experimenting to try to tease out the core and then refine the details of a vision, great programmers can help by producing software that they can adapt as quickly as possible (or even provide hooks to let designers adapt it themselves) to actively help that process.

As I said above, though: I’m a craftsperson at heart, and so my focus is internal. But one of the aspects of agile that I’ve internalized well is the desire to write code in order to meet real user needs and desires, and to enable quick experimentation to discover how to best meet those desires. So I would prefer to be part of a company that wants to write great software to deliver a great experience, and if a company fell down too far on either measure of greatness, I wouldn’t join it. Having said that: my bar on what I’m willing to consider on the programmer craft side of things is quite a bit higher than my bar for the user experience side of things.

Obsession and Time

I don’t think I’m obsessed enough to produce really great work. Which isn’t to say that I can’t get pretty obsessed at times: over and over again, I’ll dive into some aspect of learning (frequently but not always software-related), read the most important books on the topic, dive into discussions on the topic, experiment on the topic, and repeat until I feel I’ve internalized something at the core of that topic. But listening to Romero’s talk (this one and others): I’m not as obsessed with programming as she is with games. Also, my obsession quiets down when it gets to the stage where I feel like I understand what’s going on in some area: my compulsion is to build a world view, not to create. (And, in practice, being a craftsperson is where I end up: in the middle.)

There’s another question here, though: totally aside from obsession, how many hours are you willing to put in? Her talk refers to crunch as a fact of life in the game industry; it’s not a fact of life, and I work to make it not part of mine. I’m honestly not sure to what extent my refusal conflicts with greatness: part of extreme programming is the claim that putting in more than about 40 hours a week is actually counterproductive over the medium term, because it dulls the brain and you start writing worse code. It’s clear that there’s a value of N where working more than N hours a week is counterproductive if your goal is greatness, and there are industrial studies suggesting that productivity maxes out at around 40 hours a week.

And I mostly buy that cap of 40 hours, but not completely. For example, in Chapter 38 of The Cambridge Handbook of Expertise and Expert Performance we have the claim (in a section studying violin students) that

All groups of expert violinists were found to spend about the same amount of time (over 50 hours) per week on music-related activities. However, the best violinists were found to spend more time per week on activities that had been specifically designed to improve performance, which we call “deliberate practice.”

And a cap a little above 50 hours feels more right to me than a 40 hour cap. But in a context of trying to produce great work, it raises some caveats:

  1. That study is about learning, not about producing. Admittedly, doing great work is going to involve learning even as part of the production of that work; in fact, maybe it’s impossible to do great work without learning all of the time. (Though the converse is certainly not true: novices are learning but not producing great work!) But still: that study is measuring something different.
  2. The part about deliberate practice is super important. To me, this dovetails fairly well with a striving for greatness: part of doing great work involves being deliberate about what it means for work to be great, and Romero discusses more than once in her talk the importance of having your colleagues look over your work, which dovetails well with the importance of having a coach in deliberate practice. Maybe we should take a lesson from etymology here: great work requires deliberate practice, where by “practice” we return to the meaning of “do” or “act”.
  3. If we go with 50 hours, then I’m not sure what the texture of those 50 hours is going to be, but I’m almost positive that it’s not going to be 10 consecutive hours a day, five days a week. (Or 8 hours a day 6 days a week, or what have you.) Certainly during the times when I was (quite effectively) trying to become an expert in a subject, it would pop up in my life much more broadly than that: for example, Liesl and I had a habit on vacations where we’d be going through rooms in a museum, I’d go a little faster so I’d get a few rooms ahead of her, and then I’d sit down on a bench and read more in one of the math books I was working through. And, actually, when I say I don’t put in the hours, maybe I’m underestimating that: I only put in 40 hours a week (in a standard 8 hour + lunch x 5 configuration) sitting at my job, but I think about my work quite a bit at home, and the very act of writing this blog post is another part of my deliberate practice at getting better at my work. The flip side, though, is: I am not trying to do great work during most of those 40 hours that I do spend at work. So I should probably focus on improving that last bit!
  4. Even if producing and sustaining expert performance is most likely to come from working 50 hours a week, it absolutely does not mean that working 50 hours a week is at all likely to produce expert performance. The vast, vast majority of time, working long hours just means shoveling more crap; I have no doubt that that’s what’s going on almost all of the time when companies ask employees to put in crunch time.

When I put this all together, to me it leads to two recommendations:

  1. First, focus on being deliberate about producing great work. Constantly ask yourself and others how your work could be better, how your processes could be better, what the goals are that you should be striving for in the first place.
  2. Second, listen to your energy level. Producing something you’re proud of, even on a small scale, can be very energizing: at its best, doing great work can lead to a feedback loop where you have more energy to do more great work. But once you push yourself too hard, your work starts to dull; pay close attention to that shift.

I think that second point is where Romero’s obsession gives her a big edge: thinking about games and working on games clearly energizes her. I make a different set of choices, ones that are probably more similar to Johanna Rothman’s.

The Triad of Constraints

When producing something, you want to do it quickly, cheaply, and well; the Triad of Constraints claims that you can pick two out of three at best. To which Romero’s answer is refreshing: fuck picking two, just pick one, make it great.

As she also acknowledges: this can work if you’re producing your own games on your own time; when you’re working as part of a business, telling the people who control the budget that you’re going to ignore speed and cost doesn’t work so well.

I’m not sure that that works so well for me personally as a programmer, though. My focus is on evolving software through as small steps as possible, with an external Product Owner prioritizing the customer-visible features. That means that, at any stage, I want to have written software that’s as good as I can have written in that amount of time, while preserving the ability to continue to do so in the future.

So I’ll alter the triad in the opposite direction, by picking all three. I’m very self-centered, so from my point of view, the cost is generally fixed: it’s my salary, and I’m not going to magically produce twice as much work, or work that’s twice as good, if you pay me more. And I certainly agree with Romero that I want to produce great work. And then the scope is what it is: you’ll get a different product if you ask for the best I can produce in a week than the best I can produce in a year, but in any case you can pick the scope however you want. Or, to put it another way: the Triad of Constraints implicitly assumes that you’re making choices up front instead of evolving; why would I want to do that?

Of course, I’m just punting certain decisions over to a Product Owner; Romero is more the Product Owner herself. That’s a version of the control aspects I discussed above that I’m much less dubious about: deciding on the sequence and details of user-facing features is an important role, no question.

Works and Creation

She has a comeback to my evolutionary design boosterism: she has no patience for the concept of the Minimum Viable Product, whereas to me it seems like an obviously good step in an evolutionary design.

But I’ve spent my entire professional career on software that is designed to be used and to grow over the course of years, even decades. This is very different from a more traditional sort of creative work, where you release a work into the world, let people experience it as a whole, and move on to producing your next work.

And I’m not nearly as convinced about Minimum Viable Products or evolutionary design in the creative work arena. When I’m reading a book, I don’t want to start by reading a minimal version of that book one month, then reading a slightly more fleshed-out version a couple of months later, then reading a third version that retreats in some areas based on user feedback and moves in a different direction: I just want to pick up a book and read it. And the same goes for games, much of the time, though admittedly less universally these days.

This doesn’t mean that evolutionary design doesn’t work in a context of polished creative works: you can still produce them iteratively, you can still solicit feedback from a trusted close circle at frequent intervals. And, as she says: “what if I made something as good as I possibly could every frigging day?” That’s one of the lessons she learned from Jiro: he ships every night.

So we’re returning to what I said above: work in small steps without sacrificing quality. I combine this with handing scope decisions off to a third party; she is in charge of scope, and she works in an industry where the scope that you choose for a product when it is released externally is a crucial decision.


Conclusions

Or at least next steps: I like evolution, after all!

One is that I should work harder to be doing my best during the times when I am working on something. If I’m spending the time on something, why not spend the time being focused and doing the best work I can? If I’m not going to do that, it’s probably better to not spend that time: instead, spend the time in a way that lets me get my energy back so that I can focus later.

And the other is that I should seek out greatness more. I’ve worked with one person whom I consider unquestionably great; or rather, I worked in a startup that he cofounded, and we rarely interacted at all. But, even so: those few interactions were incredibly energizing. (I was talking about those interactions with a friend of mine a couple of months ago; she said she’d rarely heard me sound so excited.) I should try to find more of that; I should try to deserve being around more of that.

returning to bioshock

June 14th, 2014

After my unpleasant experience with System Shock 2, I moved on to BioShock. I wasn’t worried that I might have the same problems with BioShock that I had with System Shock 2: I remembered from my prior experience that BioShock took the Easy difficulty setting seriously (enough so that I was thinking of trying Normal on the replay), and the RPG aspects were dialed down and didn’t allow for the same sort of missteps I’d made in System Shock 2.

As it turns out, though: I stopped playing BioShock after the Medical Pavilion level. Not because the game was too hard (I made it through okay on Normal, certainly more easily than I did with System Shock 2 on Easy), but for narrative reasons.


Which is a pity, because there were two aspects of the game that were flat-out amazing, one grand and one a little more localized. The grand aspect was the setting itself: the idea of an underwater city, the execution of the architecture (both in its original and ruined aspects), the music and sound design, etc. And the localized aspect was the idea of a cubist plastic surgeon: that’s a wonderful concept to build a level around.

I would have loved a game that went all in on those aspects. Given those two elements, probably the most natural way to flesh them out would be as a slowly paced horror game: one with enough breathing room to let you drink in the environment, but that still lets Dr. Steinman and subsequent characters show through in all their glory. And, of course, the actual game does contain horror aspects; but there’s just too much shooting of guns or plasmids, too much hacking of turrets and health stations, too many Vita-Chambers for the horror game to have any conviction. Basically: there’s a part of BioShock that wants to be an RPG with class choices, that wants to be Deus Ex, and that part wins out over the proto horror game.

Or, indeed, over any other potential realization of the game that would leave you more room to drink in the mood and setting. If only games would learn from Shadow of the Colossus that it really is okay to leave space…


Still: that alone wouldn’t have been enough for me to stop my playthrough. What really got to me is the treatment of the Little Sisters and the Big Daddies. I said more about this in my first playthrough of the game, but: the entire treatment of the Little Sisters is awful. When you meet a small child that you’ve never seen before, the two choices that go through your mind should not be “should I kill this child or should I use this magical shiny thing I’ve been given to perform surgery on the child despite her screams of protest?” Now, admittedly, this sort of iffiness isn’t without precedent in video games: it’s also the case that, if you happen to find yourself in a strange location and come across a gun, then you should not use that as justification for mowing down everybody you meet! But at least that choice has history normalizing it in a video game context, and at least you’re being attacked, so you can reasonably consider yourself to be in a “kill or be killed” situation. Whereas with the Little Sisters, the game forces you to commit child abuse, and then has the gall to present one form of that child abuse as the “good” choice.

That’s bad enough, but it then follows it up with a Big Daddy encounter. And here, the situation gets, if anything, even worse. Again, people: if you’re in an unfamiliar, dangerous location, if you meet a small child wandering around, and if you meet an adult whom that child clearly knows and loves and who is protecting that child (and doing so remarkably capably, given the extreme danger of the environment), then the correct choice of action is not to kill that adult. The correct choice of action is almost certainly to treat it as none of your fucking business; if, instead, you decide to treat this as some sort of clever environmental puzzle encouraging you to figure out how to use the many tools at your disposal to dispatch the protector most efficiently, then you are a monster.


So no, I really wasn’t in the mood to go further with BioShock after the end of the Medical Pavilion. I’m willing to consider the idea of playing games where I’m a monster, though honestly I would generally far rather not. I’ve got a lot of respect for what I’ve heard about Far Cry 2 or about Spec Ops: The Line; but those games put you in a much more self-consciously morally complex situation than my reading of BioShock does, and they don’t have the player being actively complicit in child abuse as their main theme. Having said that, the Little Sisters aren’t even the main overarching plot aspect of BioShock; maybe those other plot themes are reason enough to go forward?

I didn’t go forward, so I can’t say for sure, I’m just basing the following on my memory of my first playthrough. But my memory says this: the overarching theme basically comes down to two things. One is a poisonous presentation of father/son dynamics: arguments about whether the father gets to tell the son what to do, or whether the son gets to do whatever he wants, killing the father in the process. And the second is, of course, Objectivism.

And, well, fuck that too. Both of these basically boil down to the same thing: man-children who are fighting among themselves about who gets to have their own way, with the rest of the world as collateral damage. And that fits in with the whole Little Sisters / Big Daddy treatment, too: women and children are subhuman pawns for those man-children to use and dispose of as they wish, and men who try to build relationships and families are slightly more worthy of respect (they’re men, after all, and if they’re successful in a role of protector then at least they’re participating in the fight) but ultimately need to be destroyed.

If this were satire, it could be a depressingly biting portrait of certain aspects of society. (Including, I suspect, the AAA game industry; I’ll throw Silicon Valley startup culture into the ring, too.) But it sure doesn’t read that way to me: the game isn’t a pro-Objectivism presentation by any means, but the game structurally buys into enough of Objectivism’s conceptual prerequisites that, well, see above.


So: no more BioShock for me. I’m curious about Minerva’s Den, but not curious enough to dip into BioShock 2. (And I’m very glad that people involved in that game have moved in a different direction.) Everything that I’ve read about BioShock Infinite makes me think that that game would drive me crazy as well: a glorious environment combined with way too much shooting and an offensive and hamfisted treatment of narrative themes.

Instead, I went through Monument Valley as a truly lovely palate cleanser, and then started a replay of the Phoenix Wright games. And that was absolutely the right choice.

medium: browserify

June 10th, 2014

There’s one problem with the way I first set up my build system for Medium: I had no control over how the CoffeeScript files were ordered. In languages with linkers, this isn’t a big deal: within a library, the linker will resolve all the references between object files at once. But without a linker, ordering becomes more of an issue.

Actually, in CoffeeScript or JavaScript, it’s not that much of an issue: in fact, for small projects you can get away with ignoring it entirely. It’s fine for methods in one class to refer to another class that hasn’t been loaded at the time the first class is defined: as long as the second class exists by the time the method actually runs, you’ll be okay. So that means that the only real issue when starting off is making sure your entry point gets run after everything else has loaded; that’s a one-off case that’s easy to deal with manually. (You can just inline the entry point code in the HTML file, for example.)
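To make that concrete, here’s the idea in plain JavaScript (the class names are made up for illustration):

```javascript
// Greeter's method mentions Helper, which is defined further down the
// (conceptually concatenated) file; the name is only looked up when
// greet() actually runs, so the ordering is harmless.
class Greeter {
  greet() {
    return Helper.decorate("hello");
  }
}

class Helper {
  static decorate(s) {
    return "*" + s + "*";
  }
}

// By the time we call greet(), Helper exists:
console.log(new Greeter().greet()); // "*hello*"
```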


Having said that, just clobbering everything together like that felt a little distasteful to me; and there also turned out to be two practical issues. The first is that Mocha, the unit test framework I used (which I promise I’ll talk about soon!), didn’t use the browser model of sticking everything in global variables: it used the Node.js concept of modules. I actually spent a couple of weeks ignoring that mismatch, writing code that worked in both realms by checking to see if the Node.js variables were defined, but in retrospect, that was silly: the point of this blog post is that doing things the right way is easier than that workaround.

And the second practical issue is inheritance: if class A inherits from class B, then the browser really does need to have seen the definition for class B before the definition of class A. To get that right, I needed a dependency structure; and doing that by hand would have crossed the line from silly to actively perverse. So I looked around, and found that browserify (in its coffeeify incarnation) was what I wanted.
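The inheritance constraint is easy to see in plain JavaScript (again, hypothetical class names):

```javascript
// `extends` evaluates its superclass expression at definition time, not
// at call time, so Animal must already be defined when Dog's definition
// runs.
class Animal {
  speak() {
    return "...";
  }
}

class Dog extends Animal {
  speak() {
    return "woof";
  }
}

console.log(new Dog().speak()); // "woof"

// Swapping the two definitions would throw a ReferenceError the moment
// the file loads; that's exactly the ordering problem a dependency-aware
// tool solves for you.
```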


First, a brief introduction to the Node module system. When you define what looks like a global variable in a Node source file, it doesn’t actually get stuck in the global namespace: the namespace for that file is local to that file. But Node provides a special exports variable: if you want to export values, attach them to that. For example, if I have a file runner_state.coffee that defines a RunnerState class, I’ll end the file with

exports.RunnerState = RunnerState

That last line still doesn’t stick RunnerState in the global namespace: there’s actually a special global object you can use for that, but you generally don’t want to do that. Instead, if another file wants to refer to that RunnerState variable, it puts a line like this at the top:

{RunnerState} = require('./runner_state.coffee')

The return value of the require() call is the exports object for that file, and I’m using CoffeeScript destructuring assignment to get at its RunnerState member. Once I’ve done that, I can refer to RunnerState elsewhere in that file. (Incidentally, in some situations you don’t need either the leading ./ or the trailing .coffee in the argument to require(), but I found that using both worked best with the collection of tools I was using.)


So, that’s the Node.js module system: a nice way to avoid polluting the global namespace and to express your object graph. It comes for free in the Node ecosystem, and all I wanted was to bring that over to a browser context. And that’s where browserify comes in: it lets you write code like it’s Node modules and then it transforms it into a format that the browser is happy with.

To cut to the chase, here’s how to get it to work. Start with the build system from last time. Then install browserify and coffeeify, plus the grunt plugin:

npm install --save-dev browserify coffeeify grunt-browserify

In your Gruntfile.coffee, replace the grunt-contrib-coffee requirement with a grunt-browserify requirement, and replace the coffee block with a block that looks like this:

    browserify:
      compile:
        files:
          'js/medium.js': ['coffee/*.coffee']
        options:
          transform: ['coffeeify']

Also, in your default task, you’ll want to invoke browserify instead of coffee.


Here’s the resulting file:

module.exports = (grunt) ->
  grunt.initConfig {
    pkg: grunt.file.readJSON('package.json')

    browserify:
      compile:
        files:
          'js/medium.js': ['coffee/*.coffee']
        options:
          transform: ['coffeeify']

    sass:
      compile:
        files:
          'css/medium.css': 'scss/medium.scss'

    watch:
      coffee:
        files: 'coffee/*.coffee'
        # rebuild via browserify now that the coffee task is gone
        tasks: ['browserify']
        options:
          spawn: false

      sass:
        files: 'scss/*.scss'
        tasks: ['sass']
        options:
          spawn: false
  }

  grunt.loadNpmTasks('grunt-browserify')
  grunt.loadNpmTasks('grunt-contrib-sass')
  grunt.loadNpmTasks('grunt-contrib-watch')

  grunt.registerTask('default', ['browserify', 'sass'])

Now, if you run grunt, you’ll build the output JavaScript file (js/medium.js in this case) like before, but with separate input files treated as separate modules! Which, of course, means that it won’t actually work until you go back through them and add require() and exports in appropriate places.

medium: setting up a build system

May 31st, 2014

After I set up Medium, the next thing I did was start writing code and unit tests. And I will write about unit tests in a couple of posts, but I want to jump ahead one stage, to a build system, because that was something that required workarounds almost from the beginning and turns out to be easy to set up if you know how.

Because, of course, if you’re using CoffeeScript and SCSS, you need a preprocessing stage to turn them into something that a browser is happy with. If you have a single CoffeeScript source file, then running the coffee command is not too crazy, but what if you have multiple source files? You don’t want to run coffee on each of them individually, and you don’t want to have to load each of the outputs individually into your HTML file (or at least I don’t!). The coffee command actually has a --join argument to handle this, so you can certainly work around this manually, but this is definitely getting to the stage where a C programmer would say “I would have written a short Makefile by now”.


In JavaScript land, though, you probably don’t want to use Make; there are various options for build tools, and the one I chose (which seems to be the most common?) is Grunt. To get started with it, you actually want to install the grunt-cli package globally instead of putting it in your package.json file:

npm install -g grunt-cli

This makes the grunt command available, but the smarts are all in the grunt package plus whatever plugins you use. Those you install via npm install --save-dev; a good place to start is

npm install --save-dev grunt grunt-contrib-coffee grunt-contrib-sass

Grunt’s configuration file isn’t in some custom language: it uses an internal JavaScript DSL for configuration. And you can configure it in CoffeeScript, too, which is of course what I did. So here’s a basic Gruntfile.coffee:

module.exports = (grunt) ->
  grunt.initConfig {
    pkg: grunt.file.readJSON('package.json')

    coffee:
      compile:
        files:
          'js/medium.js': 'coffee/*.coffee'
        options:
          join: true

    sass:
      dist:
        files:
          'css/medium.css': 'scss/medium.scss'
  }

  grunt.loadNpmTasks('grunt-contrib-coffee')
  grunt.loadNpmTasks('grunt-contrib-sass')

  grunt.registerTask('default', ['coffee', 'sass'])

Pretty self-explanatory. (I have a bunch of CoffeeScript source files but only one SCSS file; eventually I may have multiple SCSS files, but even then I should be able to use includes to get a single entry point.) And, with that in place, I just type grunt and it builds medium.js and medium.css.

Of course, it does raise the question of how all those CoffeeScript files get combined into a single JavaScript file and what to do if you want to have control over that combining; I’ll explain that in my next post. But for now, this works as long as there aren’t load-time dependencies between your CoffeeScript files, and it outputs a single JavaScript file to load from your HTML.


I actually prefer not to have to manually type grunt each time I want to rebuild: I like to have Grunt watch for changes and build things every time I save. To get this to work, install the grunt-contrib-watch package and add a block like this to the initConfig section of Gruntfile.coffee:

    watch:
      coffee:
        files: 'coffee/*.coffee'
        tasks: ['coffee']
        options:
          spawn: false

      sass:
        files: 'scss/*.scss'
        tasks: ['sass']
        options:
          spawn: false

Also, make sure to add grunt-contrib-watch in the loadNpmTasks section. If you do this, then you can type grunt watch in one of your shell windows and it will rebuild whenever the appropriate files change. And yeah, it’s a bit unfortunate that you have to specify the file globs twice, but only a bit; if that really bothers you, I guess save those file globs in variables? (We are, after all, writing in a real programming language here.)
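If you do want to factor the globs out, a sketch might look like this (the variable names are my own invention, and I’ve elided the sections that don’t change):

```coffee
module.exports = (grunt) ->
  # define each glob once, up front
  coffeeFiles = 'coffee/*.coffee'
  scssFiles = 'scss/*.scss'

  grunt.initConfig {
    coffee:
      compile:
        files:
          'js/medium.js': coffeeFiles
        options:
          join: true

    watch:
      coffee:
        files: coffeeFiles
        tasks: ['coffee']
      # ...and similarly, scssFiles in the sass and watch:sass sections
  }
```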


There’s one further change that

medium: setting things up

May 29th, 2014

As I said recently, I’m experimenting with writing a Netrunner implementation in JavaScript. I’m calling it Medium; here’s the first in a series of posts about issues I’ve encountered along the way.

Before I go too far, I want to thank two sources of information. The first is Bill Lazar; he’s one of my coworkers, and he’s given me lots of useful advice. (And I suspect still more advice that will be useful once the project gets more complicated.) The second is James Shore: just as I was thinking about starting this, he published a list of JavaScript tool and module recommendations that seems very solid.

Anyways: before starting, I’d made a couple of technology decisions, and they were actually to not quite use JavaScript and CSS: both are solid technologies to build on, but both have annoying warts that I don’t think are worth spending time to deal with. So, in both cases, I’m using languages that are thin wrappers around them: instead of JavaScript, I’m using CoffeeScript, so I don’t have to worry about building my own class system or explicitly saving this in a local variable when I’m passing a function around. And instead of CSS, I’m using Sass (or, specifically, SCSS): when writing CSS, you find yourself repeating certain values over and over again, so having a macro layer on top of CSS can really improve your code. Neither of these languages frees you from understanding the language that underpins it, and neither requires you to learn many extra concepts beyond what the base language provides: they just automate some common tasks.

(Incidentally, once my CSS gets more complicated, I’ll probably start using Compass as well. I haven’t felt a strong need for that yet, and it’s possible that what I’m doing is simple enough that I won’t actually need Compass, but it seems like the next step once I start feeling that even Sass is too repetitive for me.)


This meant that I needed to install those tools. I won’t go into the details of installing Sass: basically, you need Ruby + RubyGems, both of which I already had lying around, and both of which are entirely tangential to this series. (If you’re on a Mac and aren’t already a Ruby developer, then probably sudo gem install sass will do the trick.)

CoffeeScript, though, requires Node.js and npm, both of which I was going to need anyways and neither of which I had detailed experience with, so I’ll talk about them a bit more. On my Mac, I used Homebrew for both of those (if you install Node with Homebrew then npm comes along automatically); on my Linux server, I used the Ubuntu-packaged version of Node, and I installed npm following the standard instructions.

I initially did a global install of the coffee-script npm module. But you really want to control that sort of thing on a per-project level, so you can specify which version of a module you want; npm lets you control that via a package.json file. There are lots of options that you can put in that file, and I imagine I’ll start using a lot more of them once I use npm to actually package up Medium, but for dependency management you can ignore almost all of them. So here’s a sample package.json file if you just want to use it for dependency management:

{
  "name": "medium",
  "version": "0.0.0",
  "devDependencies": {
    "coffee-script": "^1.7.1"
  }
}
Try putting that in a package.json file in an empty directory and then typing npm install. You’ll see that it installs coffee-script along with a package mkdirp that coffee-script depends on, and it puts them in a new subdirectory node_modules.

You can look at the docs for the version numbering if you want, but basically: ^1.7.1 means that it’s known to work with version 1.7.1, and later versions are probably okay. This is totally fine while I’m working on something for development; for a serious deployment, I’d probably want to pin things down more tightly, including specifying versions of packages pulled in indirectly.
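For example, dropping the caret pins an exact version (the version number here is just illustrative):

```json
{
  "devDependencies": {
    "coffee-script": "1.7.1"
  }
}
```

npm also has a shrinkwrap command for locking down the versions of indirectly-pulled-in packages, which is the sort of thing I’d reach for in a serious deployment.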

One nice trick: say that you have a new package that you want to start using. Then don’t bother looking up the version number and manually adding it to package.json; instead, just do

npm install --save-dev NAME-OF-PACKAGE

That will look up the current version of the package, install it, and add an appropriate line to your package.json file. That way you can start using the latest and greatest version of the package and get it working, and you’ve saved a record of which version worked for you.

On which note: you of course want to check package.json into version control. For now, I’m putting node_modules in my .gitignore file; if I get to a situation where I’m serious about deployment, then I’ll want to have a way to get access to node_modules without depending on external sources for that, but even in that situation, storing it in the same git repository as the source code is the wrong approach (because of repository bloat). For a personal project just for fun, ignoring node_modules is totally acceptable.
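Concretely, the ignore entry is a single line in .gitignore:

```
node_modules/
```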


So, with that in place, I can compile CoffeeScript files by invoking node_modules/coffee-script/bin/coffee. Which is what I did initially, but I got a more formal build system in place fairly soon; I’ll talk about that next.

men, women, programming, culture

May 25th, 2014

So, a couple of weeks ago, a prominent programmer / writer wrote a post whose driving metaphor was: frameworks are bad because it’s like one woman having many men sexually subservient to her, whereas the way things should be is for one man to have many women sexually subservient to him. People complained, he apologized and rewrote it without the metaphor in question.

Last week, another prominent programmer / writer tweeted a picture of some custom artwork he’d commissioned. That artwork showed silhouettes of a woman posing in a sexualized fashion, holding guns as if they were fashion accessories, with those silhouettes serving as shooting range targets. The artist has produced quite a lot of works on that theme, it turns out; his statement says “We are, all of us, Targets in one way or another.”


After this last weekend: some of us are a hell of a lot more targets than others of us. As the artist says, “None of us are exempt from exposure to these fixed cultural elements of our existence, or the means by which they attempt to impose their will upon us”, but that imposition takes radically different forms in different circumstances. He says that “[I] ask my audience to interpret each piece for themselves so as not to be hindered or influenced by my intentions”; the interpretation that I’m coming to right now is that men’s conception of gender roles in this society is super fucked up; that manifests itself in many ways, along a continuum of severity; and that I don’t see the software development community as a whole to be particularly at the innocuous end of that continuum.

Another prominent programmer / writer tweeted: “Seems to me we (again) review ideas for political correctness before considering the ideas themselves. I’m not sure that’s good.” Which raises the question: good for what? If your sole objective is to try to become as good a programmer as possible, then focusing exclusively on ideas and ignoring metaphor, subtext, social context may be a good strategy. I’ve frequently been in that situation myself, and I’ve learned quite a lot about programming from all of the programmers mentioned here. (Though if their books had been full of harem metaphors, I’m not nearly as confident that that would have been the case.)

Becoming a better programmer isn’t my only objective these days. There are a lot of problems in this world, a lot of directions along which to try to improve; programming ability is one of those directions, and I still have a huge amount to learn in my struggle to become a better programmer, but there are a lot of other issues that I struggle with, that I have a huge amount to learn about as well. And I think some of those other issues might even be a bit more important.

netrunner implementation experiments

May 22nd, 2014

GDC got me in the mood to do some game-related programming; and, when that mood didn’t go away after a couple of weeks, I started to spend some time thinking about what exactly that would mean. I’d thought initially that maybe I’d learn how to use Unity, trying to implement one or two game-related tech experiments I had in mind. But a lot of my game playing these days is in the form of board or card games, and some of those ideas were starting to pull at me a bit more; Unity’s 2D support has apparently gotten significantly better recently, but when I looked at some of their 2D demos, it was still intended for physics-based games, which isn’t so relevant for most aspects of board games.

And, thinking about it a bit more: I can probably just do a card game or a board game in HTML / CSS / JavaScript. (Not even pulling in the canvas stuff: I’m perfectly happy to represent a card as a div.) Which has huge advantages in terms of experimentation: I can work on it wherever, people can run it wherever, and it’s a super-easy way for me to get going.

It does mean that I won’t learn Unity, which is too bad. But the flip side is that I can use this project to get up to speed with a lot of other technologies: it’s been over three years since I’ve seriously programmed in JavaScript, and that code base was out of date and badly written even at the time. So this could be an excuse to learn about CSS3, and to learn about more of the JavaScript ecosystem (which is continuing to grow like crazy).

Also, while I’ll start out with an implementation just in the browser, I’ll want to add a server-side component fairly soon. And I can do that in JavaScript, too: if I use Node.js then I can move my business logic code from client to server side, or use the same code in both places as appropriate. (Thinking about that will also give me a good excuse to separate business logic from presentation, which is always a plus.) I’ve never used Node, but it’s certainly on the list of technologies that I’m interested in.

And there’s a subtext of this that isn’t game-related: I imagine I’ll be at my current job for another year or so, but at some point I’m going to want to move on, so it’s not a bad idea right now to start thinking about ways to increase my options for a potential move. And brushing up on modern web technologies and learning about Node fit that bill quite well: I’ve worked as a backend developer in most of my jobs, but my guess is that I’d be happier in a group with more fluid roles, which means that brushing up my frontend skills wouldn’t be a bad idea, and I can also certainly imagine working professionally with Node in the future. Also, just building a full project from scratch is always educational.


So: the plan is to write a board game or card game using non-canvas JavaScript in the browser, with Node as an eventual backend. But that leaves out one very important aspect of this: figuring out what the game will actually be. If I had lots of card game ideas written down, I’d probably pick one of them; as is, though, I don’t, and I suspect that I’ll spend enough time playing with technologies, at least initially, that I won’t want to spend a lot of time on game design ideas.

So that, in turn, suggests reimplementing a game somebody else has written as an exercise. Yes, I’m quite aware of the problems around cloning, but that’s not an argument against doing something as a private experiment. (Think of this like an art student making copies of works in a museum.) And, when I phrase the question that way, an obvious candidate comes to mind: Netrunner. The game’s rules are more than complex enough to teach me a lot about the tradeoffs in the domain implementation side, it raises a lot of interesting questions about interaction models, and the only current electronic implementation that I’m aware of is one that I won’t be tempted to copy the details of. So it seems like a good place to start; I’m pretty sure that, once I’ve gotten a basic implementation of the game working (one identity on each side from core set cards, say), I’ll have learned a lot and will be able to take that learning in a lot of different directions.

What I’m not at all sure of is how long this will take: it depends on how much time I carve out for it, it depends on how much I need to learn, and of course the Netrunner rules have a lot of special cases, even in the core set. I wouldn’t be blogging about it at all right now, except that I’ve already learned a lot from the experiment: I’ve probably missed four or five good blog posts by not blogging about it from the start. I’ll try to recreate some of those, but still, it won’t be the same.

Netrunner initial placement experiments

For reference, here’s where I was earlier today (along with a corresponding view from the Corp side); I’ve been thinking about installation models and how to fit stuff on a not-excessively large screen. (Yay CSS transforms for resizing and for rotating Corp ice!) Once I get a little farther with installs, I guess I’ll try working on basic runs; that’ll be interesting…

And if anybody is designing a card or board game that you’d like a browser-based version of, let me know: hopefully in a few months I’ll have come to a reasonable stopping place on this experiment and I’ll be interested in using these technologies for something else.

system shock 2

May 14th, 2014

I’m planning to play through all the games in both of the Shock series this year; I had a quite good time replaying System Shock, but I’d never played System Shock 2, which seems to get talked about rather more. (E.g. I’ve seen comments claiming that BioShock is in many ways an inferior remake of System Shock 2.) So I was really looking forward to playing it; of course, I didn’t expect it to be as smooth an experience as BioShock, given its age, but I did fine with System Shock, which is even older.

As it turns out, I most emphatically did not do fine with System Shock 2. Not that I regret having given it a try, but I’m glad I gave up after going through the first two levels: it simply wasn’t working for me. Which is too bad, because it meant that I didn’t get to really experience the SS2 version of Shodan, or the lure of The Many, but trying to finish it would have driven me crazy.


I didn’t realize quite how much of a kitchen-sink game System Shock 2 is: it’s got significantly more going on than either its predecessor or its successor. There’s a skill tree that’s initially presented as a class system but where you quickly learn that you can cross classes; there’s a psi system; weapons degrade; inventory turns out to be even more pressured than in its predecessor, but with a way (hidden to me until I stumbled across it in a FAQ, though maybe I missed something) to expand it slightly by leveling up; there’s this chemical thing for unlocking buffs; and probably more variables that I missed completely. And all of that is on top of its predecessor’s FPS-combined-with-role-playing-inventory gameplay and its story told through environment, audio logs, and orders through loudspeakers. (With hallucinations added into the mix this time!)

So way too much stuff to be a focused game. Which is fine: I wouldn’t want all games to be that way, but I’m all for art that turns an ungainly collection of concepts into something unexpectedly magnificent. The thing is, though, I need to be able to actually play it without driving myself crazy.


I started off on easy (as I do in games like this), and I selected the psi path. I figured I’d be able to freeze enemies with the power of my mind, and I’d be able to whack them to death with a lead pipe. And, indeed, the lead pipe was there, as expected; what wasn’t expected was that the lead pipe was much less effective than in either System Shock or BioShock. That might not be a big deal, since I could freeze my enemies, except that freezing enemies used up psi power, which didn’t renew automatically and which was a more limited resource than ammo for standard weapons. And, when I was encountering enemies at the start, I couldn’t (if I’m remembering correctly) even fire standard weapons, because I would have needed to spend some experience at the start to level that up, and I’d spent the experience on other stuff.

So, basically, it felt like I was being set up for failure right from the beginning by making what seemed to me (what still seems to me in the abstract) to have been a perfectly plausible set of choices in my initial powers. Maybe I’m missing something there; certainly if I were better at playing FPSes on PCs then I would be better at dancing around enemies. (Though I get the feeling that the controls in this game are a lot clunkier than in normal FPSes; I missed when swinging with the pipe a lot more than I’m used to.)

Having said that: this being a Shock game, dying wasn’t actually so bad. There were vita-chambers to revive you, and saving and loading were fast enough, too. So I was optimistic that I’d start enjoying it more as I made it through the first deck: I leveled up so I could shoot guns, and it really wasn’t that annoying by the end. I wasn’t actually enjoying it much, and I was actively put off by having to shoot squeaking monkeys, but still: serviceable enough, and I felt like I was starting to get control of the game a bit and get past my loss aversion.


And then the next level started off by putting me in a radiation area: no getting comfortable here, and not uncomfortable just because of narrative and general spookiness, but uncomfortable because I was going to feel like I was always about to die, even playing on easy. But it wasn’t too long before I unlocked the next vita-chamber, so I could relax again.

Except I couldn’t. One big difference from its predecessor is that System Shock 2 splits each deck into multiple sections, and vita-chambers in one section don’t work in another. So I ended up having to go through a part with a new, significantly tougher robot enemy, where I couldn’t freely respawn. This meant that, instead of a grind of running through levels, killing some stuff, dying, getting revived, and making a bit more progress (though not as much as I would like, because some enemies respawned as well), I was instead reloading save games all the time and looking on nervously as what seemed like a very generous number of health packs disappeared surprisingly quickly.

I made it through that deck, started the next one, and decided that I just didn’t want to deal with the game any more. So I stopped.


Not what I wanted out of a game. There’s probably an interesting narrative there, but the game wasn’t letting me get to it. There are probably interesting systems there, too, but that wasn’t what I was in the mood for, and the game wasn’t structured in a way to let me play with those systems. (Our May VGHVI Symposium was FTL: I died all the time in that one, too, but that game was set up to let me learn its systems by running another experiment every hour, so I never had the frustration of feeling that my initial build had set me up for failure, or of wanting to reload because otherwise I wasn’t sure I’d get to the next bit of narrative.)

On to BioShock next. Maybe I’ll try that one on normal instead of easy: there is something that I would enjoy in the systems of these games, and that game showed that it understood what I was asking for when I did play on easy, so maybe it would also be more understanding if I expressed willingness to grapple with those systems? We’ll see…

blank screen starting octgn in wine

May 4th, 2014

I set up OCTGN on Wine on a new computer in preparation for this week’s VGHVI session; I was following these helpful instructions, which have worked for me in the past.

Unfortunately, I ran into a weird problem: OCTGN would start with its normal “Loading OCTGN” screen, but then instead of showing me the normal game window when that was done, it would show me a black rectangle.

I tried it out on the other machine where I’d previously had OCTGN installed that way, and I got pretty much the same symptoms. On that machine it took longer, though, since it spent some time updating OCTGN; and a popup briefly appeared that gave me a clue as to what was going on.

So, the short version: if this happens to you, edit the OCTGN settings (probably in ~/OCTGN/Config/settings.json) to set the property IgnoreSSLCertificates to true. Here’s a line to add:

"IgnoreSSLCertificates": true,

(or, if you put it at the bottom, then put the comma at the end of the previous line instead of that line).

Once I did that, OCTGN came up as expected; I haven’t actually tried playing a game, but I’m assuming that that works. (Though I brought my VirtualBox Windows installation up to date just in case…) But I figured I might as well write a post about it, in case it helps anybody else googling for solutions to that problem.


May 1st, 2014

Last time, I talked about free to play, a phrase I often hear linked with the term “whale”. The prototypical use goes something like this: free-to-play games make most of their money from a small proportion of whales, people who spend thousands of dollars that they can’t afford in order to buy useless items in those games, because they’ve gotten addicted to the game through techniques borrowed from the gambling industry.

And yeah, there’s some amount of truth to that. I’ve worked on a game that got some portion of its revenue from items that were intentionally priced in a fashion that makes it hard for people to calculate the cost of what they’re trying to buy, and that did so using intermittent reinforcement techniques. That is not a good thing to do; I don’t know if the company I worked at intentionally borrowed those techniques from the gambling industry, but it wouldn’t surprise me if there was some such borrowing in the lineage somewhere.

But that amount of truth is linked to a lot of other assumptions that I do not agree with, and that I think are misleading and harmful to the discussion.


I think the term “whale” comes with a lot of baggage, so here’s my attempt to define it in a way that avoids that baggage: “whales” are people who spend a disproportionately large share of the total money spent on a product, enough to significantly skew the entire profit structure for that industry away from the average buyer. I’m intentionally not mentioning free to play, gambling, or addiction: whales are simply your best customers, whatever that means.

And I cannot see why the existence of whales is in itself a bad thing. In order to minimize free-to-play game associations, let’s move away from games entirely. There are many people in this country who have played violin at some point in their lives (usually, I suspect, in school orchestras); my daughter is one of them. (As am I, for that matter!) She started taking violin lessons several years ago; she’s one of the better players in her school orchestra, in the top 10% as measured by her seating position.

Lots of people who have played violin have had that experience only through school: their lessons are in the school orchestra, and they borrow instruments from the school to play on. (You might call those violinists free-to-play violinists, were you so inclined; I don’t think I ever took violin lessons outside of school, and I played on a hand-me-down violin, so I was one myself.) But many people pay for lessons: we paid around $200/month for our daughter’s lessons, I’m sure there are many teachers who teach for less (especially in places other than the Bay Area), and there are also ways to spend a lot more money on your learning. (Tuition at Juilliard is currently over $36,000 a year, and there are also summer camps where younger students can spend thousands of bucks at a pop.)

As for the instrument, you might start out renting a cheap violin while you’re getting started or buy a violin that costs, say, $200. As you get more interested, you’ll get to where you can appreciate and benefit from better instruments; our daughter’s latest violin cost around $1000, for example. And, having spent some time in violin stores, I can assure you that they would be happy to sell us instruments that cost us tens of thousands of dollars, and professionals might consider spending hundreds of thousands of dollars on an instrument.


Who are the whales here? I don’t know for sure if that term applies at all, because I don’t know what the profit structure of the violin industry is like; I don’t even know exactly how to measure profit in a context of individual teachers giving private lessons. (It’s easier to measure it for violin stores, I’m fairly sure, but there too I have no idea what the numbers are.) But there are certainly lots of people who play violin who haven’t spent money on it, or people who have spent a little money on an instrument but haven’t taken years of private lessons.

So I don’t know if that makes my family whales, or if it makes sense to reserve that term for, say, students who end up getting into elite music schools, or even if the price structure is such that the term doesn’t apply at all. But that’s the lens I’m looking through when I read posts claiming that it’s immoral for games to allow players to spend thousands of dollars: I really don’t think the violin industry is being immoral by letting us spend thousands of dollars on it.

And, as somebody who loves video games: shouldn’t they be worthy of spending thousands of dollars on? Of course, many of us do that in aggregate: there are people out there who are happy to spend that much money on a gaming PC, or to spend a thousand bucks a year by buying a couple of $50 games a month. But shouldn’t we seek out individual games that are worthy of that sort of love? (Which I have; I’ve certainly spent thousands of bucks on go in various ways…)


Like I said above: there’s some truth in the complaints about whales. There’s something dirty about intermittent reinforcement combined with obscured pricing, for example. Though maybe there’s something dirty about intermittent reinforcement in general; I’m not sure random item drops are great even if you’re not spending money on them.

And, while I didn’t call it out above, another part of that complaint is about pay to win. This is certainly something I try to actively avoid: I’m sure Magic is a great game, but the last thing I want to do is spend money trying to get good cards. (Which is of course in part about the intermittent rewards.)

But the violin analogy makes me see pay to win in a good light. Some violins are better than others, and that quality has a noticeable correlation with price. And that’s a good thing: if we could magically duplicate the best violins, then that would probably be a better world, but we’re not in that world; in the absence of that, I’m glad that capitalism gives incentives for people to make better violins.

Moving from non-digital art to non-digital games, not all bicycles or sets of golf clubs are made equal, either. Here, though, the rationale for a quality race isn’t as clear: it doesn’t necessarily improve golf as a sport to let players hit the ball farther and farther simply by spending more money. But it also wouldn’t improve golf if pros had to play with a $200 set of clubs. So it’s probably best for the sport to have a quality cap imposed but to have the cap be generous enough to let the best players show as much of their art as possible.

Or then there’s go as an example: you can play go as well on a $20 go board as on a $50,000 one. So there, quality is purely an aesthetic question: I’m glad that aesthetic range exists.

Digital games are quite different in this respect, though: in a digital golf game, the best set of clubs and the worst set of clubs cost the same to produce. So yeah, paying more money for a simple stat upgrade isn’t good: it makes the game ecosystem worse. But it misses something important about the golf example: having access to more flexible tools lets better players express more of their art. So I’m a fan of systems like League of Legends or Netrunner where the game developers provide a basic ground level experience to everybody but also go out of their way to expand the possibility space in interesting ways and let you have access to that by paying for it. (And paying for it in a predictable way, unlike Magic.) As a game player, I want games to be as rich as possible; paying money to have access to more options within that space can be a very good tradeoff when executed well.

And hats or skins are the analogue of fancy go boards: that’s not pay to win at all, and I think their existence is great. Or at least great when it’s not linked with random item drops.


Pay to win points in another direction, though: paying to win is spending money to trade an intentionally worse game experience for one that is, at least superficially, intentionally better. And we see this in single-player games as well: it’s the energy gates in Facebook games that you can get past by waiting, spending money, or spamming your friends.

Games asking you to spam your friends is bad. I’m actually fine with the choice between waiting and paying to advance, within limits: that’s a form of paying for value. Still, as a player it’s certainly nice to pay a flat fee for a game and be able to explore it all I want.

But there are also a lot of fixed price games that still have gates! It’s grinding, it’s narrative games that throw wave after wave of enemies at you before letting you go through the story. And while I don’t think having games let you pay $10 to skip the combat in your favorite RPG is the best solution, not having the option to minimize or avoid the combat also isn’t great! I’m not advocating for pay to win in those contexts, but the questions that it raises are important ones: it at least asks what it would mean for a game to be responsive to different players’ different desires.


So yes: I don’t want to spend time in an ecosystem where pay to win is artificial scarcity that actively harms the structure of the game. But not all pay to win is that: if “pay to win” is either a difference that expands the possibility space for better players or that gives better aesthetics, then I like it a lot more. And what I like the most is a focus on the quality of the experience: that’s something that I want all games to care about, no matter their pricing structure.


April 20th, 2014

Threes! is both adorable and, I suspect, pretty good. A similar sort of combining mechanism to Triple Town, but with shorter games that fit into my day better, and a bit less aggressive randomness. (I gather a sign of being good at Triple Town is enjoying the bears, finding that you get a lot of money out of them; I’m not there yet.) I’m fairly sure there are still several layers of strategy/tactics that I haven’t yet uncovered, though it’s a little hard to say from where I’m sitting.

Not much to say beyond that. It’s, uh, a metaphor for code hygiene? I wish the loading times weren’t so glacial? The art / music / sounds / motion really is adorable? (Except that the last upgrade seems to have broken sounds / music for me, at least some of the time.) It’s good enough at sucking up time that I should probably move it off of my home screen?

free to play

April 10th, 2014

There was a fair amount of discussion of “free to play” at this year’s GDC; most of it negative (at least in the discussions I was part of), often extremely so, and often linked with the concept of “whales”. There’s some amount of that discussion that I agree with, but more of that discussion (and the moral judgments that come with that discussion) that I’m uncomfortable with, so here’s an attempt to tease out what I think.

One basic point of uncertainty I have is what people mean by the term “free to play”. For example, at some point I was talking with Jorge about The Walking Dead; you can play the first episode of each season for free, so does that make it free to play? On a straightforward reading of the term, I would argue that it does, but within the cultural context of the discussion of GDC, I think it doesn’t. Or at least that’s not the type of game that the GDC zeitgeist wants you to envision when you bring up the term: it wants you instead to think of games like Candy Crush. (Or League of Legends, which the zeitgeist likes rather more than Candy Crush.) Is there a way of thinking about the concept that illuminates those differences?

The term “free to play” strongly suggests that we should talk about pricing models in general. So, in hopes that that sheds some light on what the term might mean, here are some possible models you can use to think about how to set “the right” price for something:

  1. Price based on cost: set the price based on the costs that go into developing / maintaining the game, plus enough of a profit margin to get by.
  2. Price based on value: set the price based on how much value the purchaser of the game will get out of it.
  3. Price based on marginal cost: set the price based on the cost it takes to produce / maintain one extra copy of the game.
  4. Price based on misdirection: get as much money as you can from players, without concern for the players or the long-term health of your relationship with the players.

It seems to me that a lot of the discussion presumes that free-to-play games always fall into the fourth model: the assumption is that providing games for free is inevitably the first step in a misdirection play. It also seems to me that the third model is a fairly major player in the discussion; and it’s an even larger player in the (somewhat related) discussion around game cloning, because cloning is closely tied to decreasing the marginal cost for producing a game. And pricing based on marginal cost combined with a digital environment is really scary for GDC attendees: these are people whose livelihood depends on making games, so their jobs will vanish if price = marginal cost = $0.


The thing about that third model is: in a lot of contexts, it’s the most natural way to price products. If a product is a commodity, then multiple companies offer functionally equivalent versions of that product. And so people looking to buy that product will pick the one that is cheapest; so companies will struggle to offer that cheapest price, which gives them an incentive to push the price down as low as possible while still making it worthwhile to sell the product at all. In a physical goods context, what this frequently means is trying to lower your cost of production, leading to a pursuit of economies of scale and other production efficiencies; that’s brutal enough, but it’s even more brutal in a world of electronic distribution, where the marginal cost is a fraction of a penny.

But, as much as it sucks to be a game developer in that position, there’s nothing inherently immoral about that situation. As a consumer, I am very glad that most of the items that I purchase are commodities: that when I walk into a grocery store, I don’t have to worry about the exact value to me of a can of tomatoes or the exact cost of production for that item. Instead, I get a lot of benefit from the fact that there are a bunch of companies out there trying to win the commodity sales war, by finding more and more efficient ways to produce tomatoes to sell them to me for cheaper and cheaper amounts of money.

Don’t get me wrong: I realize that this commodity war has real human costs as well. So in particular, I support measures like minimum wage laws and environmental protections that lower those human costs (especially if they make them explicit to encourage competition in lowering them, e.g. carbon taxes), even though those measures may have the effect of increasing the marginal costs for all the producers of the goods, and hence the prices I pay. But I’m also really glad that I live in a world where most of what I need for my daily life is a commodity: it raises standards of living enormously.


This doesn’t mean that I support cloning in general: I suspect that that is one of those areas where artificially putting a floor on marginal costs is useful. For example, I’m not the biggest fan of copyright laws in the world, but if pressed I’ll admit that giving protection from copying an entire piece of software for a handful of years is as good an idea as I can think of. And I would never argue against anybody who has enough pride in their craft to be unwilling to clone. But I also think that some amount of cloning is extremely healthy: there are a lot of first-person shooters out there, there are a lot of match three games out there, and while those all look like clones from a distance, I’m glad that there’s enough room in the design space to allow Bejeweled, 10000000, Puzzle Quest, and Triple Town to all coexist.

So for me, the best solution to cloning is: find ways not to be a commodity. Which I realize is trite, even insultingly flippant, but I don’t have any other suggestions to offer that work with economics as I understand it. And this solution works in non-electronic contexts, too: sometimes, I just want a random can of tomatoes, but sometimes I want something that will taste noticeably better or work better in some particular culinary context, which sets up the possibility of getting out of the commodity space. Or at least sets up the possibility of market differentiation: there’s still going to be some amount of commoditization within each market segment, but if you can find a small enough segment to work in, commodity effects will noticeably decrease.

Also, before I leave the topic of pricing based on marginal cost, I want to link it to the fourth pricing model: because pricing based on marginal cost often turns out to mean having the perceived price be based on the marginal cost, when the actual cost can be higher. In the electronic goods situation, this means that something is labeled as free but has hidden costs in terms of time, in terms of advertising, in terms of monitoring. Whereas for physical goods, two cans of tomatoes may have the same sticker price on the shelf, but one of them may have higher costs in terms of damage to the environment when producing it, damage to workers’ livelihoods while producing it, damage to your physical health from consuming it. So yes, pricing based on marginal costs can be linked to pricing based on misdirection; of course, any of the other pricing models can also be linked with pricing based on misdirection, but if we’re talking about free to play, it’s hard to imagine how a producer motivated by profit (as opposed to, say, one motivated by sharing) will function effectively in a zero-marginal-cost commodity context without some amount of misdirection in pricing.


A lot of the people I see advocating against free to play are advocating for the first model: they want a world where buyers pay thirty or sixty or whatever bucks for a game, where sales of quality games within a genre aren’t crazy to predict, and where you can staff dev teams accordingly. And I can certainly see why most of the people at GDC would like that first model: they know they’re not likely to get rich off of a game, but they want to make a decent living off of their work.

There are two problems with this model, though. For one thing, speaking as a person who plays games: why should I pay $60 for a game if I don’t know if I’m going to like it? I’ll even ask why I should pay any money for a game if I don’t know if I’m going to like it, but if it’s cheap enough, I can’t say that I shouldn’t pay a few bucks out of curiosity; but I’m a lot more dubious at, say, traditional console game prices.

But, more importantly: according to my admittedly naive understanding of economics, this model simply doesn’t fit the real world. There’s no reason why the amount that people are willing to spend on an item should be directly tied to the cost of the item: if your competitor is willing to sell a comparable item for less than your cost to make it, then tough. Fortunately, that can cut both ways: if you can either increase the item’s value in a unique way or decrease your production costs in a unique way, then your profit selling the item can increase out of proportion to its costs! But, either way, it’s not an accurate way to think about the world.

And it is my feeling that this model also comes with a fair amount of misdirection. In a world of fixed non-trivial priced games, players need ways to decide whether to buy a game without playing it. This leads to large amounts of advertising, it leads to a games “press” that’s almost entirely about getting players excited about upcoming games to the benefit of publishers, it leads to attempts to constrain the number of games that are being talked about / actively played in a given time period (that’s part of fighting against commoditization), it leads to games that wear out their welcome so players are encouraged to move on to buying something else, it leads to design based on marketing bullet points rather than lasting value.


And speaking of lasting value, let’s talk about the second model: pricing based on value. This is my favorite model: when I’m buying a product, I’m happy to spend money if I feel that I’m getting something for that money (at least if my budget is doing well!); if I’m selling a product, I feel great if I’m getting rewarded for making the product more valuable. And, unlike the first model, this model actually does work: as long as the product you’re selling isn’t a commodity, then you absolutely can price it for significantly more than the marginal cost if your target market thinks it’s worth it. (Witness Apple’s success in keeping huge margins for its products, or Nintendo’s ability to create a single version of Mario Kart for a given console and sell it at a relatively high price for years.)

I mentioned The Walking Dead above; in my mind, it’s a great example of this model. Before you’ve started playing the game, it hasn’t proven its value, so they let you play the first episode for free. If you’re still not sure, then you can buy the episodes one at a time, dropping off whenever you decide the game isn’t worth it. If the first episode convinces you that the whole season is valuable enough to pay for, then the developers let you show your appreciation of its value by paying for the rest of the season sight unseen at a slight discount.

Android: Netrunner, my current obsession, is another example. It is admittedly not free to play, so it requires a leap of faith from the player at the start. But once you’ve decided to play that, the developers will continue to attempt to provide value by producing expansion packs, and it’s up to players to decide whether those are valuable enough to purchase. I basically think of the game as one with a $15/month subscription fee; and in my mind, it’s absolutely worth it. I don’t play League of Legends, but my understanding is that it’s got a similar dynamic, albeit one more tilted toward the player: you can play a huge amount for free, but once you get sucked in, there are many ways to pay money, to let you get more value out of the game (by letting you focus on a champion you like, to get a skin that you enjoy looking at or feel represents yourself better). Or, for that matter, to pay money just because it feels right to give money to a company that has given you hundreds of hours of value: whenever a game sets up that dynamic, it’s really doing things right.


I’m already a couple thousand words in, so I think I’ll defer my discussion of whales to another post. I guess my conclusion so far is:

  • Pricing based on value can work, and when it works, it’s great for both players and developers, creating games that are worth playing for years.
  • Pricing based on misdirection sucks: try to avoid doing that. (It’s the one aspect of my work at Playdom that I actively felt bad about: I felt that a lot of our pricing was just fine, but we had this concept called “crates” that’s based around people’s brains not being wired to understand probability.)
  • I don’t see how high fixed pricing works in a digital world without strict gatekeepers: otherwise, it gets swamped by commoditization forces.
  • Commoditization forces are scary for developers, and they even scare me somewhat as a player.
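To make the crates point above concrete, here’s a quick sketch of the arithmetic that our brains are bad at. The numbers are entirely hypothetical (not Playdom’s actual figures): a crate costs a couple of dollars and has a small chance of containing the item you actually want, so the number of crates you buy before a success follows a geometric distribution, and the expected spend is the crate price divided by the drop chance — usually far more than the sticker price suggests.

```python
import random

# Hypothetical numbers, purely for illustration.
CRATE_PRICE = 2.00   # dollars per crate
DROP_CHANCE = 0.05   # chance a crate contains the item you want

# Crates-until-success is geometric, so the expected number of
# crates is 1 / DROP_CHANCE, and the expected spend is:
expected_spend = CRATE_PRICE / DROP_CHANCE  # $40 for a "two dollar" item

def spend_until_drop(rng: random.Random) -> float:
    """Buy crates until the desired item drops; return total spent."""
    spent = 0.0
    while True:
        spent += CRATE_PRICE
        if rng.random() < DROP_CHANCE:
            return spent

# A quick simulation agrees with the analytic answer.
rng = random.Random(0)
trials = [spend_until_drop(rng) for _ in range(100_000)]
avg = sum(trials) / len(trials)
print(f"expected: ${expected_spend:.2f}, simulated average: ${avg:.2f}")
```

The gap between the $2 price a player sees and the roughly $40 they can expect to spend is exactly the kind of thing that’s easy to misjudge when probabilities are involved.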

what are apple’s language plans?

April 3rd, 2014

I spent my commute home today listening to John Siracusa and Guy English talk about how Objective C is getting long in the tooth. A topic, of course, that Siracusa has addressed a few times; as you would expect, it was a thoughtful discussion, and I’m glad I listened to it.

And I really am curious about the answer there. I’m no Objective C expert, but it seems like it’s going to be an issue at some point in the not horribly distant future. But a lot of the standard solutions that I’m used to seem potentially problematic given the way Apple, from the outside, appears to approach things: in particular, a solution that bakes a VM with full garbage collection into its foundations seems a little unlikely to me? I’m not aware of existing new languages that feel to me like they’d be a great fit for Apple, but that doesn’t mean much; I haven’t kept up well with modern trends in that area. Which could mean that Apple will do an Apple-style thing and invent their own solution; the question then becomes whether they have the language design chops to do an Apple-quality job of that. (Which I’m not at all convinced of.)

I dunno. The next time I’m on the job market, I should see if I have contacts over there who could hook me up with people who might be working on that. It could be a very interesting way to spend time, and one in which my own particular skills could find a useful role?

the wind rises

March 23rd, 2014

The Wind Rises is, I suspect, a very good movie; I won’t end up loving it in the same way as Spirited Away, but I probably will end up loving it more than Miyazaki’s films since that one, and the fact that it takes a less fantastical approach to its subject matter of course comes with strengths. I don’t have much to say about it as a whole yet, though I quite liked Ghibli Blog’s take on the movie.

What I do want to talk about, though, is one aspect of the surrounding discussion. The review that showed up in my local paper ends by saying that “But not addressing the way it was used and the war the country started so that it could use it just reminds us that Japan still dreams of denial, as far as World War II is concerned.” Or, on the blogger side, we have Tim Bray saying “But yeah, there’s a problem. What we have here is art that’s all about glorifying and romanticizing people who built killing machines that were put to use by a fascist government.” So: should I consider the movie’s treatment of that issue a problem or not?

Having just finished rereading the Nausicaä manga, I’m inclined to give Miyazaki the benefit of the doubt. In fact, I’ll tentatively propose that the movie’s refusal to directly address that question is an active strength: it made it a lot harder for me to pat myself on my back and say that I’m one of the good guys, unlike that horrible person in the movie.


Any discussion of the morality of that situation has to step away from the details. Yes, of course Japan did horrible things in the years leading up to World War II; yes, the Zero fighter was built in service of those horrible things. So it’s easy for me to say that a Japanese filmmaker should abase himself in shame for what his country did, that a Japanese airplane designer should have had the courage to say no when asked to build tools of war. But it’s easy for me to say that because I’m sitting in comfort seventy-five years after the fact in my home country, the country that was on the winning side of the war. Would I make the same claim if the roles were reversed, if it were me or my country we were talking about?

Because, to be clear: it is not at all difficult to find parallels in actions that my country has taken. Over those intervening seventy-five years, we’ve invaded one country after another, overthrown governments we don’t like and installed puppet regimes to do our bidding in flagrant disregard of basic notions of democracy, of the rights, desires, and even lives of the people who actually live in those countries. I’m not a student of history, but it is not at all obvious to me that Japan’s treatment of China was any worse than the United States’ treatment of Vietnam or our treatment of Latin American countries. And we’re the country that developed and used the atomic bomb, and we continue to have enough bombs in our arsenal to destroy humanity.

And of course there are plenty of American movies about the Vietnam War that don’t present our actions there as heroic. But what’s interesting to me about The Wind Rises is the oblique angle that it takes to the war. Jiro isn’t a soldier: he’s an engineer and designer, he’s doing work that he loves and is brilliant at, he’s doing work that is unquestionably deserving of love. And he’s doing that work in the service of his country, in a context where many around him are unable to find work and living in poverty. (The movie’s title reinforces that latter aspect of the situation: the wind is rising, we must try to live.) I would like to be able to say that, were our situations reversed, I would make different choices from Jiro, but I don’t believe that: the choices that I’ve made in my life so far give ample evidence that I don’t stay away from work in contexts that are morally questionable, especially if that work is work that I love, am good at, and can make money from.


As is obvious from the above, I think a lot of what the US military does is evil. So you would think that I would stay away from the military. These days I do, but I haven’t always. The summer after my freshman year at college, I worked at a defense contractor on a military-funded research project. (We were building a verified Scheme compiler, it was really interesting!) And most of my grad school was funded by a Defense Department grant. In both cases, I got to do something that was fascinating, profitable, and that I was good at; that combined with the lack of direct focus on military applications was enough for me to ignore my misgivings about military ties.

And maybe that was even a perfectly reasonable choice from an ethical point of view: better for the military to spend money on me than on something more closely tied to killing people? Certainly military-funded research has led to a lot of good: the internet started off as a DoD-funded project, after all. But I think that’s largely after-the-fact rationalization of me doing what was most pleasant for me. It’s nothing compared to the gravity of the choices Jiro had to make: if he wanted to do what he loved at all, he had to accept military work, and he was surrounded by people whose lives were threatened by not being able to work. Whereas if I hadn’t taken DoD funding for grad school, then I would have gotten NSF funding instead, I would have had the exact same education, and my stipend would have been maybe 15% lower; this hardly compares in terms of hardship.

I don’t want to present this as too much of a slippery slope: I’m pretty sure I would have thought a lot harder about those choices if they’d involved working directly on military work. But that in turn points at one of the major strengths (?) of modern capitalism: it leads to systems that are very good at finding people’s moral limits and getting as much benefit from those people as possible given those limits. If you want to be on the front lines of fighting evil in the name of your country, the military will be happy to give you a gun and ask you to do that. If you support the cause but don’t want to be so directly exposed (whether for reasons of danger or of not wanting to be confronted with the consequences of your actions quite so directly), you can help at a distance: you can pilot a drone, you can work in a support role. If you want to be ready to support your government if necessary but would prefer to not have your home and work life disrupted excessively otherwise, you can join the National Guard. If you want to use your brain to fight people, or just want to use your brain to solve interesting problems and don’t really care where those problems come from, the NSA will be happy to employ you. If you want to work on generally applicable technological problems and don’t particularly care who pays the bills, then you’ll end up where I ended up, opportunistically getting DoD funding to do what you want.

And what this leads to is a system where the military can get a lot more power, can be a lot more effective than it would be if people had to make a choice up front as to whether or not they’d be willing to pull a trigger and kill a person standing in front of them: by putting their fingers on the scales, the military can weight the system to flow in their direction. It’s similar to the way funding by the rich and corporations biases the political system: I’m sure most politicians would recoil at the notion of simply letting their votes be bought, but if pro-corporate candidates get more funding than anti-corporate candidates, then the whole system flows in a pro-corporate direction even if no candidate’s behavior is changed by the presence of funding, because the pro-corporate candidates are more likely to survive. (And I bet that an awful lot of political candidates’ willingness to engage in quid-pro-quo behavior rises as they’ve been within the system longer, too.)


I haven’t, as far as I know, accepted military funding for a decade and a half now; that doesn’t mean that I’m not still implicated in ethical choices, though. Every week, there’s another story about how the tech industry is actively hostile to women, to minorities. Or if it’s not that story, then it’s a story about privacy, how we’re constantly monitoring our users in order to make them more attractive to advertisers. And I just got back from GDC, my yearly exposure to the arguments around monetizing. I’ve seen all of those arguments from the inside of companies; generally I’ve ended up working in a way that puts me on the wrong side of them, because I end up on the wrong side in a way that I only find mildly distasteful, and working on something interesting and profitable turns out to matter more to me than mild ethical discomfort in practice.

And then there are other arguments that don’t even rise to my conscious attention but that are perhaps even more important. Climate change seems like it’s probably a bigger threat to human existence than anything other than nuclear weapons; as somebody who works on server software, I’m part of a switch from physical goods to goods over the internet. So I’m pretty sure that the work that I do is directly relevant to climate change, but I have no idea whether it’s relevant in a good way or a bad way! Maybe reducing transportation costs means that it’s a net positive; maybe server energy usage means that it’s a net negative. But, either way, it’s very easy for me to not think about the issue at all.


Miyazaki cares a lot about these sorts of big questions around war, around the environment, around survival: see Nausicaä, see Castle in the Sky, see Princess Mononoke. In those three movies, there’s a clear bad guy to fight against, and it’s easy to put ourselves in the place of somebody fighting against that bad guy. With The Wind Rises, he raises those same questions (referring to them even in the movie’s title), but instead encourages us to empathize with somebody on the other side of that divide. That’s gotten me thinking a lot more than any of his previous movies have; and it has me realizing that it’s not a divide at all.