
brenda romero: jiro dreams of game design

July 13th, 2014

It’s months since GDC, and I’m still trying to unpack my feelings about Brenda Romero’s Jiro Dreams of Game Design talk. Or maybe not so much my feelings about it—it’s an excellent talk, no question—but my emotional reactions to it. Her talk confronts concepts that I care about (greatness, team structure, creation) in contexts that I care about (games, food), leaving me with immediate reactions to almost everything she said, but immediate reactions that were frequently in conflict, and with me quite sure that there’s a lot to think about beneath those immediate reactions.

I watched it again last night; I’m still not sure what I think, other than that I’m now glad I’ve seen it twice! But, trying to put together some thoughts:

Greatness

She talks a lot about wanting to be great, and about the effort necessary for that. And this is where a lot of my insecurities with respect to the talk come in. Because, of course, there’s a part of me that wants to be great: who doesn’t want to be great? In the abstract, after all, it sounds, well, great. But, when it comes down to it: I am not behaving in a way that has led or will lead to me being great at anything.

Don’t get me wrong: I am egotistical enough to believe that I’m pretty damn good at some things, and even that I maintain a fairly high standard (relative to an appropriate baseline) at a fairly wide range of things. For example, I’ve largely made my living as a programmer for the last decade, and I’m pretty sure that I’m a noticeably better programmer than most professional programmers.

But I’m equally sure that, in an important sense, I’m not a truly great programmer. There’s nothing wrong with this, and for that matter my bar for greatness in that field may well be abnormally high: but there are significant ways in which I don’t meet that bar.

And her talk pointed at a few reasons why that might be. One is that I’m not quite obsessed enough. She talks about thinking about games from when she wakes up to when she goes to sleep; I think about programming quite a bit, including at odd hours, but it’s not that same sort of all-dominating passion that she projects. Another is that I don’t put in the hours; that’s a related concept but not at all an identical one, and I’ll come back to it below.

Also: I don’t feel creative enough. Now, I’m not sure if I think that’s actually necessary for greatness, and for that matter I’m not sure how much Romero thinks it’s necessary for greatness. But it feels to me (and this goes way back, it’s not just my most recent decade) that I’m abnormally good at quickly coming to grips with others’ ideas and using them in productive ways, but there’s a certain seed of novelty that I’m not particularly good at.

Or, to put that last paragraph another way: I can be a quite good craftsperson. And that’s important to me, and for that matter it’s important for greatness. I was about to write: but maybe something’s still missing there? Now that I type this out, though: being a great craftsperson isn’t a contradiction in terms, it’s just a quieter sort of greatness.

So, I guess, if I were going to be great, that’s the sort I would be! But I still would need more passion and to put in more hours.

Actually, rereading this section: I think there’s something wrong about my angle here. What’s important in this context isn’t people being great, it’s works being great. And Romero’s talk is about great works, not (or at least not just) great people. When she raises and rejects the Triad of Constraints, for example, she does so in the context of producing a great work. Hmm.

Teams, Control, and Responsibility

As is obvious from the talk’s title, Romero brings in food metaphors, metaphors from chefs and kitchens. But Jiro isn’t the only chef she talks about; in particular, she talks about Gordon Ramsay several times, and this was the part of the talk that I had the strongest negative emotional reaction to. Some quotes from that portion: “He had to get all these people to do what he wanted them to do”; “They screw up and he’s the one who’s going to get blamed”; “Screw it up? People remember YOU”; “Control your team or your team controls you”; “My standards, my rules, my kitchen”. (Those last two are Romero quoting Ramsay, I believe the others are her description of what she saw.)

This is a mindset that I have zero interest in: I want nothing to do with command and control, and I want nothing to do with team structures consisting of one guiding light and other people whose job it is to implement that person’s directives. And there’s an undercurrent of fear mixed into that egotism that I think is unwarranted on both counts: I simply have no idea who the chef is in, I believe, any of my favorite restaurants. Admittedly, I do not generally patronize restaurants that have been awarded Michelin stars, but I’ve been to one or two, and I don’t think that would make a difference in my awareness of the chef’s name unless the chef decided to engage in self-promotion. For games, it is more frequent (but by no means universal) that I can name the lead designer of my favorite games, but even so: my focus is on whether the game is good; the designer is an afterthought.

So no, people won’t remember you, they’ll remember your work. And not your (singular) work but your (plural) work: the work that the team that you are part of produced. As I belatedly said above: great works are what’s important, great people are a secondary concept.

And yes, great works will (usually!) have a strong, coherent vision at their core. And yes, having that vision come from one person is one way to get there. But what’s important is that the vision is shared and made real by the team; and, as a programmer and in my prior life as a mathematician, I have a lot of experience working with visions that feel stunningly real because they’re a fundamental part of how the world works, or of how our shared conception of the world works. So we can all work together to understand what zeta functions really are; we can all work together to understand what simple design really is. And there are tools to let groups of people express and produce works of shared beauty; groups don’t have to invent that from scratch.

Romero does not, fortunately, spend all of her talk embracing the Ramsayan end of this spectrum: I don’t believe, for example, that she thinks that game designers should be dictating the details of how programmers write code to support the game’s vision. And, once I got past my revulsion at the command-and-control aspect of this message, there’s a part of her message that I liked rather more. For your team to produce something great, your team has to do great work, and that won’t happen if nobody feels responsible for making it happen. In Romero’s narrative, the “you” is a single person in charge of the team, but she also talks about trusting and helping your coworkers to do great work; in my version, it’s everybody’s responsibility, but that most definitely does not mean devolving into greatness being nobody’s responsibility. Instead, we all need to work together to figure out what great work means, to do great work ourselves, and to help others to do great work.

Food, Games, and Software

Romero is a game designer, and she talks about chefs. I am neither; and listening to the talk made me wonder if those two fields are related in a way that programming, or at least the sort of programming that I do, isn’t. Both of those fields are, in large part, about crafting experiences: in fact, she goes out of her way to talk about how the best restaurants (at least when looked at through a Michelin lens) spend time on the experience of dining there writ broadly, not just on the food. Everything is there because it has a reason to be there; everything is done with intent, with focus, with care and craft.

That last sentence is also characteristic of great programs. But it’s a characteristic that’s only visible from the point of view of somebody working on the program; writing a program that way has an effect on the experience of somebody using the program, but that effect is not direct.

Of course, programs have an experiential component as well, and this aspect of greatness makes sense in that context as well; and that leads to a form of greatness that is directly analogous to what Romero talks about in food and games. (Indeed, given that much of her work is on video games, she is talking in part exactly about this aspect of great software!) But, returning to the previous section on teams striving together for greatness: a cross-disciplinary team striving together for greatness is going to be focused on that experiential side of greatness instead of the internal side of greatness, because that experiential side is something they can all perceive and affect.

As a programmer, which do I care about more? I care about them both, of course, and they’re related. By writing great software as measured through the internal lens, I can affect its external greatness in a couple of ways. One is that well-crafted software is, in an important sense, unobtrusive to the user: it responds quickly instead of making the user wait, it is consistent instead of imposing a cognitive load, it doesn’t crash or have bugs. And another is that well-crafted software is responsive to the needs of people who are designing that experience: as somebody like Romero is experimenting to try to tease out the core and then refine the details of a vision, great programmers can help by producing software that they can adapt as quickly as possible (or even provide hooks to let designers adapt it themselves) to actively help that process.

As I said above, though: I’m a craftsperson at heart, and so my focus is internal. But one of the aspects of agile that I’ve internalized well is the desire to write code in order to meet real user needs and desires, and to enable quick experimentation to discover how to best meet those desires. So I would prefer to be part of a company that wants to write great software to deliver a great experience, and if a company fell down too far on either measure of greatness, I wouldn’t join it. Having said that: my bar on what I’m willing to consider on the programmer craft side of things is quite a bit higher than my bar for the user experience side of things.

Obsession and Time

I don’t think I’m obsessed enough to produce really great work. Which isn’t to say that I can’t get pretty obsessed at times: over and over again, I’ll dive into some aspect of learning (frequently but not always software-related), read the most important books on the topic, dive into discussions on the topic, experiment on the topic, and repeat it until I feel I’ve internalized something at the core of that topic. But listening to Romero’s talk (this one and others): I’m not as obsessed with programming as she is with games. Also, my obsession quiets down when it gets to the stage where I feel like I understand what’s going on in some area: my compulsion is to build a world view, not to create. (And, in practice, being a craftsperson is the middle ground where I end up.)

There’s another question here, though: totally aside from obsession, how many hours are you willing to put in? Her talk refers to crunch as a fact of life in the game industry; it’s not a fact of life, and I work to make it not part of mine. I’m honestly not sure to what extent my refusal conflicts with greatness: part of extreme programming is the claim that putting in more than about 40 hours a week is actually counterproductive over the medium term, because it dulls the brain and you start writing worse code. It’s clear that there’s a value of N where working more than N hours a week is counterproductive if your goal is greatness, and there are industrial studies suggesting that productivity maxes out at around 40 hours a week.

And I mostly buy that cap of 40 hours, but not completely. For example, in Chapter 38 of The Cambridge Handbook of Expertise and Expert Performance we have the claim (in a section studying violin students) that

All groups of expert violinists were found to spend about the same amount of time (over 50 hours) per week on music-related activities. However, the best violinists were found to spend more time per week on activities that had been specifically designed to improve performance, which we call “deliberate practice.”

And a cap a little above 50 hours feels more right to me than a 40 hour cap. But in a context of trying to produce great work, it raises some caveats:

  1. That study is about learning, not about producing. Admittedly, any part of great work is going to involve learning even as part of the production of that work; in fact, maybe it’s impossible to do great work without learning all of the time. (Though the converse is certainly not true: novices are learning but not producing great work!) But still: that study is measuring something different.
  2. The part about deliberate practice is super important. To me, this dovetails fairly well with striving for greatness: part of doing great work involves being deliberate about what it means for work to be great, and Romero discusses in her talk the importance of having your colleagues look over your work on multiple occasions, which dovetails well with the importance of having a coach in deliberate practice. Maybe we should take a lesson from etymology here: great work requires deliberate practice, where by “practice” we return to the meaning of “do” or “act”.
  3. If we go with 50 hours, then I’m not sure what the texture of those 50 hours is going to be, but I’m almost positive that it’s not going to be 10 consecutive hours a day, five days a week. (Or 8 hours a day 6 days a week, or what have you.) Certainly during the times when I was (quite effectively) trying to become an expert in a subject, it would pop up in my life much more broadly than that: for example, Liesl and I had a habit on vacations where we’d be going through rooms in a museum, I’d go a little faster so I’d get a few rooms ahead of her, and then I’d sit down on a bench and read more in one of the math books I was working through. And, actually, when I say I don’t put in the hours, maybe I’m underestimating that: I only put in 40 hours a week (in a standard 8 hour + lunch x 5 configuration) sitting at my job, but I think about my work quite a bit at home, and the very act of writing this blog post is another part of my deliberate practice at getting better at my work. The flip side, though, is: I am not trying to do great work during most of those 40 hours that I do spend at work. So I should probably focus on improving that last bit!
  4. Even if producing and sustaining expert performance is most likely to come from working 50 hours a week, that absolutely does not mean that working 50 hours a week is at all likely to produce expert performance. The vast, vast majority of the time, working long hours just means shoveling more crap; I have no doubt that that’s what’s going on almost all of the time when companies ask employees to put in crunch time.

When I put this all together, to me it leads to two recommendations:

  1. First, focus on being deliberate about producing great work. Constantly ask yourself and others how your work could be better, how your processes could be better, what the goals are that you should be striving for in the first place.
  2. Second, listen to your energy level. Producing something even on the small scale that you’re proud of can be very energizing: at its best, doing great work can lead to a feedback loop where you have more energy to do more great work. But once you push yourself too hard, then your work starts to dull; pay close attention to that shift.

I think that second point is where Romero’s obsession gives her a big edge: thinking about games and working on games clearly energizes her. I make a different set of choices, ones that are probably more similar to Johanna Rothman’s.

The Triad of Constraints

When producing something, you want to do it quickly, cheaply, and well; the Triad of Constraints claims that you can pick two out of three at best. To which Romero’s answer is refreshing: fuck picking two, just pick one, make it great.

As she also acknowledges: this can work if you’re producing your own games on your own time; when you’re working as part of a business, telling the people who control the budget that you’re going to ignore speed and cost doesn’t work so well.

I’m not sure that that works so well for me personally as a programmer, though. My focus is on evolving software in steps as small as possible, with an external Product Owner prioritizing the customer-visible features. That means that, at any stage, I want to have written software that’s as good as I can have written in that amount of time, while preserving the ability to continue to do so in the future.

So I’ll alter the triad in the opposite direction, by picking all three. I’m very self-centered, so from my point of view, the cost is generally fixed, it’s my salary, I’m not going to magically produce twice as much work or twice as good work if you pay me more. And I certainly agree with Romero that I want to produce great work. And then the scope is what it is: you’ll get a different product if you ask for the best I can produce in a week than the best I can produce in a year, but in any case you can pick the scope however you want. Or, to put it another way: the Triad of Constraints implicitly assumes that you’re making choices up front instead of evolving; why would I want to do that?

Of course, I’m just punting certain decisions over to a Product Owner; Romero is more the Product Owner herself. That’s the way to approach the control aspects that I discussed above in a way where I’m less dubious: deciding on the sequence and details of user-facing features is an important role, no question.

Works and Creation

She has a comeback to my evolutionary design boosterism: she has no patience for the concept of the Minimum Viable Product, whereas to me it seems like an obviously good step in an evolutionary design.

But I’ve spent my entire professional career on software that is designed to be used and grow over the course of years, even decades. This is very different from a more traditional sort of creative work, where you release a work into the world, let people experience it as a whole, and move on to producing your next work.

And I’m not nearly as convinced about Minimum Viable Products or evolutionary design in the creative work arena. When I’m reading a book, I don’t want to start by reading a minimal version of that book one month, then reading a slightly more fleshed out version a couple of months later, then reading a third version that retreats in some areas based on user feedback and moves in a different direction: I just want to pick up a book and read it. And the same goes for games, much of the time, though admittedly less universally these days.

This doesn’t mean that evolutionary design doesn’t work in a context of polished creative works: you can still produce them iteratively, you can still solicit feedback from a trusted close circle at frequent intervals. And, as she says: “what if I made something as good as I possibly could every frigging day?” That’s one of the lessons she learned from Jiro: he ships every night.

So we’re returning to what I said above: work in small steps without sacrificing quality. I combine this with handing scope decisions off to a third party; she is in charge of scope, and she works in an industry where the scope that you choose for a product when it is released externally is a crucial decision.

Conclusions

Or at least next steps: I like evolution, after all!

One is that I should work harder to be doing my best during the times when I am working on something. If I’m spending the time on something, why not spend the time being focused and doing the best work I can? If I’m not going to do that, it’s probably better to not spend that time: instead, spend the time in a way that lets me get my energy back so that I can focus later.

And the other is that I should seek out greatness more. I’ve worked with one person whom I consider unquestionably great; or at least I worked in a startup that he cofounded, we rarely interacted at all. But, even so: those few interactions were incredibly energizing. (I was talking about those interactions with a friend of mine a couple of months ago, she said she’d rarely heard me sound so excited.) I should try to find more of that; I should try to deserve being around more of that.

returning to bioshock

June 14th, 2014

After my unpleasant experience with System Shock 2, I moved on to BioShock. I wasn’t worried that I might have the same problems with BioShock that I had with System Shock 2: I remembered from my prior experience that BioShock took the Easy difficulty setting seriously (enough so that I was thinking of trying Normal on the replay), and the RPG aspects were dialed down and didn’t allow for the same sort of missteps I’d made in System Shock 2.

As it turns out, though: I stopped playing BioShock after the Medical Pavilion level. Not because the game was too hard (I made it through okay on Normal, certainly more easily than I did with System Shock 2 on Easy), but for narrative reasons.


Which is a pity, because there were two aspects of the game that were flat-out amazing, one grand and one a little more localized. The grand aspect was the setting itself: the idea of an underwater city, the execution of the architecture (both in its original and ruined aspects), the music and sound design, etc. And the localized aspect was the idea of a cubist plastic surgeon: that’s a wonderful concept to build a level around.

I would have loved a game that went all in on those aspects. Given those two elements, probably the most natural way to flesh them out would be as a slowly paced horror game: one with enough breathing room to let you drink in the environment, but that still lets Dr. Steinman and subsequent characters show through in their glory. And, of course, the actual game does contain horror aspects; but there’s just too much shooting of guns or plasmids, too much hacking of turrets and health stations, too many vita chambers for the horror game to have any conviction. Basically: there’s a part of BioShock that wants to be an RPG with class choices, that wants to be Deus Ex, and that part wins over the proto horror game.

Or, indeed, over any other potential realization of the game that would leave you more room to drink in the mood and setting. If only games would learn from Shadow of the Colossus that it really is okay to leave space…


Still: that alone wouldn’t have been enough for me to stop my playthrough. What really got to me is the treatment of the Little Sisters and the Big Daddies. I said more about this in my first playthrough of the game, but: the entire treatment of the Little Sisters is awful. When you meet a small child that you’ve never seen before, the two choices that go through your mind should not be “should I kill this child or should I use this magical shiny thing I’ve been given to perform surgery on the child despite her screams of protest?” Now, admittedly, this sort of iffiness isn’t without precedent in video games: it’s also the case that, if you happen to find yourself in a strange location and come across a gun, then you should not use that as justification for mowing down everybody you meet! But at least that choice has history normalizing it in a video game context, and at least you’re being attacked so you can reasonably consider yourself to be in a “kill or be killed” situation. Whereas with the Little Sisters, the game forces you to commit child abuse, and then has the gall to present one form of that child abuse as the “good” choice.

That’s bad enough, but it then follows it up with a Big Daddy encounter. And here, the situation gets, if anything, even worse. Again, people: if you’re in an unfamiliar, dangerous location, if you meet a small child wandering around, and if you meet an adult whom that child clearly knows and loves and who is protecting that child (and doing so remarkably capably, given the extreme danger of the environment), then the correct choice of action is not to kill that adult. The correct choice of action is almost certainly to treat it as none of your fucking business; if, instead, you decide to treat this as some sort of clever environmental puzzle encouraging you to figure out how to use the many tools at your disposal to dispatch the protector most efficiently, then you are a monster.


So no, I really wasn’t in the mood to go further with BioShock after the end of the Medical Pavilion. I’m willing to consider the idea of playing games where I’m a monster, though honestly I would generally far rather not. I’ve got a lot of respect for what I’ve heard about Far Cry 2 or about Spec Ops: The Line; but those games put you in a much more self-consciously morally complex situation than my reading of BioShock does, and they don’t have the player being actively complicit in child abuse as their main theme. Having said that, the Little Sisters aren’t even the main overarching plot aspect of BioShock; maybe those other plot themes are reason enough to go forward?

I didn’t go forward, so I can’t say for sure, I’m just basing the following on my memory of my first playthrough. But my memory says this: the overarching theme basically comes down to two things. One is a poisonous presentation of father/son dynamics: arguments about whether the father gets to tell the son what to do, or whether the son gets to do whatever he wants, killing the father in the process. And the second is, of course, Objectivism.

And, well, fuck that too. Both of these basically boil down to the same thing: man-children who are fighting among themselves about who gets to have their own way, with the rest of the world as collateral damage. And that fits in with the whole Little Sisters / Big Daddy treatment, too: women and children are subhuman pawns for those man-children to use and dispose of as they wish, and men who try to build relations and families are slightly more worthy of respect (they’re men, after all, and if they’re successful in a role of protector then at least they’re participating in the fight) but ultimately need to be destroyed.

If this were satire, it could be a depressingly biting portrait of certain aspects of society. (Including, I suspect, the AAA game industry; I’ll throw Silicon Valley startup culture into the ring, too.) But it sure doesn’t read that way to me: the game isn’t a pro-Objectivism presentation by any means, but the game structurally buys into enough of Objectivism’s conceptual prerequisites that, well, see above.


So: no more BioShock for me. I’m curious about Minerva’s Den, but not curious enough to dip into BioShock 2. (And I’m very glad that people involved in that game have moved in a different direction.) Everything that I’ve read about BioShock Infinite makes me think that that game would drive me crazy as well: a glorious environment combined with way too much shooting and an offensive and hamfisted treatment of narrative themes.

Instead, I went through Monument Valley as a truly lovely palate cleanser, and then started a replay of the Phoenix Wright games. And that was absolutely the right choice.

medium: browserify

June 10th, 2014

There’s one problem with the way I first set up my build system for Medium: I had no control over how the CoffeeScript files were ordered. In languages with linkers, this isn’t a big deal: within a library, the linker will resolve all the references between object files at once. But without a linker, ordering becomes more of an issue.

Actually, in CoffeeScript or JavaScript, it’s not that much of an issue: in fact, for small projects you can get away with ignoring it entirely. It’s fine for methods in one class to refer to another class that hasn’t been loaded at the time the first class is defined: as long as the second class exists by the time those methods actually run, you’ll be okay. So that means that the only real issue when starting off is making sure your entry point gets run after everything else is loaded; that’s a one-off case that’s easy to deal with manually. (You can just inline the entry point code in the HTML file, for example.)
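To make that concrete, here’s a sketch in plain JavaScript (the names are hypothetical, not from my actual code): `greet()` refers to `Helper` before `Helper`’s definition has been evaluated, and that’s fine as long as `Helper` exists by the time `greet()` is actually called.

```javascript
// greet() mentions Helper before Helper's definition has run; nothing
// is resolved until the function is actually called.
function greet() {
  return new Helper().word;
}

// Helper is defined later in the load order...
class Helper {
  constructor() { this.word = 'hi'; }
}

// ...but by the time we call greet(), Helper exists, so this is fine.
console.log(greet());
```

Calling `greet()` before the `class Helper` line had run would throw a ReferenceError, which is exactly why the entry point is the one place where ordering matters.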


Having said that, just clobbering everything together like that felt a little distasteful to me; and there also turned out to be two practical issues. The first is that Mocha, the unit test framework I used (which I promise I’ll talk about soon!), didn’t use the browser model of sticking everything in global variables: it used the Node.js concept of modules. I actually spent a couple of weeks ignoring that mismatch, writing code that worked in both realms by checking to see if the Node.js variables were defined, but in retrospect, that was silly: the point of this blog post is that doing things the right way is easier than that workaround.
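That workaround looked something like the following (a hypothetical sketch, not my actual code): check whether the Node module system is available, and fall back to a browser global if it isn’t.

```javascript
// Dual-realm workaround: export via Node's module system when running
// under Mocha/Node, else attach to the browser's global object.
// (Silly, as noted above; browserify makes this unnecessary.)
class RunnerState {
  constructor() { this.running = false; }
}

if (typeof exports !== 'undefined') {
  exports.RunnerState = RunnerState;  // Node / Mocha path
} else if (typeof window !== 'undefined') {
  window.RunnerState = RunnerState;   // browser path
}
```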

And the second practical issue is inheritance: if class A inherits from class B, then the browser really does need to have seen the definition for class B before the definition of class A. To get that right, I needed a dependency structure; and doing that by hand would have crossed the line from silly to actively perverse. So I looked around, and found that browserify (in its coffeeify incarnation) was what I wanted.
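The inheritance constraint is easy to see in plain JavaScript (with hypothetical classes): the superclass definition has to have been evaluated before the subclass definition runs.

```javascript
// B must already exist when the `class A extends B` line is evaluated;
// reversing the two definitions would throw a ReferenceError at load time.
class B {
  greet() { return 'hello'; }
}

class A extends B {}

console.log(new A().greet()); // inherited from B
```

So if class files are concatenated in the wrong order, the page fails as soon as it loads, not lazily at call time, and that’s what forces a real dependency structure.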


First, a brief introduction to the Node module system. When you define what looks like a global variable in a Node source file, it doesn’t actually get stuck in the global namespace: the namespace for that file is local to that file. But Node provides a special exports variable: if you want to export values, attach them to that. For example, if I have a file runner_state.coffee that defines a RunnerState class, I’ll end the file with

exports.RunnerState = RunnerState

That last line still doesn’t stick RunnerState in the global namespace: there’s actually a special global object you can use for that, but you generally don’t want to do that. Instead, if another file wants to refer to that RunnerState variable, it puts a line like this at the top:

{RunnerState} = require('./runner_state.coffee')

The return value of the require() call is the exports object for that file, and I’m using CoffeeScript structured assignment to get at its RunnerState member. Once I’ve done that, I can refer to RunnerState elsewhere in that file. (Incidentally, in some situations you don’t need either the leading ./ or the trailing .coffee in the argument to require(), but I found that using both worked best with the collection of tools I was using.)
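In plain JavaScript terms, that structured assignment is just destructuring on the exports object; here’s a minimal simulation, with a stand-in object instead of a real require() call:

```javascript
// Simulate what require('./runner_state.coffee') hands back: the
// file's exports object, with RunnerState attached to it.
const fakeExports = { RunnerState: class RunnerState {} };

// CoffeeScript's `{RunnerState} = require(...)` compiles to this:
const { RunnerState } = fakeExports;
```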


So, that’s the Node.js module system: a nice way to avoid polluting the global namespace and to express your object graph. It comes for free in the Node ecosystem, and all I wanted was to bring that over to a browser context. And that’s where browserify comes in: it lets you write code like it’s Node modules and then it transforms it into a format that the browser is happy with.

To cut to the chase, here’s how to get it to work. Start with the build system from last time. Then install browserify and coffeeify, plus the grunt plugin:

npm install --save-dev browserify coffeeify grunt-browserify

In your Gruntfile.coffee, replace the grunt-contrib-coffee requirement with a grunt-browserify requirement, and replace the coffee block with a block that looks like this:

    browserify:
      dist:
        files:
          'js/medium.js': ['coffee/*.coffee']
        options:
          transform: ['coffeeify']

Also, in your default task, you’ll want to invoke browserify instead of coffee.


Here’s the resulting file:

module.exports = (grunt) ->
  grunt.initConfig {
    pkg: grunt.file.readJSON('package.json')

    browserify:
      dist:
        files:
          'js/medium.js': ['coffee/*.coffee']
        options:
          transform: ['coffeeify']

    sass:
      dist:
        files:
          'css/medium.css': 'scss/medium.scss'

    watch:
      coffee:
        files: 'coffee/*.coffee'
        tasks: ['browserify']
        options:
          spawn: false

      sass:
        files: 'scss/*.scss'
        tasks: ['sass']
        options:
          spawn: false
  }

  grunt.loadNpmTasks('grunt-browserify')
  grunt.loadNpmTasks('grunt-contrib-sass')
  grunt.loadNpmTasks('grunt-contrib-watch')

  grunt.registerTask('default', ['browserify', 'sass'])

Now, if you run grunt, you’ll build the output JavaScript file (js/medium.js in this case) like before, but with separate input files treated as separate modules! Which, of course, means that it won’t actually work until you go back through them and add require() and exports in appropriate places.

medium: setting up a build system

May 31st, 2014

After I set up Medium, the next thing I did was start writing code and unit tests. And I will write about unit tests in a couple of posts, but I want to jump ahead one stage, to a build system, because that was something that required workarounds almost from the beginning and turns out to be easy to set up if you know how.

Because, of course, if you’re using CoffeeScript and SCSS, you need a preprocessing stage to turn them into something that a browser is happy with. If you have a single CoffeeScript source file, then running the coffee command is not too crazy, but what if you have multiple source files? You don’t want to run coffee on each of them individually, and you don’t want to have to load each of the outputs individually into your HTML file (or at least I don’t!). The coffee command actually has a --join argument to handle this, so you can certainly work around this manually, but this is definitely getting to the stage where a C programmer would say “I would have written a short Makefile by now”.

 

In JavaScript land, though, you probably don’t want to use Make; there are various options for build tools, and the one I chose (which seems to be the most common?) is Grunt. To get started with it, you actually want to install the grunt-cli package globally instead of putting it in your package.json file:

npm install -g grunt-cli

This makes the grunt command available, but the smarts are all in the grunt package plus whatever plugins you use. Those you install via npm install --save-dev; a good place to start is

npm install --save-dev grunt grunt-contrib-coffee grunt-contrib-sass

Grunt’s configuration file isn’t written in some custom language: it’s an internal JavaScript DSL. And you can configure it in CoffeeScript, too, which is of course what I did. So here’s a basic Gruntfile.coffee:

module.exports = (grunt) ->
  grunt.initConfig {
    pkg: grunt.file.readJSON('package.json')

    coffee:
      compile:
        files:
          'js/medium.js': 'coffee/*.coffee'
        options:
          join: true

    sass:
      dist:
        files:
          'css/medium.css': 'scss/medium.scss'
  }

  grunt.loadNpmTasks('grunt-contrib-coffee')
  grunt.loadNpmTasks('grunt-contrib-sass')

  grunt.registerTask('default', ['coffee', 'sass'])

Pretty self-explanatory. (I have a bunch of CoffeeScript source files but only one SCSS file; eventually I may have multiple SCSS files, but even then I should be able to use includes to get a single entry point.) And, with that in place, I just type grunt and it builds medium.js and medium.css.

Of course, it does raise the question of how all those CoffeeScript files get combined into a single JavaScript file and what to do if you want to have control over that combining; I’ll explain that in my next post. But for now, this works as long as there aren’t load-time dependencies between your CoffeeScript files, and it outputs a single JavaScript file to load from your HTML.

 

I actually prefer not to have to manually type grunt each time I want to rebuild: I like to have Grunt watch for changes and build things every time I save. To get this to work, install the grunt-contrib-watch package and add a block like this to the initConfig section of Gruntfile.coffee:

    watch:
      coffee:
        files: 'coffee/*.coffee'
        tasks: ['coffee']
        options:
          spawn: false

      sass:
        files: 'scss/*.scss'
        tasks: ['sass']
        options:
          spawn: false

Also, make sure to add grunt-contrib-watch in the loadNpmTasks section. If you do this, then you can type grunt watch in one of your shell windows and it will rebuild whenever the appropriate files change. And yeah, it’s a bit unfortunate that you have to specify the file globs twice, but only a bit; if that really bothers you, I guess save those file globs in variables? (We are, after all, writing in a real programming language here.)
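For what it’s worth, the variable version might look something like this (a sketch, abbreviated to the coffee parts; the sass blocks would get the same treatment):

```coffeescript
# Define each glob once, before initConfig, and reuse it below.
coffeeFiles = 'coffee/*.coffee'

module.exports = (grunt) ->
  grunt.initConfig {
    coffee:
      compile:
        files:
          'js/medium.js': coffeeFiles
        options:
          join: true

    watch:
      coffee:
        files: coffeeFiles
        tasks: ['coffee']
        options:
          spawn: false
  }
```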

 

There’s one further change that

medium: setting things up

May 29th, 2014

As I said recently, I’m experimenting with writing a Netrunner implementation in JavaScript. I’m calling it Medium; here’s the first in a series of posts about issues I’ve encountered along the way.

Before I go too far, I want to thank two sources of information. The first is Bill Lazar; he’s one of my coworkers, and he’s given me lots of useful advice. (And I suspect still more advice that will be useful once the project gets more complicated.) The second is James Shore: just as I was thinking about starting this, he published a list of JavaScript tool and module recommendations that seems very solid.

Anyways: before starting, I’d made a couple of technology decisions, and they were actually to not quite use JavaScript and CSS: both are solid technologies to build on, but both have annoying warts that I don’t think are worth spending time to deal with. So, in both cases, I’m using languages that are thin wrappers around them: instead of JavaScript, I’m using CoffeeScript, so I don’t have to worry about building my own class system or explicitly saving this in a local variable when I’m passing a function around. And instead of CSS, I’m using Sass (or, specifically, SCSS): when writing CSS, you find yourself repeating certain values over and over again, so having a macro layer on top of CSS can really improve your code. Neither of these languages frees you from understanding the language that underpins it, and neither makes you learn many extra concepts beyond what the base language provides: they just automate some common tasks.

(Incidentally, once my CSS gets more complicated, I’ll probably start using Compass as well. I haven’t felt a strong need for it yet, and it’s possible that what I’m doing is simple enough that I won’t actually need Compass, but it seems like the next step once I start feeling that even Sass is too repetitive for me.)

 

This meant that I needed to install those tools. I won’t go into the details of installing Sass: basically, you need Ruby + RubyGems, both of which I already had lying around, and both of which are entirely tangential to this series. (If you’re on a Mac and aren’t already a Ruby developer, then probably sudo gem install sass will do the trick.)

CoffeeScript, though, requires Node.js and npm, both of which I was going to need anyways and neither of which I had detailed experience with, so I’ll talk about them a bit more. On my Mac, I used Homebrew for both of those (if you install Node with Homebrew then npm comes along automatically); on my Linux server, I used the Ubuntu-packaged version of Node, and I installed npm following the standard instructions.

I initially did a global install of the coffee-script npm module. But you really want to control that sort of thing on a per-project level, so you can specify what version of a module you want; npm lets you do that via a package.json file. There are lots of options that you can put in that file, and I imagine I’ll start using a lot more of them once I use npm to actually package up Medium, but for dependency management you can ignore almost all of them. So here’s a sample package.json file if you just want to use it for dependency management:

{
  "name": "medium",
  "version": "0.0.0",
  "devDependencies": {
    "coffee-script": "^1.7.1"
  }
}

Try putting that in a package.json file in an empty directory and then typing npm install. You’ll see that it installs coffee-script along with a package mkdirp that coffee-script depends on, and it puts them in a new subdirectory node_modules.

You can look at the docs for the version numbering if you want, but basically: ^1.7.1 means that it’s known to work with version 1.7.1, and later versions are probably okay. This is totally fine while I’m working on something for development; for a serious deployment, I’d probably want to pin things down more tightly, including specifying versions of packages pulled in indirectly.
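For example, pinning the dependency from the sample package.json above to exactly version 1.7.1 just means dropping the caret:

```json
{
  "name": "medium",
  "version": "0.0.0",
  "devDependencies": {
    "coffee-script": "1.7.1"
  }
}
```

That still doesn’t pin indirectly-pulled-in packages like mkdirp; as far as I can tell, npm’s shrinkwrap command is the usual tool for recording those.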

One nice trick: say that you have a new package that you want to start using. Then don’t bother looking up the version number and manually adding it to package.json: instead just do

npm install --save-dev NAME-OF-PACKAGE

That will look up the current version of that package, install it, and add an appropriate line to your package.json file. So you can start using the latest and greatest version of the package and get it working, and you’ve recorded which version worked for you.

On which note: you of course want to check package.json into version control. For now, I’m putting node_modules in my .gitignore file; if I get to a situation where I’m serious about deployment, then I’ll want to have a way to get access to node_modules without depending on external sources for that, but even in that situation, storing it in the same git repository as the source code is the wrong approach (because of repository bloat). For a personal project just for fun, ignoring node_modules is totally acceptable.

 

So with that in place, I can compile CoffeeScript files by invoking node_modules/coffee-script/bin/coffee. Which is what I did initially, but I got a more formal build system in place fairly soon; I’ll talk about that next.

men, women, programming, culture

May 25th, 2014

So, a couple of weeks ago, a prominent programmer / writer wrote a post whose driving metaphor was: frameworks are bad because it’s like one woman having many men sexually subservient to her, whereas the way things should be is for one man to have many women sexually subservient to him. People complained, he apologized and rewrote it without the metaphor in question.

Last week, another prominent programmer / writer tweeted a picture of some custom artwork he’d commissioned. That artwork showed silhouettes of a woman posing in a sexualized fashion, holding guns as if they were fashion accessories, with those silhouettes serving as shooting range targets. The artist has produced quite a lot of works on that theme, it turns out; his statement says “We are, all of us, Targets in one way or another.”

 

After this last weekend: some of us are a hell of a lot more targets than others of us. As the artist says, “None of us are exempt from exposure to these fixed cultural elements of our existence, or the means by which they attempt to impose their will upon us”, but that imposition takes radically different forms in different circumstances. He says that “[I] ask my audience to interpret each piece for themselves so as not to be hindered or influenced by my intentions”; the interpretation that I’m coming to right now is that men’s conception of gender roles in this society is super fucked up; that manifests itself in many ways, along a continuum of severity; and that I don’t see the software development community as a whole to be particularly at the innocuous end of that continuum.

Another prominent programmer / writer tweeted: “Seems to me we (again) review ideas for political correctness before considering the ideas themselves. I’m not sure that’s good.” Which raises the question: good for what? If your sole objective is to try to become as good a programmer as possible, then focusing exclusively on ideas and ignoring metaphor, subtext, social context may be a good strategy. I’ve frequently been in that situation myself, and I’ve learned quite a lot about programming from all of the programmers mentioned here. (Though if their books had been full of harem metaphors, I’m not nearly as confident that that would have been the case.)

Becoming a better programmer isn’t my only objective these days. There are a lot of problems in this world, a lot of directions along which to try to improve; programming ability is one of those directions, and I still have a huge amount to learn in my struggle to become a better programmer, but there are a lot of other issues that I struggle with, that I have a huge amount to learn about as well. And I think some of those other issues might even be a bit more important.

netrunner implementation experiments

May 22nd, 2014

GDC got me in the mood to do some game-related programming; and, when that mood didn’t go away after a couple of weeks, I started to spend some time thinking about what exactly that would mean. I’d thought initially that maybe I’d learn how to use Unity, trying to implement one or two game-related tech experiments I had in mind. But a lot of my game playing these days is in the form of board or card games, and some of those ideas were starting to pull at me a bit more; Unity’s 2D support has apparently gotten significantly better recently, but when I looked at some of their 2D demos, they were still aimed at physics-based games, which isn’t so relevant for most aspects of board games.

And, thinking about it a bit more: I can probably just do a card game or a board game in HTML / CSS / JavaScript. (Not even pulling in the canvas stuff: I’m perfectly happy to represent a card as a div.) Which has huge advantages in terms of experimentation: I can work on it wherever, people can run it wherever, and it’s a super-easy way for me to get going.

It does mean that I won’t learn Unity, which is too bad. But the flip side is that I can use this project to get up to speed with a lot of other technologies: it’s been over three years since I’ve seriously programmed in JavaScript, and that code base was out of date and badly-written even at the time. So this could be an excuse to learn about CSS3, and to learn about more of the JavaScript ecosystem (which is continuing to grow like crazy).

Also, while I’ll start out with an implementation just in the browser, I’ll want to add a server-side component fairly soon. And I can do that in JavaScript, too: if I use Node.js then I can move my business logic code from client to server side, or use the same code in both places as appropriate. (Thinking about that will also give me a good excuse to separate business logic from presentation, which is always a plus.) I’ve never used Node but it’s certainly in the list of technologies that I’m interested in.

And there’s a subtext of this that isn’t game-related: I imagine I’ll be at my current job for another year or so, but at some point I’m going to want to move on, so it’s not a bad idea right now to start thinking about ways to increase my options for a potential move. And brushing up on modern web technologies and learning about Node fit that bill quite well: I’ve worked as a backend developer in most of my jobs, but my guess is that I’d be happier in a group with more fluid roles, which means that brushing up my frontend skills wouldn’t be a bad idea, and I can also certainly imagine working professionally with Node in the future. Also, just building a full project from scratch is always educational.

 

So: the plan is to write a board game or card game using non-canvas JavaScript in the browser, with Node as an eventual backend. But that leaves out one very important aspect of this: figuring out what the game will actually be. If I had lots of card game ideas written down, I’d probably pick one of them; as is, though, I don’t, and I suspect that I’ll spend enough time playing with technologies, at least initially, that I won’t want to spend a lot of time on game design ideas.

So that, in turn, suggests reimplementing a game somebody else has written as an exercise. Yes, I’m quite aware of the problems around cloning, but that’s not an argument against doing something as a private experiment. (Think of this like an art student making copies of works in a museum.) And, when I phrase the question that way, an obvious candidate comes to mind: Netrunner. The game’s rules are more than complex enough to teach me a lot about the tradeoffs in the domain implementation side, it raises a lot of interesting questions about interaction models, and the only current electronic implementation that I’m aware of is one that I won’t be tempted to copy the details of. So it seems like a good place to start; I’m pretty sure that, once I’ve gotten a basic implementation of the game working (one identity on each side from core set cards, say), I’ll have learned a lot and will be able to take that learning in a lot of different directions.

What I’m not at all sure of is how long this will take: it depends on how much time I carve out for it, it depends on how much I need to learn, and of course the Netrunner rules have a lot of special cases, even in the core set. I wouldn’t even be blogging about it at all right now, except that I’ve already learned a lot from the experiment: I’ve probably missed four or five good blog posts by not blogging about it from the start. I’ll try to recreate some of those, but still, not the same.

Netrunner initial placement experiments

For reference, here’s where I was earlier today (along with a corresponding view from the Corp side); I’ve been thinking about installation models and how to fit stuff on a not-excessively large screen. (Yay CSS transforms for resizing and for rotating Corp ice!) Once I get a little farther with installs, I guess I’ll try working on basic runs; that’ll be interesting…

And if anybody is designing a card or board game that you’d like a browser-based version of, let me know: hopefully in a few months I’ll have come to a reasonable stopping place on this experiment and I’ll be interested in using these technologies for something else.

system shock 2

May 14th, 2014

I’m planning to play through all the games in both of the Shock series this year; I had a quite good time replaying System Shock, but I’d never played System Shock 2, which seems to get talked about rather more. (E.g. I’ve seen comments claiming that BioShock is in many ways an inferior remake of System Shock 2.) So I was really looking forward to playing it; of course, I didn’t expect it to be as smooth an experience as BioShock, given its age, but I did fine with System Shock, which is even older.

As it turns out, I most emphatically did not do fine with System Shock 2. Not that I regret having given it a try, but I’m glad I gave up after going through the first two levels: it simply wasn’t working for me. Which is too bad, because it meant that I didn’t get to really experience the SS2 version of Shodan, or the lure of The Many, but trying to finish it would have driven me crazy.

 

I didn’t realize quite how much of a kitchen sink game System Shock 2 is: it’s got significantly more going on than either its predecessor or successor. There’s a skill tree that’s initially presented as a class system but where you quickly learn that you can cross classes; there’s a psi system; weapons degrade; inventory turns out to be even more pressured than its predecessor but with a (hidden to me until I stumbled across it in a FAQ, though maybe I missed something) way to expand it slightly by leveling up; there’s this chemical thing for unlocking buffs; and probably more variables that I missed completely. And all of that is on top of its predecessor’s FPS-combined-with-role-playing-inventory gameplay and its story told through environment, audio logs, and orders through loudspeakers. (With hallucinations added into the mix this time!)

So way too much stuff to be a focused game. Which is fine: I wouldn’t want all games to be that way, but I’m all for art that turns an ungainly collection of concepts into something unexpectedly magnificent. The thing is, though, I need to be able to actually play it without driving myself crazy.

 

I started off on easy (as I do in games like this), and I selected the psi path. I figured I’d be able to freeze enemies with the power of my mind, and I’d be able to whack them to death with a lead pipe. And, indeed, the lead pipe was there, as expected; what wasn’t expected was that the lead pipe was much less effective than in either System Shock or BioShock. That might not be a big deal, since I could freeze my enemies, except that freezing enemies took up psi power, which didn’t auto-renew and whose ammo was a more limited resource than ammo for standard weapons. And, when I was encountering enemies at the start, I couldn’t (if I’m remembering correctly) even fire standard weapons, because I would have needed to spend some experience at the start to level that up, and I’d spent the experience on other stuff.

So, basically, it felt like I was being set up for failure right from the beginning by making what seemed to me (what still seems to me in the abstract) to have been a perfectly plausible set of choices in my initial powers. Maybe I’m missing something there; certainly if I were better at playing FPSes on PCs then I would be better at dancing around enemies. (Though I get the feeling that the controls in this game are a lot clunkier than in normal FPSes; I missed when swinging with the pipe a lot more than I’m used to.)

Having said that: this being a Shock game, dying wasn’t actually so bad. There were vita chambers to revive you, and saving and loading were fast enough, too. So I was optimistic that I’d start enjoying it more as I made it through the first deck: I leveled up so I could shoot guns, and it really wasn’t that annoying by the end. I wasn’t actually enjoying it too much, and I was actively offput by having to shoot squeaking monkeys, but still: serviceable enough, I felt like I was starting to get control of the game a bit and get past my loss aversion.

 

And then the next level started off by putting me in a radiation area: no getting comfortable here, and not just being uncomfortable because of narrative and general spookiness, uncomfortable instead because I’m going to feel like I’m always about to die even if I’m playing at easy. But it wasn’t too long before I unlocked the next vita-chamber, so I could relax again.

Except I couldn’t. One big difference from its predecessor is that System Shock 2 splits each deck into multiple sections, and vita chambers in one section don’t work in another. So I ended up having to go through a part with a new, significantly tougher robot enemy and where I couldn’t freely respawn. This meant that, instead of a grind of running through levels, killing some stuff, dying, getting revived, and making a bit more progress (though not as much as I would like because some enemies respawned as well), I instead was reloading save games all the time and looking on nervously as what seemed like a very generous number of health packs disappeared surprisingly quickly.

I made it through that deck, started the next one, and decided that I just didn’t want to deal with the game any more. So I stopped.

 

Not what I wanted out of a game. There’s probably interesting narrative there, but it wasn’t letting me get to it. There’s probably interesting systems there, too, but that wasn’t what I was in the mood for, and the game wasn’t structured in a way to let me play with those systems. (Our May VGHVI Symposium was FTL: I died all the time in that one, too, but that game was set up to let me learn the systems by running another experiment every hour, so I never had the frustration of feeling that my initial build had set me up for failure, or of wanting to reload because otherwise I wasn’t sure if I’d get to the next bit of narrative.)

On to BioShock next. Maybe I’ll try that one on normal instead of easy: there is something that I would enjoy in the systems of these games, and that game showed that it understood what I was asking for when I did play in easy, so maybe it would also be more understanding if I express willingness to grapple with those systems? We’ll see…

blank screen starting octgn in wine

May 4th, 2014

I set up OCTGN on Wine on a new computer in preparation for this week’s VGHVI session; I was following these helpful instructions, which have worked for me in the past.

Unfortunately, I ran into a weird problem: OCTGN would start with its normal “Loading OCTGN” screen, but then instead of showing me the normal game window when that was done, it would show me a black rectangle.

I tried it out on the other machine where I’d previously had OCTGN installed that way, and I got pretty much the same symptoms. Though on that machine, it took longer, since it spent some time updating OCTGN, and there was a popup that briefly showed up that gave me a clue as to what was going on.

So, the short version: if this happens to you, edit the OCTGN settings (probably in ~/OCTGN/Config/settings.json) to set the property IgnoreSSLCertificates to true. Here’s a line to add:

"IgnoreSSLCertificates": true,

(or, if you put it at the bottom, then put the comma at the end of the previous line instead of that line).

Once I did that, OCTGN came up as expected; I haven’t actually tried playing a game, but I’m assuming that that works. (Though I brought my VirtualBox Windows installation up to date just in case…) But I figured I might as well write a post about it, in case it helps anybody else googling for solutions to that problem.

whales

May 1st, 2014

Last time, I talked about free to play, a phrase I often hear linked with the term “whale”. The prototypical use goes something like this: free-to-play games make most of their money from a small proportion of whales, people who spend thousands of dollars that they can’t afford in order to buy useless items in those games because they’ve gotten addicted to the game through techniques borrowed from the gambling industry.

And yeah, there’s some amount of truth to that. I’ve worked on a game that got some portion of its revenue from items that were intentionally priced in a fashion that makes it hard for people to calculate the price of what they’re trying to buy, and that did so using sporadic reinforcement techniques. That is not a good thing to do; I don’t know if the company I worked at intentionally borrowed those techniques from the gambling industry, but it wouldn’t surprise me if there was some sort of borrowing in the lineage somewhere.

But that amount of truth is linked to a lot of other assumptions that I do not agree with, and that I think are misleading and harmful to the discussion.

 

I think the term “whale” comes with a lot of baggage, so here’s my attempt to define it in a way that avoids that baggage: “whales” are people who spend a disproportionately large share of the total money spent on a product, enough to significantly skew the entire profit structure for that industry away from the average buyer. I’m intentionally not mentioning free to play, gambling, or addiction: whales are simply your best customers, whatever that means.

And I cannot see why the existence of whales is a bad idea. In order to minimize free-to-play game associations, let’s move away from games entirely. There are many people in this country who have played violin at some point in their lives (usually, I suspect, in school orchestras); my daughter is one of them. (As am I, for that matter!) She started taking violin lessons several years ago; she’s one of the better players in her school orchestra, in the top 10% as measured by her seating position.

Lots of people who have played violin have had that experience only through school: their lessons are in the school orchestra, they borrow instruments from the school to play on. (You might call those violinists free-to-play violinists, were you so inclined; I don’t think I ever took violin lessons outside of school, and I played on a hand-me-down violin, so I was one myself.) But many people pay for lessons; we paid around $200/month for our daughter’s lessons, and I’m sure there are many teachers who teach for cheaper (especially in places other than the Bay Area), and there are also ways to spend a lot more money on your learning. (Tuition at Juilliard is currently over $36,000 / year, and there are also summer camps where younger students can spend thousands of bucks at a pop.)

As for the instrument, you might start out renting a cheap violin while you’re getting started or buy a violin that costs, say, $200. As you get more interested, you’ll get to where you can appreciate and benefit from better instruments; our daughter’s latest violin cost around $1000, for example. And, having spent some time in violin stores, I can assure you that they would be happy to sell us instruments that cost us tens of thousands of dollars, and professionals might consider spending hundreds of thousands of dollars on an instrument.

 

Who are the whales here? I don’t know for sure if that term applies at all, because I don’t know what the profit structure of the violin industry is like; I don’t even know exactly how to measure profit in a context of individual teachers giving private lessons. (It’s easier to measure it for violin stores, I’m fairly sure, but there too I have no idea what the numbers are.) But there are certainly lots of people who play violin who haven’t spent money on it, or people who have spent a little money on an instrument but haven’t taken years of private lessons.

So I don’t know if that makes my family whales, or if it makes sense to reserve that term for, say, students who end up getting into elite music schools, or even if the price structure is such that the term doesn’t apply. But that’s the lens I’m looking through when I read posts saying that it’s immoral for games to allow players to spend thousands of dollars: I really don’t think the violin industry is being immoral by letting us spend thousands of dollars on it.

And, as somebody who loves video games: shouldn’t they be worthy of spending thousands of dollars on? Of course, many of us do that in aggregate: there are people out there who are happy to spend that much money on a gaming PC, or to spend a thousand bucks a year by buying a couple of $50 games a month. But shouldn’t we seek out individual games that are worthy of that sort of love? (Which I have, I’ve certainly spent thousands of bucks on go in various ways…)

 

Like I said above: there’s some truth in the complaints about whales. There’s something dirty about intermittent reinforcement combined with obscured pricing, for example. Though maybe there’s something dirty about intermittent reinforcement in general: I’m not sure random item drops are great even if you’re not spending money on them.

And, while I didn’t call it out above, another part of that complaint is about pay to win. This is certainly something I try to actively avoid: I’m sure Magic is a great game, but the last thing I want to do is spend money trying to get good cards. (Which is of course in part about the intermittent rewards.)

But the violin analogy makes me see pay to win in a good light. Some violins are better than others, and that quality has a noticeable correlation with price. And that’s a good thing: if we could magically duplicate the best violins, then that would probably be a better world, but we’re not in that world; in the absence of that, I’m glad that capitalism gives incentives for people to make better violins.

Moving from non-digital art to non-digital games, not all bicycles or sets of golf clubs are made equal, either. Here, though, the rationale for a quality race isn’t as clear: it doesn’t necessarily improve golf as a sport to let players hit the ball farther and farther simply by spending more money. But it also wouldn’t improve golf if pros had to play with a $200 set of clubs. So it’s probably best for the sport to have a quality cap imposed but to have the cap be generous enough to let the best players show as much of their art as possible.

Or then there’s go as an example: you can play go as well on a $20 go board as on a $50,000 one. So there, quality is a purely aesthetic question: I’m glad that aesthetic range exists.

Digital games are quite different in this respect, though: in a digital golf game, the best set of clubs and the worst set of clubs cost the same to produce. So yeah, paying more money for a simple stat upgrade isn’t good: it makes the game ecosystem worse. But it misses something important about the golf example: having access to more flexible tools lets better players express more of their art. So I’m a fan of systems like League of Legends or Netrunner where the game developers provide a basic ground level experience to everybody but also go out of their way to expand the possibility space in interesting ways and let you have access to that by paying for it. (And paying for it in a predictable way, unlike Magic.) As a game player, I want games to be as rich as possible; paying money to have access to more options within that space can be a very good tradeoff when executed well.

And hats or skins are the analogue of fancy go boards: that’s not pay to win at all, and I think their existence is great. Or at least great when it’s not linked with random item drops.

 

Pay to win points in another direction, though: paying to win is spending money to trade an intentionally worse game experience for one that is, at least superficially, intentionally better. And we see this in single-player games as well: it’s the energy gates in Facebook games that you can get past by waiting, spending money, or spamming your friends.

Games asking you to spam your friends is bad. I’m actually fine with the choice between waiting or paying to advance, within limits: that’s a form of paying for value. Still, as a player it’s certainly nice to pay a flat fee for a game and be able to explore it all I want.

But there are also a lot of fixed price games that still have gates! It’s grinding, it’s narrative games that throw wave after wave of enemies at you before letting you go through the story. And while I don’t think having games let you pay $10 to skip the combat in your favorite RPG is the best solution, not having the option to minimize or avoid the combat also isn’t great! I’m not advocating for pay to win in those contexts, but the questions that it raises are important ones: it at least asks what it would mean for a game to be responsive to different players’ different desires.

 

So yes: I don’t want to spend time in an ecosystem where pay to win is artificial scarcity that actively harms the structure of the game. But not all pay to win is that: if “pay to win” is either a difference that expands the possibility space for better players or one that gives better aesthetics, then I like it a lot more. And what I like the most is a focus on the quality of the experience: that’s something that I want all games to care about, no matter their pricing structure.

threes!

April 20th, 2014

Threes! is both adorable and, I suspect, pretty good. A similar sort of combining mechanism to Triple Town, but with shorter games that fit into my day better, and a bit less aggressive randomness. (I gather a sign of being good at Triple Town is enjoying the bears, finding that you get a lot of money out of them; I’m not there yet.) I’m fairly sure there are still several layers of strategy/tactics that I haven’t yet uncovered, though it’s a little hard to say from where I’m sitting.

Not much to say beyond that. It’s, uh, a metaphor for code hygiene? I wish the loading times weren’t so glacial? The art / music / sounds / motion really is adorable? (Except that the last upgrade seems to have broken sounds / music for me, at least some of the time.) It’s good enough at sucking up time that I should probably move it off of my home screen?

free to play

April 10th, 2014

There was a fair amount of discussion of “free to play” at this year’s GDC; most of it negative (at least in the discussions I was part of), often extremely so, and often linked with the concept of “whales”. There’s some amount of that discussion that I agree with, but more of that discussion (and the moral judgments that come with that discussion) that I’m uncomfortable with, so here’s an attempt to tease out what I think.

One basic point of uncertainty I have is what people mean by the term “free to play”. For example, at some point I was talking with Jorge about The Walking Dead; you can play the first episode of each season for free, so does that make it free to play? On a straightforward reading of the term, I would argue that it does, but within the cultural context of the discussion at GDC, I think it doesn’t. Or at least that’s not the type of game that the GDC zeitgeist wants you to envision when you bring up the term: it wants you instead to think of games like Candy Crush. (Or League of Legends, which the zeitgeist likes rather more than Candy Crush.) Is there a way of thinking about the concept that illuminates those differences?

The term “free to play” strongly suggests that we should talk about pricing models in general. So, in hopes that that sheds some light on what the term might mean, here are some possible models you can use to think about how to set “the right” price for something:

  1. Price based on cost: set the price based on the costs that go into developing / maintaining the game, plus enough of a profit margin to get by.
  2. Price based on value: set the price based on how much value the purchaser of the game will get out of it.
  3. Price based on marginal cost: set the price based on the cost it takes to produce / maintain one extra copy of the game.
  4. Price based on misdirection: get as much money as you can from players, without concern for the players or the long-term health of your relationship with the players.
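To make the contrast between these models concrete, here’s a toy sketch of them as pricing functions. All numbers here are hypothetical, chosen just to illustrate how differently the four models behave for the same game:

```python
# Toy illustration of four pricing models for a game; all numbers are made up.

def price_by_cost(total_dev_cost, expected_buyers, margin=0.15):
    """Model 1: recoup development/maintenance cost plus a modest margin."""
    return total_dev_cost * (1 + margin) / expected_buyers

def price_by_value(hours_of_play, value_per_hour=1.0):
    """Model 2: charge in proportion to the value the player gets out of it."""
    return hours_of_play * value_per_hour

def price_by_marginal_cost(cost_per_copy):
    """Model 3: charge what one more copy costs to produce and deliver.
    For electronic distribution this is nearly zero."""
    return cost_per_copy

def price_by_misdirection(sticker_price, hidden_costs):
    """Model 4: the perceived (sticker) price is low, but the real cost
    to the player -- time, attention, obscured purchases -- is not."""
    return sticker_price + sum(hidden_costs)

# A hypothetical $2M game expecting 100k buyers:
print(price_by_cost(2_000_000, 100_000))      # model 1: roughly $23 up front
print(price_by_value(40))                     # model 2: $40 for 40 hours of fun
print(price_by_marginal_cost(0.002))          # model 3: a fraction of a penny
print(price_by_misdirection(0, [10, 25, 60])) # model 4: "free", except it isn't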

It seems to me that a lot of the discussion presumes that free-to-play games always fall into the fourth model: the assumption is that providing games for free is inevitably the first step in a misdirection play. It also seems to me that the third model is a fairly major player in the discussion; and it’s an even larger player in the (somewhat related) discussion around game cloning, because cloning is closely tied to decreasing the marginal cost for producing a game. And pricing based on marginal cost combined with a digital environment is really scary for GDC attendees: these are people whose livelihood depends on making games, so their jobs will vanish if price = marginal cost = $0.

 

The thing about that third model is: in a lot of contexts, it’s the most natural way to price products. If a product is a commodity, then multiple companies offer functionally equivalent versions of that product. And so people looking to buy that product will pick the one that is cheapest; so companies will struggle to offer that cheapest price, which gives them an incentive to push the price down as low as possible while still making it worthwhile to sell the product at all. In a physical goods context, what this frequently means is trying to lower your cost of production, leading to a pursuit of economies of scale and other production efficiencies; that’s brutal enough, but it’s even more brutal in a world of electronic distribution, where the marginal cost is a fraction of a penny.
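That race to the bottom can be sketched as a toy simulation (with entirely hypothetical numbers): each round, some competitor shaves a penny off the cheapest current offer, as long as selling is still barely profitable, so the price converges to marginal cost. Working in integer cents avoids floating-point noise:

```python
# Toy simulation of commodity price competition: sellers repeatedly
# undercut the cheapest offer until the price hits marginal cost.

def undercut_until_floor(start_cents, marginal_cost_cents):
    """Each round a competitor undercuts the market by one cent, as long
    as selling remains (barely) profitable. Returns the final price in
    cents and the number of undercutting rounds it took to get there."""
    price = start_cents
    rounds = 0
    while price - 1 >= marginal_cost_cents:
        price -= 1  # a competitor shaves a penny off
        rounds += 1
    return price, rounds

# Physical good: a $2.00 can of tomatoes costing $1.50 to produce.
print(undercut_until_floor(200, 150))   # settles at the $1.50 floor

# Digital good: a $60.00 game whose marginal cost is effectively zero.
print(undercut_until_floor(6000, 0))    # settles at zero -- the scary case
```

The point of the second call is exactly the GDC fear described above: when the floor is zero, competition alone drives the price all the way to zero.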

But, as much as it sucks to be a game developer in that position, there’s nothing inherently immoral about that situation. As a consumer, I am very glad that most of the items that I purchase are commodities: that when I walk into a grocery store, I don’t have to worry about the exact value to me of a can of tomatoes or the exact cost of production for that item. Instead, I get a lot of benefit from the fact that there are a bunch of companies out there trying to win the commodity sales war, finding more and more efficient ways to produce tomatoes and sell them to me for less and less money.

Don’t get me wrong: I realize that this commodity war has real human costs as well. So in particular, I support measures like minimum wage laws and environmental protections that lower those human costs (especially measures, like carbon taxes, that make those costs explicit and so encourage competition in lowering them), even though those measures may increase the marginal costs for all the producers of the goods, and hence the prices I pay. But I’m also really glad that I live in a world where most of what I need for my daily life is a commodity: it raises standards of living enormously.

 

This doesn’t mean that I support cloning in general: I suspect that that is one of those areas where artificially putting a floor on marginal costs is useful. For example, I’m not the biggest fan of copyright laws in the world, but if pressed I’ll admit that giving protection from copying an entire piece of software for a handful of years is as good an idea as I can think of. And I would never argue against anybody who has enough pride in their craft to be unwilling to clone. But I also think that some amount of cloning is extremely healthy: there are a lot of first-person shooters out there, there are a lot of match three games out there, and while those all look like clones from a distance, I’m glad that there’s enough room in the design space to allow Bejeweled, 10000000, Puzzle Quest, and Triple Town to all coexist.

So for me, the best solution to cloning is: find ways not to be a commodity. Which I realize is trite, even insultingly flippant, but I don’t have any other suggestions to offer that work with economics as I understand it. And this solution works in non-electronic contexts, too: sometimes, I just want a random can of tomatoes, but sometimes I want something that will taste noticeably better or work better in some particular culinary context, which sets up the possibility of getting out of the commodity space. Or at least sets up the possibility of market differentiation: there’s still going to be some amount of commoditization within each market segment, but if you can find a small enough segment to work in, commodity effects will noticeably decrease.

Also, before I leave the topic of pricing based on marginal cost, I want to link it to the fourth pricing model: because pricing based on marginal cost often turns out to mean having the perceived price be based on the marginal cost, when the actual cost can be higher. In the electronic goods situation, this means that something is labeled as free but has hidden costs in terms of time, in terms of advertising, in terms of monitoring. Whereas for physical goods, two cans of tomatoes may have the same sticker price on the shelf, but one of them may have higher costs in terms of damage to the environment when producing it, damage to workers’ livelihoods while producing it, damage to your physical health from consuming it. So yes, pricing based on marginal costs can be linked to pricing based on misdirection; of course, any of the other pricing models can also be linked with pricing based on misdirection, but if we’re talking about free to play, it’s hard to imagine how a producer motivated by profit (as opposed to, say, one motivated by sharing) will function effectively in a zero-marginal-cost commodity context without some amount of misdirection in pricing.

 

A lot of the people I see advocating against free to play are advocating for the first model: they want a world where buyers pay thirty or sixty or whatever bucks for a game, where sales of quality games within a genre are reasonably predictable, and where you can staff dev teams accordingly. And I can certainly see why most of the people at GDC would like that first model: they know they’re not likely to get rich off of a game, but they want to make a decent living off of their work.

There are two problems with this model, though. For one thing, speaking as a person who plays games: why should I pay $60 for a game if I don’t know if I’m going to like it? I’ll even ask why I should pay any money for a game if I don’t know if I’m going to like it, but if it’s cheap enough, I can’t say that I shouldn’t pay a few bucks out of curiosity; I’m a lot more dubious about, say, traditional console game prices.

But, more importantly: according to my admittedly naive understanding of economics, this model simply doesn’t fit the real world. There’s no reason why the amount that people are willing to spend on an item should be directly tied to the cost of the item: if your competitor is willing to sell a comparable item for less than your cost to make it, then tough. Fortunately, that can cut both ways: if you can either increase the item’s value in a unique way or decrease your production costs in a unique way, then your profit selling the item can increase out of proportion to its costs! But, either way, it’s not an accurate way to think about the world.

And it is my feeling that this model also comes with a fair amount of misdirection. In a world of fixed, non-trivially priced games, players need ways to decide whether to buy a game without playing it. This leads to large amounts of advertising; it leads to a games “press” that’s almost entirely about getting players excited about upcoming games to the benefit of publishers; it leads to attempts to constrain the number of games that are being talked about / actively played in a given time period (that’s part of fighting against commoditization); it leads to games that wear out their welcome so players are encouraged to move on to buying something else; it leads to design based on marketing bullet points rather than lasting value.

 

And speaking of lasting value, let’s talk about the second model: pricing based on value. This is my favorite model: when I’m buying a product, I’m happy to spend money if I feel that I’m getting something for that money (at least if my budget is doing well!); if I’m selling a product, I feel great if I’m getting rewarded for making the product more valuable. And, unlike the first model, this model actually does work: as long as the product you’re selling isn’t a commodity, then you absolutely can price it for significantly more than the marginal cost if your target market thinks it’s worth it. (Witness Apple’s success in keeping huge margins for its products, or Nintendo’s ability to create a single version of Mario Kart for a given console and sell it at a relatively high price for years.)

I mentioned The Walking Dead above; in my mind, it’s a great example of this model. Before you’ve started playing the game, it hasn’t proven its value, so they let you play the first episode for free. If you’re still not sure, then you can buy the episodes one at a time, dropping off whenever you decide the game isn’t worth it. If the first episode convinces you that the whole season is valuable enough to pay for, then the developers let you show your appreciation of its value by paying for the rest of the season sight unseen at a slight discount.

Android: Netrunner, my current obsession, is another example. It is admittedly not free to play, so it requires a leap of faith from the player at the start. But once you’ve decided to play it, the developers will continue to attempt to provide value by producing expansion packs, and it’s up to players to decide whether those are valuable enough to purchase. I basically think of the game as one with a $15/month subscription fee; and in my mind, it’s absolutely worth it. I don’t play League of Legends, but my understanding is that it’s got a similar dynamic, albeit one more tilted toward the player: you can play a huge amount for free, but once you get sucked in, there are many ways to pay money to get more value out of the game (by letting you focus on a champion you like, or get a skin that you enjoy looking at or that feels like it represents you better). Or, for that matter, to pay money just because it feels right to give money to a company that has given you hundreds of hours of value: whenever a game sets up that dynamic, it’s really doing things right.

 

I’m already a couple thousand words in, so I think I’ll defer my discussion of whales to another post. I guess my conclusion so far is:

  • Pricing based on value can work, and when it works, it’s great for both players and developers, creating games that are worth playing for years.
  • Pricing based on misdirection sucks: try to avoid doing that. (It’s the one aspect of my work at Playdom that I actively felt bad about: I felt that a lot of our pricing was just fine, but we had this concept called “crates” that’s based around people’s brains not being wired to understand probability.)
  • I don’t see how high fixed pricing works in a digital world without strict gatekeepers: otherwise, it gets swamped by commoditization forces.
  • Commoditization forces are scary for developers, and they even scare me somewhat as a player.

what are apple’s language plans?

April 3rd, 2014

I spent my commute home today listening to John Siracusa and Guy English talk about how Objective C is getting long in the tooth. A topic, of course, that Siracusa has addressed a few times; as you would expect, it was a thoughtful discussion, and I’m glad I listened to it.

And I really am curious about the answer there. I’m no Objective C expert, but it seems like it’s going to be an issue at some point in the not horribly distant future. But a lot of the standard solutions that I’m used to seem potentially problematic given the way Apple appears from the outside to approach things: in particular, a solution that bakes a VM with full garbage collection into its foundations seems a little unlikely to me? I’m not aware of existing new languages that feel to me like they’d be a great fit for Apple, but that doesn’t mean much, since I haven’t kept up well with modern trends in that area. Which could mean that Apple will do an Apple-style thing and invent their own solution; the question then becomes whether they have the language design chops to do an Apple-quality job of that. (Which I’m not at all convinced of.)

I dunno. The next time I’m on the job market, I should see if I have contacts over there who could hook me up with people who might be working on that. It could be a very interesting way to spend time, and one in which my own particular skills could find a useful role?

the wind rises

March 23rd, 2014

The Wind Rises is, I suspect, a very good movie; I won’t end up loving it in the same way as Spirited Away, but I probably will end up loving it more than Miyazaki’s films since that one, and the fact that it takes a less fantastical approach to its subject matter of course comes with strengths. I don’t have much to say about it as a whole yet, though I quite liked Ghibli Blog’s take on the movie.

What I do want to talk about, though, is one aspect of the surrounding discussion. The review that showed up in my local paper ends by saying that “But not addressing the way it was used and the war the country started so that it could use it just reminds us that Japan still dreams of denial, as far as World War II is concerned.” Or, on the blogger side, we have Tim Bray saying “But yeah, there’s a prob­lem. What we have here is art that’s all about glo­ri­fy­ing and ro­man­ti­ciz­ing peo­ple who built killing ma­chines that were put to use by a fas­cist gov­ern­men­t.” So: should I consider the movie’s treatment of that issue a problem or not?

Having just finished rereading the Nausicaä manga, I’m inclined to give Miyazaki the benefit of the doubt. In fact, I’ll tentatively propose that the movie’s refusal to directly address that question is an active strength: it made it a lot harder for me to pat myself on my back and say that I’m one of the good guys, unlike that horrible person in the movie.

 

Any discussion of the morality of that situation has to step away from the details. Yes, of course Japan did horrible things in the years leading up to World War II; yes, the Zero fighter was built in service of those horrible things. So it’s easy for me to say that a Japanese filmmaker should abase himself in shame for what his country did, that a Japanese airplane designer should have had the courage to say no when asked to build tools of war. But it’s easy for me to say that because I’m sitting in comfort seventy-five years after the fact in my home country, the country that was on the winning side of the war. Would I make the same claim if the roles were reversed, if it were me or my country we were talking about?

Because, to be clear: it is not at all difficult to find parallels in actions that my country has taken. Over those intervening seventy-five years, we’ve invaded one country after another, overthrown governments we don’t like and installed puppet regimes to do our bidding in flagrant disregard of basic notions of democracy, of the rights, desires, and even lives of the people who actually live in those countries. I’m not a student of history, but it is not at all obvious to me that Japan’s treatment of China was any worse than the United States’ treatment of Vietnam or our treatment of Latin American countries. And we’re the country that developed and used the atomic bomb, and we continue to have enough bombs in our arsenal to destroy humanity.

And of course there are plenty of American movies about the Vietnam War that don’t present our actions there as heroic. But what’s interesting to me about The Wind Rises is the oblique angle that it takes to the war. Jiro isn’t a soldier: he’s an engineer and designer, he’s doing work that he loves and is brilliant at, he’s doing work that is unquestionably deserving of love. And he’s doing that work in the service of his country, in a context where many around him are unable to find work and living in poverty. (The movie’s title reinforces that latter aspect of the situation: the wind is rising, we must try to live.) I would like to be able to say that, were our situations reversed, I would make different choices from Jiro, but I don’t believe that: the choices that I’ve made in my life so far give ample evidence that I don’t stay away from work in contexts that are morally questionable, especially if that work is work that I love, am good at, and can make money from.

 

As is obvious from the above, I think a lot of what the US military does is evil. So you would think that I would stay away from the military. These days I do, but I haven’t always. The summer after my freshman year at college, I worked at a defense contractor on a military-funded research project. (We were building a verified Scheme compiler, it was really interesting!) And most of my grad school was funded by a Defense Department grant. In both cases, I got to do something that was fascinating, profitable, and that I was good at; that combined with the lack of direct focus on military applications was enough for me to ignore my misgivings about military ties.

And maybe that was even a perfectly reasonable choice from an ethical point of view: better for the military to spend money on me than on something more closely tied to killing people? Certainly military-funded research has led to a lot of good: the internet started off as a DoD-funded project, after all. But I think that’s largely after-the-fact rationalization of me doing what was most pleasant for me. It’s nothing compared to the gravity of the choices Jiro had to make: if he wanted to do what he loved at all, he had to accept military work, and he was surrounded by people whose lives were threatened by not being able to work. Whereas if I hadn’t taken DoD funding for grad school, then I would have gotten NSF funding instead, I would have had the exact same education, and my stipend would have been maybe 15% lower; this hardly compares in terms of hardship.

I don’t want to present this as too much of a slippery slope: I’m pretty sure I would have thought a lot harder about those choices if they’d involved working directly on military work. But that in turn points at one of the major strengths (?) of modern capitalism: it leads to systems that are very good at finding people’s moral limits and getting as much benefit from those people as possible given those limits. If you want to be on the front lines of fighting evil in the name of your country, the military will be happy to give you a gun and ask you to do that. If you support the cause but don’t want to be so directly exposed (whether for reasons of danger or of not wanting to be confronted with the consequences of your actions quite so directly), you can help at a distance: you can pilot a drone, you can work in a support role. If you want to be ready to support your government if necessary but would prefer to not have your home and work life disrupted excessively otherwise, you can join the National Guard. If you want to use your brain to fight people, or just want to use your brain to solve interesting problems and don’t really care where those problems come from, the NSA will be happy to employ you. If you want to work on generally applicable technological problems and don’t particularly care who pays the bills, then you’ll end up where I ended up, opportunistically getting DoD funding to do what you want.

And what this leads to is a system where the military can get a lot more power, can be a lot more effective than it would be if people had to make a choice up front as to whether or not they’d be willing to pull a trigger and kill a person standing in front of them: by putting their fingers on the scales, the military can weight the system to flow in their direction. It’s similar to the way funding by the rich and corporations biases the political system: I’m sure most politicians would recoil at the notion of simply letting their votes be bought, but if pro-corporate candidates get more funding than anti-corporate candidates, then the whole system flows in a pro-corporate direction even if no candidate’s behavior is changed by the presence of funding, because the pro-corporate candidates are more likely to survive. (And I bet that an awful lot of political candidates’ willingness to engage in quid-pro-quo behavior rises as they’ve been within the system longer, too.)

 

I haven’t, as far as I know, accepted military funding for a decade and a half now; that doesn’t mean that I’m not still implicated in ethical choices, though. Every week, there’s another story about how the tech industry is actively hostile to women, to minorities. Or if it’s not that story, then it’s a story about privacy, how we’re constantly monitoring our users in order to make them more attractive to advertisers. And I just got back from GDC, my yearly exposure to the arguments around monetizing. I’ve seen all of those arguments from the inside of companies; generally I’ve ended up working in a way that puts me on the wrong side of them, because I end up on the wrong side in a way that I only find mildly distasteful, and working on something interesting and profitable turns out to matter more to me than mild ethical discomfort in practice.

And then there are other arguments that don’t even rise to my conscious attention but that are perhaps even more important. Climate change seems like it’s probably a bigger threat to human existence than anything other than nuclear weapons; as somebody who works on server software, I’m part of a switch from physical goods to goods over the internet. So I’m pretty sure that the work that I do is directly relevant to climate change, but I have no idea whether it’s relevant in a good way or a bad way! Maybe reducing transportation costs means that it’s a net positive; maybe server energy usage means that it’s a net negative. But, either way, it’s very easy for me to not think about the issue at all.

 

Miyazaki cares a lot about these sorts of big questions around war, around the environment, around survival: see Nausicaä, see Castle in the Sky, see Princess Mononoke. In those three movies, there’s a clear bad guy to fight against, and it’s easy to put ourselves in the place of somebody fighting against that bad guy. With The Wind Rises, he raises those same questions (referring to them even in the movie’s title), but instead encourages us to empathize with somebody on the other side of that divide. That’s gotten me thinking a lot more than any of his previous movies have; and it has me realizing that it’s not a divide at all.

glamourist histories

March 19th, 2014

I’ve been going through back issues of Asimov’s on my train ride, and a story by Mary Robinette Kowal caught my eye (Kiss Me Twice (PDF), I suspect?), so I figured I’d give her novels a try. So I started with Shades of Milk and Honey; I’m not familiar enough with relevant genres to be able to situate it particularly well, but Jane Austen with some magic thrown in? And magic centered around creation of visual illusion set-pieces, which now that I think about it is interesting of itself: this isn’t magic as power fantasy, this is magic as an art form. At any rate, I enjoyed it, Liesl enjoyed it, so we decided to seek out the rest of the series. (The series goes under the name Glamourist Histories, which I find charming.)

And I’m glad we did, because the second and third volumes did something that I am not used to seeing in books, and that I am grateful to see. Maybe it’s the narrowness of my reading, but: I am used to reading books that don’t talk about (romantic) relationships at all. I am used to books that present a relationship as a perfunctory prize won by the hero as his (because it’s almost always a him in these situations) “natural” right. I am used to books that flesh out relationships in the courtship phase, though I don’t read as many of those as I could. And, back in the days when I read more literary fiction, I wasn’t surprised to run into books about marriages that were falling apart, from a male midlife crisis point of view.

The thing is, none of those are particularly relevant to me. I was going to say that the first is, in that there are very important parts of my life that don’t focus on my marriage; but those parts of my life still very much have to acknowledge the fact that my marriage (and my family in general, I don’t want to exclude our daughter!) is a key influence on how I spend my time. And of course I have gone through courtships.

But: Liesl and I started dating more than half of my life ago; we’ve been married for over fifteen years, and we were pretty solidly committed to each other for the last four or so years that we were dating before we got married. And, while there are no guarantees about any of this, my guess is that my life is only about halfway over, and that we’ll be married (to each other!) for that remaining lifespan as well. So why is it so rare for me to read books that talk about happily married life? I won’t say that it’s unknown in my reading: some of Delany’s recent books present married life, in particular his latest. And the Kushiel series addresses the topic as well, though really only the third book focuses on it. Still: such books are rare, and I’m finding that rarity frustrating.

 

I’m not a novelist, of course, and I’m sure there are positive reasons for those choices: happy marriages don’t have the sort of external drama that lends itself to novelization. But I was really glad to see that, after starting the series with a novel that led up to the protagonist getting married, Kowal continued the series in a way that presented that marriage as being central without making the drama be about whether the marriage will succeed. Not that the marriage doesn’t spark off bits of conflict, especially in the second volume: Jane and David are still getting to know each other, and so while they’re remarkably well matched, there are still points of tension, areas where they’re figuring out each other and figuring out their marriage. But they handle this like grownups, in an entirely realistic way: sometimes things don’t go perfectly, sometimes Jane and David don’t react perfectly, but in general they talk things out and do so from a position of love and respect. Yes, not all marriages work this way; but I can say from experience that some certainly do, and it’s a model that I far prefer to a model that only presents marriage as a prize at the end of courtship or as a source of drama as it falls apart.

The other aspect of the series’s treatment of marriage that I appreciated: its portrayal of lust. I’m used to books that present sex between people who are marked as young and attractive (especially if female) and/or heroic (especially if male). These novels, however, explicitly mark Jane as not particularly conventionally attractive and as getting married later than is normal. But the novels are also quite forthright in presenting that as irrelevant: Jane and Vincent love, admire, and lust after each other. Which, again, I find both entirely true to life and charming to read.

 

So: Ms. Kowal, if you read this (as I suspect you will, given your adeptness at noticing mentions of your work): thank you for this series. I would have enjoyed it even without its portrayal of marriage; but that latter aspect of the series mattered to me.

(And, don’t get me wrong, I’m sure I’d also enjoy different sorts of works from you: like I said above, I really enjoyed Kiss Me Twice! Though, of course, it also has a rather nice portrayal of partnership…)

kickstarters i’m waiting for

February 20th, 2014

Here’s a list of Kickstarters (plus one GoFundMe) I’m waiting for:

waiting

One small outlier and one huge outlier. The Urban Tarot guy (“Estimated delivery: Dec 2012”) sends regular updates with new pieces of art; the art continues to look gorgeous, and I’m still looking forward to it.

And Hadean Lands was funded long enough ago that the URL in that e-mail no longer works, and that the project doesn’t have an estimated delivery date. (Not sure when Kickstarter added those.) But, from the project page:

If I wanted to take six months and write a game, I could cram that into my spare time. If I wanted to write an iPhone interpreter, I could probably manage that too. That’s not how I want to run this project.

I will quit my day job at the end of December, to work on interactive fiction full-time. That means all my IF-related projects. Most of these are not commercial; they benefit the whole community.

Hadean Lands will be my day job — but I’ll be able to keep doing smaller text games in my spare time.

Heh. (And, as the updates have made clear: he’s worked on a lot of stuff other than Hadean Lands in the intervening 3+ years.)

Actually, there’s one other project I backed but haven’t received that’s not on that list: Addicube. I was going to write that that one, at least, was explicit about giving up and acknowledging that it wasn’t going to be delivered, but I actually don’t see an update on its Kickstarter page saying that; pretty sure that Corvus said that openly somewhere else, though. (And Corvus did deliver on Bhaloidam.)

 

I’ve backed a total of 31 projects on Kickstarter and received 22 of them; and I can think offhand of two non-Kickstarter projects that I’ve backed, one of which I’ve received. Around a third I backed largely because I wanted to support the person involved, and around two thirds I backed as a sort of pre-order; the latter have a solid delivery rate (they slip sometimes, but in my experience they show up eventually, though it’s possible something weird will happen with some of the ones listed above), while the former are quite a bit iffier on that.

I could (easily!) be wrong, but I get the feeling that people in the “pre-order” category are much better at picking an amount of money that will make a difference to their ability to complete the project, while people in the “support this person” category are asking for money to do something that they’re planning to find a way to do anyways, leading them to lowball the amount of money they’re asking for, to not think hard about the resources they need to complete the project, and/or to spread their interests once they’ve gotten the money.

Which may sound like I’m having second thoughts about backing projects where I want to support the person involved. But I’m not: I’ve already met one of my main goals in those contexts right from the start. So, while at times I’m bemused about how those projects play out, I don’t worry much about them.

And that class of projects usually produces concrete results too, sometimes wonderful ones.

games and copyright

February 8th, 2014

John Walker’s editorial in Rock Paper Shotgun on “Why Games Should Enter The Public Domain” was going around my Twitter feed the other day, frequently coupled with Steve Gaynor’s response. And what I appreciate about both of them is the pragmatic tack that they take: I think that the U.S. Constitution has it right by saying that Congress has the power

To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries

The Constitution doesn’t present some sort of moral right for people to control how people use their works, it doesn’t make a facile comparison of copying/transforming works with taking physical objects, it instead gives a pragmatic choice: we want as much good stuff to be available as possible, so if limited monopolies are useful to that end, then let’s grant them, but only to the extent that they are useful to that end.

And yes, that word “available” is my editorializing: I don’t care about artistic/scientific progress in the abstract, I care about it to the extent that it enriches people’s lives. In particular, if two solutions lead to similar numbers and qualities of works created, then I’ll vote for the one that makes those works more easily accessible, with the public domain being the most obvious means to that end.

 

So, with that common ground in place: John Walker argues for a 20-year copyright for games, while Steve Gaynor argues for longer than that, with the justification being that companies need the incentive of windfall long-term successes to justify speculatively investing in new works. But he wants a shorter term of protection for the ideas themselves. His own example is System Shock: he wants the game itself to remain under copyright protection, but he still wants to make a sequel to it.

Or rather, System Shock is his example for reduced protection for ideas: his example for the benefits of copyright protection on works is the music he licensed for Gone Home. But when I look at that juxtaposition, I don’t support his recommendation. Because the thing is: that music is still available, and we have a relatively robust system for publishing music that leads to vast quantities of music staying in print. The exact opposite is the case for video games, however: the proportion of games still in print from 20 years ago pales in comparison. And System Shock is a perfect example: I recently replayed it, and I would have been perfectly happy to have paid somebody money for a new copy (preferably one that Just Works on my machine), but I was unable to do so.

And, to the extent that I buy Gaynor’s argument that games publishers need the incentives of the possibility of profiting from a work 20 years later, I think it leads to the opposite of what he recommends: I think it’s quite rare for individual games to be a significant source of profits 20 years later, but I don’t think it’s as rare for characters or series to be a significant source of profits 20 years later by spawning new games. (That situation is still hitting the lottery, but the odds and profits are both better.)

 

What I really hate is art being unavailable: I like the public domain more than most, but to me it’s just a means to an end, and if we can find other means to that end, great. Art can be unavailable (for new purchase through legal means) because it’s an orphan work, because too many people are involved in rights to it, or because rights-holders don’t want to make it available for resale: any solution should deal with all of those problems, and the third in particular is a much more serious issue for video games than for most other forms of media. Gaynor lists music as a success, but a rights-holder who wants to make music available for sale just has to give, say, CD Baby or Bandcamp a few bucks and a few MP3s, and poof, it’s available. Whereas if Sega wants to make Shenmue available, it’s a whole other story: they have a version that runs on the Dreamcast, but selling that would do so little good as to be completely pointless for them, and porting the game to newer platforms would cost them time and money with very uncertain return.

So, faced with that, public domain seems like a pretty reasonable solution to me. But it’s hardly the only solution: compulsory licensing could work well too, for example. Just spouting ideas off the top of my head: given that there are real costs in making a game available on new platforms, let other people take on those costs and share in the rewards, paying a royalty fee to the original rights-holder. And if nobody is willing to do that, or if there’s no registered rights-holder, then let people redistribute it for free. That sounds to me like a clear win over letting works stay inaccessible through legal means in perpetuity, as is effectively the case in the United States today.

Also, we need to deal with the fact that, in the presence of the possibility of digital reproduction, any sort of copyright regime creates a world of lawbreakers: I’m not saying that we should throw up our hands and give up on copyright entirely (though I’m also not saying we shouldn’t!), but we should accept that copying a game or a book or an MP3 is a minor offense at best, and set the punishment accordingly. (Which would, I imagine, come back to another form of compulsory licensing.)

And really, we need to find solutions to these easier cases, because there are harder ones coming along: more and more games and art-forms are going to be server-based, and dealing with orphan works in that context is a lot more difficult.

downcast and castro

February 4th, 2014

A little over a year ago, I switched from Apple’s Podcasts app on the iPhone to Downcast, and I’m very glad I did: it was much easier to make sure that the podcasts I wanted were available when I wanted them, and to listen to them in the way I wanted. And, given that I spend most of an hour each weekday listening to podcasts during my commute, this is important!

Downcast

Having said that: Downcast isn’t perfect. It’s a little too clunky, it’s got a few too many options. (My favorite one is “Portrait Lock”: the options are “Disabled”, “Enabled”, and “Enabled (Prevent Upside Down)”. Who wants portrait lock to be enabled but wants the screen to flip when the phone is upside down?) Also, on a practical front: all of those controls mean that there’s not much room for the liner notes (I have a 4S; it’s probably a little better on the larger screen), which is fine but suboptimal most of the time, and actively annoying when listening to foreign-language podcasts, where I like to read along with what is being spoken.

Downcast-Notes-Area

That red rectangle is the liner notes area; it’s just over a third of the height of the screen, only about 50% larger than the controls beneath it.

 

So I figured there’s room out there for an app I’d like a bit more: one that’s a little more opinionated and focused a bit more on elegance. I saw a few new recommendations towards the end of last year, and Castro looked interesting, so I thought I would give it a try.

Castro-Light Castro-Green

Right from there, you can see one of Castro’s distinctive stylistic choices: the background changes color based on the podcast’s album art, which turns out to be rather lovely. Also, there are a lot fewer controls.

Specifically, here are all the controls that Downcast shows you:

  1. A Back button.
  2. Buttons to skip forward and back by small amounts.
  3. A slider showing your current position in the podcast.
  4. A button to forward a link to the current podcast to various services.
  5. A button to loop.
  6. Buttons to play/pause, and to go to the beginning/end of the current episode.
  7. A button to control the playback speed.
  8. A button to set a sleep timer.
  9. A volume slider.
  10. An airplay button.

What does Castro do with these? In order:

  1. The back button is omitted: you have to swipe right to go back.
  2. ~~There are no small amount skip buttons.~~ The small skip buttons are next to the play button, and can also be used to fast-forward/rewind if held down.
  3. The current position slider is there (it’s at the top of the black area at the bottom), but slimmed down and with an interface that took me a little while to be able to operate properly.
  4. There’s no “forward a link” button.
  5. There’s no “loop” button.
  6. The play/pause ~~and beginning/end buttons are~~ button is there, but there’s no beginning/end button.
  7. There’s no playback speed button (but it’s controllable on the per-podcast settings, or by holding down the play/pause button).
  8. There’s no sleep timer.
  9. There’s no volume slider (but of course the standard physical volume buttons work).
  10. There’s no airplay button (but the standard iOS airplay control is accessible by sliding up).

So, of the ten sets of controls, ~~four~~ three are omitted entirely, four have the functionality accessible in a different way, and ~~two~~ three are present. This leads to a much less cluttered interface, and one that works almost as well. (Incidentally, Downcast has a second hidden play/pause button, accessible by double-tapping anywhere; I quite like that, it’s useful when fumbling.)

 

And, because of this (and because they shrunk down the height of some of the remaining elements), there’s lots of room: room enough to have a picture of the podcast at the top (which is actually a button that lets you get to per-podcast settings and a list of other episodes), and room for that big text area in the middle. It’s not obvious from the above screenshots, but the entire middle area is scrollable:

Castro-Notes-Area

giving you almost twice the vertical space for liner notes. Which is great! Except that, ironically, it isn’t: Castro shows you notes from the RSS feed (I think), but doesn’t show you the notes from the lyrics section of the audio file (again, I think). Whatever the difference is, it means that Castro doesn’t show the full notes for Japanese Pod 101 / Chinese Class 101: so, for exactly the podcasts where I want that extra space, I don’t have access to the text I’m looking for! Sigh.

 

What about the other controls that are missing? Some I don’t care about in the slightest (sleep timer, volume slider). I think it’s a little silly to remove the dedicated back button, but the swipe is easy enough to do. The “forward a link” button I actually occasionally use, but not enough to miss it; personally, I wouldn’t mind if it were there (maybe put it in the upper right and a back button in the upper left, flanking the logo?), but I don’t feel strongly about that.

What I do feel strongly about is the “skip a small amount” buttons. I realize that (many) podcasters want to make money, and while I’d prefer to donate money to them directly, I can certainly understand why most choose ads. So I will always listen to an ad for a company the first time I hear it, probably the first few times; but, honestly, the tenth time I hear the same ad, it’s not helping anybody for me to listen to it. (And the position slider isn’t close to being a replacement for the skip buttons, it’s way too finicky.) Also, there are some podcasters who ramble on about stuff that I don’t care about: I want to subscribe to those podcasts because there’s stuff in them that I find valuable, but I also want an easy way to skip over a tangent that I just don’t care about. So, the result is that Castro wastes my time.

Edit: Whoops, that functionality is there: somehow I got it in my head that the buttons surrounding the play/pause button go to the beginning/end, but in fact they’re small skip buttons (15 seconds back, 30 seconds forward), and if you hold them down then they let you go at high speed through larger chunks. I’m not sure what fumbling I did to make me think that they did something else, but I should have verified that before writing this! (I don’t find the icons super evocative of “small skip”, but they’re also not evocative of “skip to end”, and I don’t have a better suggestion for icons.)

Castro also wastes my time in another way: instead of having three faster-than-real-time speeds and one slower-than-real-time speed, like Downcast does, it has two of each. I find having two slower-than-real-time speeds just odd: I can’t imagine who the target audience is for that one. And there is a significant minority of podcasts that I listen to on Downcast’s fastest speed (labeled as 3x, but actually 2x): again, there are podcasters who don’t edit out their ramblings and hesitations (usually the same podcasters for whom a one-hour show is short), and with podcasts like that, my choices are to listen to them at top speed, to have a single episode take up a third of my weekly podcast listening time, or to stop listening to them entirely. I prefer the first of those choices, which means Downcast.

 

So: Castro is great for, say, shows like Planet Money with excellent production values and few/no ads; but if I’m listening to Back to Work, I really want the extra flexibility that Downcast gives me. And, actually, I think it would be just as easy for Castro to give me that same flexibility: they have a menu of five playback speeds, they’re just the wrong ones, they can switch that to the right ones. As to the small amount skip buttons: I never use the beginning/end buttons, so if those were replaced by 30-second skip buttons, I’d be completely happy. Edit: Um, yeah. I’m happy!

 

That’s how the apps behave when playing a single episode: but there’s also managing the episodes, and they’re at least as different there. When you start it up, Downcast shows you a list of all podcasts you’re either subscribed to or have downloaded individual episodes of in the past (by default, you can of course delete podcasts from that list), with the podcasts with unplayed episodes at the top. Castro, in contrast, has a podcast list that shows you all of the podcasts you’re subscribed to, with no sorting based on unplayed episodes, which I don’t find very useful at all: for me, most of those podcasts will be empty if I click on them.

But Castro also has an “Episodes” tab; that one shows you the unplayed episodes from the different podcasts, mixed together. (Which you can get with Downcast as well: they have a flexible playlist feature.) I started using that; and, once I got over the change, I decided that it works rather well, that in fact I prefer it. (At least for most podcasts—there are some podcasts where I have a lot of back episodes stored up that I want to keep around, and I’m just leaving those in Downcast for now.)

That’s for podcasts that you’re subscribed to; Downcast also has a notion of a podcast that shows in your list but that you’re not subscribed to, whereas Castro apparently has no such notion. Which I was annoyed by at first: I subscribe to 20 podcasts, but there are maybe 30 or so more that I’ll listen to occasional episodes of if I have reason to find them interesting. And I don’t want to subscribe to all of those, because that means it will take time fetching feeds and I’ll have to be constantly manually clearing out episodes I don’t want to listen to. (Incidentally, I have no idea when Castro refreshes feeds; that bothered me for a while, but it seems to work well, so it doesn’t bother me so much now. Still, pull-to-refresh would be welcome…)

Fortunately, it turns out that Castro has a workaround for that: you can go to the “add podcasts” screen and type in a podcast name, and then download individual episodes from there. If you do that, you’ll get the episode(s) in the Episodes tab but you won’t get the podcast in the Podcasts list. Which is a fine tradeoff compared to Downcast: slightly harder access to those podcasts in exchange for less clutter. In fact, it ends up being the only reason why I ever go to the Podcasts tab: given that, I would stick the “Add podcast” button at the bottom instead of the top, and I’d have the app default to putting you on the Episodes tab instead of the Podcasts tab.

 

So, where does this leave me? I like the aesthetics of Castro a lot more; it’s now my default podcast app because of that. But if I had to choose one, I would still stick with Downcast: for a significant minority of podcasts, Castro’s behavior is a deal-breaker. If Castro would make three changes, though, then I would use it 95% of the time: specifically, I would like it to:

  • Give me 30-second skip buttons (preferably in place of the whole-episode skip buttons),
  • Give me a 2x listening speed, and
  • Display the full notes for Japanese Pod 101.

If it made those changes, I’d recommend it whole-heartedly.

Or, for that matter, if Downcast would change its aesthetics somewhat, I’d probably be happy to recommend it whole-heartedly as well! That’s a harder one for me to talk about, though: it does have a strong aesthetic, it’s just that that aesthetic is about configurability. So they’ll probably do better staying focused on that than on trying to be something else; and I’ll probably still continue to use Downcast for a minority of situations where those choices matter to me.

don’t take it personally, babe, it just ain’t your story

January 30th, 2014

My arc of feelings about don’t take it personally, babe, it just ain’t your story is, I think, similar to that about Digital: A Love Story: I didn’t think about it too much when playing it, but then it stuck in my head, and then we had a VGHVI Symposium about it where I had a reasonable amount to say, but then I put off writing about it for long enough that I’ve forgotten most of that. Sigh.

Comparing it to its predecessor: it’s a lot more polished, a lot more like what I would expect a visual novel to be like. Which is mostly good, but means that it loses a bit of an edge in some ways: the faux-computer interface of its predecessor had real power in its own way. But, ultimately, I got tired in Digital of constantly going through a modem connection dance, and I certainly got tired of having to randomly try stuff until I did whatever would move the story along. Whereas, with don’t take it personally, the story moved along quite nicely at its own pace; online messages provided an important aspect of the story, but they were much more of an important alternate point of view instead of a gating requirement. Except for the once-per-chapter scenario where the game forced you to connect to a message board that you never looked at otherwise: that was a gate, and one that seemed both out of character and out of place, though maybe I’m wrong about both of those: maybe it’s out of character for me but in character for the protagonist, and I do wonder how many of the people on 12channel are also students in the school, hence providing yet another perspective on their lives.

And that question of what’s in character for me versus the protagonist was an important one: fairly early on, I felt that I was presented with a choice of ways to act where none of the ways felt right to me, and I didn’t like that. Which, thinking back on it, is weird: when do I ever see a game where I can act like I would in real life? But the answer there is perhaps that, in most games, I’m presented with characters who bear no resemblance to me, in settings that are nothing like my life, facing problems that are like nothing I ever see. Whereas, in this game, the protagonist is a teacher; I have been a teacher, which makes it easy for me to see the disconnect. Part of the reason why Dragon Age II had the impact on me that it did was that it at least gestured towards shrinking down some of its scope to a personal level; but don’t take it personally, babe, it just ain’t your story takes that much farther, of course. I’m really not sure why games are so drawn to overblown plots: to me, they seem to work against any sort of emotional connection.

In addition to seeing how that concept of disconnection with the protagonist plays out differently in different games, we can compare games to books: I’m used to books presenting protagonists who aren’t much like me, and I don’t generally pull back from that, either. But there’s something different about being forced to make a choice between actions you don’t agree with, in situations where you really would do something different. At any rate, that discomfort didn’t last too long; eventually, I mostly just went along with it, stopped taking it personally. (And it’s not like the game is heavy-handed about forcing you to make weighty choices or anything.) Mostly, I was just reading a story, just with more different perspectives on what was going on than I would see in a book.

And then I came to the end: in particular, the shadow play and the infodump. In the Symposium, we spent a while talking about the infodump, and I ended up defending it. My take is this: we have this shadow play, put on by two students whom the protagonist respects and thinks are neat kids but isn’t sure what they’ll come up with. And the shadow play isn’t at all what he expects; so he’s not sure whether it’s pointing at something interesting or just junk. So then the students have to take him by the hand still more patiently and explain to him what’s going on: there are layers everywhere, and what he saw as naive behavior was actually performance. In that context of needing to explain, an infodump makes sense.

Also, the context of the infodump as being done by smart high school students locates it in an interesting place for me. Because these are students who are trying to construct a view of certain aspects of the world; and they’re old enough and smart enough to do a good and interesting job of that. But that doesn’t mean that they’re right: so if we have warring points of view, with the protagonist seeing himself as secretly spying on the students and the students seeing themselves as knowingly performing for the teacher (while being one up on him), then the students can try to construct a narrative that presents the latter to the exclusion of the former. And, well, the students are probably more correct than the teacher is in this instance, because they’re wiser in the ways of social media, but that doesn’t mean that they’re masters of the situation, either: uncomfortable truths can appear as part of social media self-presentation, acts can get out of hand with real-world consequences. So the infodump, from that point of view, makes sense to me as a rhetorical play by the students that isn’t entirely successful or accurate within the world itself; that, to me, is more interesting than infodumps that I sometimes see that I only manage to interpret as the author sticking information in.

And this rhetorical play, this contest comes out in the title of the story, too. I, the player, shouldn’t take things personally, because this isn’t a story about me. And the protagonist also shouldn’t take things personally, because the students’ interactions and lives are about them, they’re not about him. There is, of course, truth to both of those points of view, but it’s equally true that the power of art comes in how it bears on you, the person reading / experiencing it; and a teacher is part of a classroom, too, and given how many hours students and teachers spend together, he’s part of students’ lives. So, in both of those contexts, the title of the game has some real truth to it but fails as an absolute (no surprise, because after all what artist wouldn’t want us to take her art personally?), which makes it all the more interesting.

I’m glad we played through that game and talked about it; hopefully Analogue will come up fairly soon in the rotation.

this week in v.c. biases

January 25th, 2014

When I suggested three days ago that perhaps venture capitalists didn’t have superhuman powers to avoid bias by following the smell of money, I wasn’t expecting this gem from Tom Perkins:

Regarding your editorial “Censors on Campus” (Jan. 18): Writing from the epicenter of progressive thought, San Francisco, I would call attention to the parallels of fascist Nazi Germany to its war on its “one percent,” namely its Jews, to the progressive war on the American one percent, namely the “rich.”

From the Occupy movement to the demonization of the rich embedded in virtually every word of our local newspaper, the San Francisco Chronicle, I perceive a rising tide of hatred of the successful one percent. There is outraged public reaction to the Google buses carrying technology workers from the city to the peninsula high-tech companies which employ them. We have outrage over the rising real-estate prices which these “techno geeks” can pay. We have, for example, libelous and cruel attacks in the Chronicle on our number-one celebrity, the author Danielle Steel, alleging that she is a “snob” despite the millions she has spent on our city’s homeless and mentally ill over the past decades.

This is a very dangerous drift in our American thinking. Kristallnacht was unthinkable in 1930; is its descendent “progressive” radicalism unthinkable now?

Tom Perkins

San Francisco

Mr. Perkins is a founder of Kleiner Perkins Caufield & Byers.

Wow. I mean, I suppose it’s possible that VCs only have blind spots about their wealth-based privilege, not their gender-based privilege, but that does not seem likely to me. (I assume that Perkins thought favorably of the editorial he cited, after all.) And this isn’t an off-the-cuff interview, this isn’t him quoted out of context: this is a letter he wrote to a national newspaper.

I’m not surprised that Kleiner Perkins immediately distanced themselves from the letter. And I assume that this kind of view is extreme within the VC community. But I don’t know; it’s a business that comes down to personal judgments expressed behind closed doors, so the rest of us don’t have a lot of direct insight into what the industry is like, what motivates its leaders.