
my first netrunner tournament

September 13th, 2014

About a month ago, I found that my current Netrunner decks were not only doing well when playing against friends, but doing well in a way where I felt surprisingly in control, like I had good options to guide the game in different situations. Still, I was always playing the same people (the same person, to be honest), largely against the same decks, so I didn’t really know how the decks were doing, how I was doing.

I probably should have given the decks a good try on OCTGN, but I just have not stayed in the habit of playing there: I’m not finding the time, and I’m always worried that something is going to go wrong with my networking setup. (Or more likely with Comcast’s.) So I decided to look for local tournaments instead; a local store called Game Kastle has a tournament once a month, it turned out, so I figured I’d give them a try.

Today was tournament day; I went, it was a lot of fun! It went five rounds, with me playing Corp once and Runner once in each round. I split three of them, lost both games in one of them, and won both games in one of them. In the round where I lost both, I was clearly the worse player; in the round where I won both, I was clearly the better player; and for the other three, I felt like I was the worse player but surviving pressure in one of them, and a little more in control against the other two. Which all added up to: I certainly felt like I belonged there, but I also have a lot to learn about the game.

Mechanics wise: it definitely felt a little more pressured than my normal games, but not horribly so? I didn’t go to time in any of my games, and I can’t think offhand of situations where I played the wrong move because of time pressure. I wouldn’t want to play against a clock all of the time, but having to spend an afternoon focusing more than normal was useful practice.

And it helped that people were super nice. (I’ve heard this a lot about Netrunner tournaments.) Basically, we all seemed to be responding to time pressure by helping each other out: people reminded me when I forgot to add virus counters to my Parasite, and I’d remind people to add Datasucker tokens after runs. Also, the tournament organizers were nice and generous, and even though I was in the middle of the pack, I did get an alternate art Aesop’s Pawnshop.

So: good choice! I’ll tentatively plan to go every month; if something else is happening that weekend, no big deal, but it will be there as a default. And now that I’ve tested this deck, try to work on more decks; I’ve bought yet another core set and another deck box so I can keep this deck assembled while trying out more experimental decks. And I definitely need to experiment more: I feel like I have a decent idea about traditional play based on what’s in the core set (and, incidentally, I find it heartening just how playable core set cards continue to be), but I need to get a more visceral feel for other possibilities.

the walking dead, season two

September 7th, 2014

(I don’t normally do spoiler alerts here, but given how recently this game came out, I’ll say: spoiler alert.)

When playing the second season of The Walking Dead, conversations felt very different to me than in the first season. When playing the early episodes of the first season, I treated conversations with a straightforwardly egotistical point-and-click style: I was either getting information or figuring out what branch I wanted to go down, and either way my choice was all about me and how others would see me.

About halfway through that season, though, my conversations got more nuanced: I stepped away from an instrumental view and started thinking about them more as, well, conversations. And conversations with a much richer potential flow than I was used to in a video game: in particular, I stopped exclusively seeing the timer as pressure that I would always respond to, and started seeing “don’t respond” as an affirmative choice. In real life, I wouldn’t always need to get my two cents in after somebody says something (though, to be honest, I almost always want to, but that’s a character flaw!); eventually, once the game helped me unlearn some habits, I realized that I could make the same choice here, and doing that occasionally made my interactions richer.

In Season Two, I made the “don’t respond” choice a lot more often. Because, over and over again, I got the feeling that the conversations really weren’t about me: they were about somebody else processing a horrific experience they’d just had that built on a sequence of prior horrific experiences over the previous couple of years. In a situation like that, they didn’t need me to inject myself, to make it about me; they generally needed me to listen and occasionally make supportive noises. (Unless, of course, shit was in the middle of going down, in which case that’s still what they needed but it wouldn’t help either of us right then for me to act like a therapist.)

Frequently, it wasn’t about me even when they were talking to me, accusing me. Kenny after losing Sarita, for example: for the second time, he’d lost his wife, and I’d not only been there while she died but had cut her arm off! So yeah, he’s going to be plenty pissed at me; he was really pissed at the world and overcome by grief, of course (he even admitted as much in the next episode), but right then, he needed to yell at me as a proxy for the world, and my yelling back wasn’t going to help.

 

So that’s one way in which conversations changed: so many conversations were tips of an iceberg with years of horror under the surface. Which is, more broadly, a way in which the second season differed from the first: in the first season, the apocalypse had just started, and you were in a narrative trying to hold onto hope. Whereas, in this season, the apocalypse was the new norm: we know that people who are here today may be gone tomorrow, we know that that may happen at the hands of zombies, at the hands of humans, or at the hands of a lack of resources. The new norm, but not a norm that we’ve learned how to deal with; indeed, not a norm that it’s clear it’s possible to deal with.

My Clementine was better at dealing with the new reality than most people, at least. I played her as a surprisingly self-aware and pulled-together child (which, of course, she is): people see her as a child, but rather than either giving into that or resisting that with protestations of how grown-up she is, she’ll respond in whatever way seems most likely to lead to a good outcome in the interaction in question. And not necessarily simply the best outcome for her personally: she realizes that sometimes other people need to hang on to a bit of normalcy, to treat kids as kids and to treat themselves as competent adults. (I loved Bonnie’s repeated returning to her gift of the jacket!) But every time my Clementine acted like a kid in an interaction, it was a conscious choice.

And, of course, it’s not like other people really thought of her as a normal kid. In fact, it frequently was almost explicit that she was the real leader of the group: they’d need something done or they’d need to make a choice, and everybody looked at her as if her opinion was decisive. This didn’t feel to me like a videogamey “the protagonist is always the leader” thing: this felt to me like a desperate and exhausted group of people that sometimes needed to give up the reins. So it was up to Clementine to sneak around the mall, Clementine to talk to Kenny when he’s at his darkest, or even up to Clementine to decide whether to travel today or tomorrow. (And we got to see Carver and Kenny as alternate versions of leadership.)

 

Just when I was getting used to this version of Clementine, Jane showed up. She was the non-Clementine character who interested me the most this season: she was the only person who seemed able to navigate this new world on its own terms. Yes, there’s a horde of zombies approaching, but that doesn’t mean that you have to put up a commensurate resistance to them or die trying: you can instead cover yourself with walker guts and walk right through them if you’re careful. That’s her way of dealing with the zombies: her way of dealing with other humans gave me rather more pause, but given what the game showed us this season, I couldn’t say that she was wrong to try to detach as much as possible. But I was also glad that a detached persona wasn’t all that we saw of Jane, that her interactions with Clementine showed that she could still care about people and that, with Luke, she could, uh, acknowledge her physical need for human interactions as well.

I was still trying to figure out the implications of Jane as potential role model when the fifth episode showed up. And that episode, honestly, went off the rails for me right from the beginning. We’d ended on a cliffhanger with a group of Russians showing up as a major threat with very little context; the new episode defused and got rid of them without giving any more context. And now we had an orphaned newborn baby with us: a baby who was almost certainly going to die soon and who would probably be a drain on resources in the meantime; the game had built up to the baby enough that I could understand why Kenny would be incredibly protective towards it, and I could accept that some of the other people would feel that same way. But I as a player didn’t feel any particular affection towards the baby, I was pretty sure Jane also wouldn’t, and I wasn’t convinced that Clementine would let herself get too attached to the baby, either.

So, while I could accept Kenny being protective of the baby, pissed at Arvo, and as domineering as always, I also felt that, at this point, Kenny was pretty clearly unhealthy to the extent of being an active threat. And I felt that Clementine was self-aware enough and had learned enough from Jane that, even though she cared about Kenny because he was the person around with whom she had by far the longest history, she realized that it wasn’t at all a good idea to stick with Kenny. Yet the game not only kept us right with Kenny until the end, it did so with a very odd quadrilateral of Clementine, Jane, Kenny, and the baby; it is of course the case that the other remaining characters were window dressing, but the way in which they left the scene felt quite odd. And what felt even odder was Jane’s behavior around the baby at the end: I simply think that Jane wouldn’t have cared about the baby, she would have let Kenny do whatever he wanted with the baby instead of lying about it.

That meant that the game set up a climactic fight and choice, but did so with a buildup that felt quite off. (In stark contrast to the climactic choice in the first season.) And then it backed off a bit and ended in what seemed a reasonable enough manner, but decided that it had to throw in some scary music when we learned that the father of the family we let in has a gun. Yes, we get it, random people are scary; but you don’t have to throw that in our face, and everybody who is still alive at this point has a gun!

 

A really good season most of the way through, and one that was very good in interestingly different ways from the first season. But it really stumbled at the end, and did so in a way that left me not very optimistic about a possible third season. Hopefully the developers will surprise me; or maybe they’ll just leave the series be, I’d prefer that to clumsily tying everything up in a bow.

insect stings

September 3rd, 2014

I first got stung by a bee (or yellow jacket or wasp or something) at a math camp when I was 16 years old. I remember thinking, “Oh, I’ve never been stung before, I guess that’s what it feels like! I wonder if I’m allergic?”, and then five or ten minutes later, no longer having to wonder about that latter issue. I can’t remember all of the details: loss of vision, getting driven in a car to a nearby hospital, probably passing out at some point, and then getting drugs that took care of things. (Though the Benadryl the next day knocked me out in the middle of a class; I think I barely managed to make it back to my dorm room?)

Since then, I’ve carried around an Epipen with me. At some point (I think in grad school?) I got tested, and I was still quite allergic. But I haven’t actually gotten stung in the intervening 27 years.

That all changed when I was walking home from the train station today; I felt something sharp on my arm, I looked down, and there was a yellow jacket. Oops. I figured it would be better not to deal with this alone in case I started passing out, so I called home, asked Miranda if her mother was there, was told not yet, said “shit” and hung up. (Not the most reassuring call I’ve ever made.) Then I tried calling Liesl at work but didn’t get an answer, so I called Miranda back, asked her to meet me on her bike with her cell phone, and sent Liesl text messages explaining the situation; Liesl actually got home as Miranda was leaving, so she showed up in the car just after Miranda did.

I thought about using the Epipen while waiting for Miranda to show up, but I was still not feeling awful and using the Epipen involved stabbing myself in my thigh, so I figured I might as well wait until I got home to do that instead of partially disrobing on a sidewalk or in a park. And then I got curious: just how allergic am I these days? Given that I wasn’t seeing any serious reactions yet, just a bit of pain and maybe a bit more sweating than normal, I figured I’d wait a few minutes before stabbing myself. Liesl had some Zyrtec with her, and she said it was good for skin reactions for allergies, so I popped a couple of those instead; she also made a baking soda poultice to put on the sting.

And then I sat down and waited: I wasn’t feeling wonderful, but my vision and breathing were totally fine, and I wasn’t at all convinced that the problems I was feeling other than arm pain weren’t just nervousness. The arm pain got a little worse, but not horrible; 30 minutes later, I was still not feeling great but no worse, whereas the first time I’d gotten stung I’d probably already arrived at the hospital by that point. And, two or three hours later, I’m basically totally fine: a tiny bit of residual arm pain, but even that’s almost gone, and everything else is normal.

So: yay. Either I’m not as allergic as I used to be or I got stung by an insect that I’m less sensitive to (though my memory of the prior test that I took was that I was quite sensitive to all the insect types I tested) or Zyrtec is a super awesome drug (not out of the question, allergy drugs have gotten a lot better in the intervening decades). Whatever the answer, I’m not complaining; and I’ll definitely keep a bit of Zyrtec in my backpack as well as the Epipen so I can take that immediately the next time I get stung.

bachsmith

September 1st, 2014

This week’s Rocksmith DLC was a collection of classical music arranged for guitars / bass / drums; I wasn’t sure what I’d think about it in advance, but I gave it a try yesterday and it was a lot of fun! I was worried that it would be over the top, trying to turn classical music into rock; but Ride of the Valkyries was the only piece that went particularly far in that direction and, honestly, I can’t blame them for that particular choice.

So the arrangements turned out to be pleasant to listen to; and, musically, they were interestingly different from the norm for songs in that game. A lot of the game’s music consists of chords and ostentatious guitar solos; I don’t particularly like the latter, and while I like the former fine, sometimes I want a change of pace. In contrast, the songs in this pack were a mixture of much less ostentatious melodic bits (except, of course, for Ride of the Valkyries) and arpeggiated sections that changed chords fairly frequently. So: the songs were fun to play, and I’ll probably keep on working on them.

The one exception was the Little Fugue. The arrangement was fine (or at least fine-ish, there were a few rock touches that I didn’t appreciate), but the performance was super heavy and plodding. But, in its weird way, that actually ended up being good for me, too: I know what I want that piece to sound like, and it wasn’t going to sound that way unless I went out of my way to make it so. So that gave me something to strive for that other songs in the pack didn’t.

Or, indeed, that other songs in the game don’t. Because this points at something I hadn’t really realized about Rocksmith: just how little I’d been working on my approach to songs. I try to make my playing sound good, not just get the notes right, but basically all the other songs available to me already sound good: they’re chosen because they’re great songs in their iconic performances! So the lead guitar track in the song is going to be great to listen to; and that in turn covers up a lot of flaws in my performance. Whereas the versions in this pack are arrangements of songs that have been recorded thousands of other times, and where this arrangement, just by the nature of the instrumentation, is not going to sound particularly canonical. That leaves a lot more room for me to think about the arrangements, and whether I want to play like that; sometimes the answer is yes (the arrangement of Rondo Alla Turca is charming, albeit with a way too straightforward beat), and sometimes the answer is no.

This also reinforces something I’d been aware of for a while: it’s time for me to significantly dial back on playing random songs the game throws at me, and to get back to focusing on improving a handful of songs. Playing random songs helped me a lot for years, but I’ve long since reached the limits of the game’s current recommendation engine (which really is not very good, I hope they focus on that for the next iteration). And even if the engine were better, it’s still time for me to focus on musicality more: it’s time for me to try to do a good job playing real songs instead of a so-so job playing stripped down versions of songs.

current status

August 31st, 2014

I am, fortunately, doing much better than I was a month ago. The leg/back problems have improved a lot: shortly after writing that post, I went on a nine-day course of steroids, and those had an immediate huge effect. I do indeed have a pinched nerve, and have an MRI to prove it; it was pretty interesting having a back specialist walk me through that. (Though I’m not sure it was worth the cost; I’m not impressed either with the Palo Alto Medical Foundation’s cost structure or with my insurance company’s transparency on pricing.) So right now I’m doing well enough that I wouldn’t consider going to a doctor or physical therapist if this were the beginning of the symptoms: my right toes are a bit weak, and they sometimes tingle, but they’re not painful and they don’t interfere with other activities. Of course, now that I’m sensitized to the issue, I want those symptoms to go away—both for their own sake and because I want a buffer before pain starts recurring—but overall, great success.

And I’m less sleepy than I was at the start of the summer. I’m still taking a little more allergy medicine than I’d like, and I wish I could do something about that, but at least it’s manageable. So presumably I was correct that the excessive sleepiness at the start of the summer was caused in large part by construction dust. After barely blogging in June and July, I’m back to normalish levels of writing here; that isn’t a coincidence.

Which isn’t to say that I feel completely right, but it’s not for physical reasons: I just don’t know what I want to focus on. Normally I have some decent-sized intellectual area that I’m trying to come to grips with; right now doesn’t feel that way. I still have ongoing projects, but none of them are grabbing me as much as they were: I’m only practicing guitar an hour or an hour and a half a day on weekends instead of two or three hours, and while I’m continuing with Japanese, the idea of picking up those books doesn’t excite me. Which, admittedly, has something to do with the specifics of what I’m currently reading: Mishima’s “Patriotism” is a story about a soldier who wasn’t invited to join a mutiny, who decides that the correct thing to do in response is to kill himself, whose wife is so virtuous as to join him, and where both members of the couple are presented as incredibly sexy and desirable; oy. I like Buddha a lot more than Patriotism, but it’s not as compulsively readable / easy to read as Hikaru no Go. But I don’t think the specifics of the books are all that’s going on there: I’ve been studying Japanese for something like seven or eight years now, I’ve made significant progress but there’s still a long road ahead, it’s pretty natural for me to be getting tired. And I’m still chipping away on the programming project I started a few months back, but I’m not super energetic about that, either.

 

So: I’m really not sure how I want to spend my time. I’m still continuing on all three projects mentioned in the previous paragraph, but more out of force of habit / willpower than because I really think they’re what I want to focus on. And I’m still playing games, but that has a pretty different texture to it than a couple of years ago, too: it’s been a while since I’ve played a forty-hour narrative game, so games end up not having clearly-defined starts and finishes, instead creeping into odd hours in my evenings and weekends.

As do other things I’m spending time on: I’m hitting refresh on Feedly way too often, for example. And it didn’t help that, in late July / early August, we often watched Miss Fisher for an hour after dinner: that only left me with an hour or so until I’d start thinking about going to bed, and I have a hard time convincing myself to work on something meaningful when given less than two hours. What probably would make me happier is if I could group those smaller activities: if I have energy after dinner some evening, then I should dive into something bigger (programming, a blog post, whatever), while if I don’t have energy (which is just going to happen a couple of times a week, if for no other reason than that I might have gone to bed a bit late the previous night), then that’s when I should go through my blog reading backlog and play some Hearthstone.

And then there’s work: it’s generally happy these days, certainly my work-related mood swings have leveled out noticeably. But work is happy in a way that isn’t leading to outside-of-work excitement (or even necessarily active excitement at work itself): in particular, it’s not currently pointing me in the direction of my next interest.

 

I dunno. Things are fine, and in many ways actively good. And I really like how taking the train to work means that I’m reading noticeably more! I’m just a little more adrift than normal.

false equivalence and maintenance of privilege

August 27th, 2014

On Sunday, the New York Times wrote an article about Michael Brown saying, among other things:

Michael Brown, 18, due to be buried on Monday, was no angel, with public records and interviews with friends and family revealing both problems and promise in his young life. Shortly before his encounter with Officer Wilson, the police say he was caught on a security camera stealing a box of cigars, pushing the clerk of a convenience store into a display case. He lived in a community that had rough patches, and he dabbled in drugs and alcohol. He had taken to rapping in recent months, producing lyrics that were by turns contemplative and vulgar. He got into at least one scuffle with a neighbor.

Which is a vile paragraph for the Times to have published. I can only imagine that what was going through the author and editor’s heads was a desire to appear balanced: the next paragraph said

At the same time, he regularly flashed a broad smile that endeared those around him. He overcame early struggles in school to graduate on time. He was pointed toward a trade college and a career and, his parents hoped, toward a successful life.

But this sort of “balance” is singularly inappropriate: there’s a good reason why we use the term “eulogy”, coming from a phrase meaning “speak well”, to refer to writings about people who have just died. Situations where it’s appropriate to break that rule are few and far between; talking about a kid who was just the victim of police murder (followed by weeks of terrorism committed by multiple levels of police, no less) is not one of those exceptions.

This is a classic case of false equivalence: when writing a story about a politically touchy issue, the media likes to find two sides of the issue to present, and to present those sides without any sort of context that might cause one to evaluate one side more favorably than the other. It doesn’t matter if one of those sides is supported by essentially all experts on the subject while the other is only supported by loons or guns-for-hire; it doesn’t matter if one of those sides is engaging in behavior squarely within our political norms while the other side is doing historically unprecedented attacks on the very concept of majority rule; and, as here, it doesn’t matter if one of those sides is behaving with simple compassion while the other side is lacking even a shred of simple human decency. False equivalence demands both.

 

Our dominant non-right-wing media outlets are beyond hope on false equivalence in the political arena. In more personal stories, they aren’t in general, but of course we know what’s going on here. Michael Brown was black, and my country’s paper of record knows what sort of story it’s expected to tell in that scenario, what sort of images to invoke. I of course have no way of knowing for sure, but my assumption is that this choice of the type of wording for the article wasn’t even an active choice by the author: that’s just what came out of their subconscious.

And, whether conscious or subconscious, that choice is very well grounded indeed. See, for example, this analysis of whom the Times labels as “no angel”: you basically either have to have done horrific crimes or be black. Or, for another example, see this comparison of the above story about Michael Brown with one on one of the Boston bombers; the Boston bomber story isn’t from the New York Times, but I don’t think that weakens the power of this comparison.

That’s part of the evil of privilege: it’s so pervasive that even our subconsciously constructed phrasing works to actively maintain it, to bolster it so strongly that just trying to get to a position where we can start looking behind that privilege is exhausting. But that’s only one part of that evil: there’s plenty of active maintenance of privilege out there, of people actively and intentionally using it to help themselves and to harm, even kill, others, as Ferguson has given us endless evidence of.

 

Last week, indie game developer Zoe Quinn was a target of focused and sustained harassment. Harassment which is still ongoing for her, and which has spread more broadly. That led to many reactions, including one from Kotaku that contained the following paragraph:

We’ve long been wary of the potential undue influence of corporate gaming on games reporting, and we’ve taken many actions to guard against it. The last week has been, if nothing else, a good warning to all of us about the pitfalls of cliquishness in the indie dev scene and among the reporters who cover it. We’ve absorbed those lessons and assure you that, moving ahead, we’ll err on the side of consistent transparency on that front, too.

Yes, the last week has shown one downside of “cliquishness in the indie dev scene”: if you’re friends with a bunch of other indie devs, and if 4channers hack one developer’s account, then you have to start being very careful about what links you click on in Skype messages. That downside, however, is not what Kotaku was talking about.

To be honest, I’m not entirely sure what Kotaku was talking about; that whole post of theirs made very little sense to me. But I suspect that it’s another example of the false equivalence trap that I talked about: this is a big story that some people would like to present as a controversy, so Kotaku felt that it needed to take both sides seriously, and use the controversy as a way to present themselves in a good light by doing some introspection.

 

Again, though, it’s not just false equivalence here: it’s privilege that’s shaping every part of the discussion. They didn’t do just any sort of introspection here: for example, they didn’t do introspection prompted by interactions between AAA game developers that provide direct funding to Kotaku and that have overwhelmingly male leadership, employees, and audiences, or introspection prompted by male editors hiring and publishing more and more men from their circle of friends. (I actually suspect Kotaku has been introspective about the latter in the past; admittedly, that’s the only reason why I visit their site at all.) Instead, they decided that the appropriate target for their introspection was their interactions with a platform that leads to donations of single-digit numbers of dollars to marginalized voices, with those donations leading to no significant gifts in return.

So here, too, we see privilege working at a subconscious level (at least I’d like to hope it’s subconscious!) so that just fighting to reach a level playing field is exhausting. Which would be bad enough if it were just subconscious, but in this battle as well, we see much more terrifying active attacks from people trying to maintain their dominance: trigger warning, but here’s a sample of what Anita Sarkeesian is seeing.

 

Fuck all of this. Can haz revolution plz?

apple tv business model

August 24th, 2014

It’s getting to Apple product announcement season, which means that there’s a decent chance that an appified version of the Apple TV will be announced. There’s a lot that’s obvious about it (it’ll run iOS, its hardware will presumably close the gap with iPhones / iPads), and there are some big questions (what’s the input method going to be, in particular), and, knowing Apple, there may be a complete surprise somewhere.

What I’m wondering, though, is: what’s the business model going to be for the device? The current version is priced at $99: basically, it’s priced like an accessory to your iPhone and iTunes. And I imagine the hardware is cheap enough that they can get a reasonable profit margin for the current hardware at that price, though who knows.

I would also imagine, however, that that profit margin will disappear (and possibly go negative) if its guts get noticeably ramped up. (Which will be necessary if they’re really trying to take a swing at the core game console market; I’m not 100% convinced that Apple is going to do that, but I think they probably will?) And I also imagine that there’s more to the profit margin than just selling price minus manufacturing cost. And, of course, it’s not at all given that a more powerful Apple TV will stay at the $99 price point, though if the only option is, say, $199, then that will reduce its effectiveness as an iPhone / Apple content ecosystem accessory.

 

I could be wrong about the margin disappearing if the guts get more powerful: Ben Thompson presents some figures based on an IHS report that suggest that, actually, an Apple TV based on current hardware would cost about $99 to make. Of course, Apple likes their profit margins, so I don’t think they’d actually sell it at $99; but something in the $149 – $199 region might work? And maybe they’ll keep around the current version for people who really just want an accessory? I’m not sure.

The other aspect of pricing is actually the one I’m curious about. While grocery shopping today, I was listening to John Gruber, among other things, blast traditional Windows PCs for the crapware that comes with them; that’s mercifully absent on most Apple devices, but the one big exception is the Apple TV. Not that the Apple TV comes with anything that’s as bad as Norton nagware, but still: the device is full of stuff that I don’t care about, that I would never install and don’t want on my screen. And I assume that Apple is not doing this out of the goodness of their heart (though admittedly, in the absence of an app store, it does help users to have some of this); I assume money changes hands.

But I also assume, based on Apple’s past behavior, that most or all of those third-party apps are going to stop coming by default with an appified version of Apple TV: people can download what they want. What I’m not sure about is if Apple cares about that. It does feel to me like, to a larger extent than on the iPhone / iPad, the most important apps on an Apple TV are going to be ones where Apple doesn’t have a natural connection to the revenue stream, because they’ll be free and accessible via non-Apple subscriptions, so the crapware money would be going away.

Maybe I’m wrong about that last sentence, though: because my understanding is that the Apple TV Netflix app does provide an option to subscribe to Netflix via your iTunes account? So, if that’s the case, maybe that will be the norm for media apps on the Apple TV: they’re accessible via third-party subscriptions, but Apple will require vendors to provide an option to subscribe via your iTunes account, and because of inertia, Apple will actually make a quite decent amount of money through that? That makes sense now that I type it out; I’m curious if Apple takes a 30% cut from subscriptions like that or if they take a smaller one. And it certainly beats having crapware preinstalled.

 

So, probably no big mystery here: the device will be cheaper to make than I initially thought, Apple will raise the price enough to get a decent profit margin, and the crapware fees will turn into more up-front subscription fees? I’m still curious how it will compete as a game machine: I’d imagine it will be significantly cheaper than either the Xbox One or the PlayStation 4, and I imagine that much of that is because of significantly lower performance. (How does the A7 GPU compare to modern PC GPUs?) Metal will help compared to previous iOS versions, but I assume that’s just bringing iOS up to parity with the console world in terms of architecture tax. And, hey, if lower GPU performance means that the latest shooters don’t work on the Apple TV, that’s perfectly fine with me…

There are potentially other ways in which iOS 8 will help the Apple TV: Muttering suggests that app extensions will offer some interesting controller possibilities, and Macworld raises HomeKit possibilities. That latter article in particular gives other reasons why Apple might be willing to keep the margins a bit lower on the Apple TV than on their other products: it could continue to evolve in its current role as a piece of plumbing that helps the ecosystem as a whole thrive.

And, of course, it’s entirely possible that we’ll have to wait until 2015 for the new Apple TV to materialize: this fall is clearly going to be a more-interesting-than-normal Apple product announcement season, but it seems like knowledgeable people are more confident that there’s going to be a big wearables announcement than a big Apple TV announcement? I certainly don’t know, and Apple is clearly capable of waiting until something is ready. I’m mostly just looking forward to replacing my phone, and I’m curious about Continuity (enabled, in my case, by Family Sharing).

ascension: rise of vigil

August 23rd, 2014

Some of the Ascension sequels I’ve enjoyed as much as the original; sadly, Rise of Vigil was not one of them. The new mechanic this time is a third currency, called “energy”: unlike the other two currencies, though, this one doesn’t go away until the end of the turn. Instead, many cards gain special effects if your energy level is above a certain threshold when you play or acquire them.

The energy-providing cards almost always come with card draw, so they don’t clog your deck. Also, the standard energy-providing card never shows up alone for purchase; instead, other cards will randomly show up with one or more energy cards under them. This means that, sometimes, you have to decide whether to buy a card that would otherwise be suboptimal in order to get more energy; and because of the card draw, the energy itself is always a good thing.

Which could be okay: it encouraged me to buy cards I otherwise wouldn’t consider, and variation is always good. But the flip side is that I didn’t feel like I was building up a strategy in response to the cards available for purchase: instead, I was ending up with a random-ish hand in order to maximize energy. Or, to look at it another way: the limited card row meant that the game already had a mechanic encouraging me to mix it up; I didn’t find it helpful to have a second such mechanic.

Of course, I didn’t have to focus on energy, and indeed sometimes I didn’t. The thing is, though: some of the energy effects are crazy-powerful, so if you skip energy, you’re shutting out the possibility of the most powerful strategies, and those powerful effects and energy are both plentiful enough that you’ll probably lose in that situation. Energy effects can turn a cheap, bad card into one that can acquire any hero for free; they can turn a powerful card that can defeat any monster into one that can defeat all monsters.

There’s probably more balance than I’m giving the game credit for: I didn’t play it enough to get a super-solid feel for it. And part of that isn’t Rise of Vigil’s fault: I’ve got other ways to spend small chunks of time. Still: not my thing.

monument valley

August 15th, 2014

Earlier this summer, I stopped my playthrough of BioShock because, frankly, I was getting angry at the game. I didn’t want to spend my time going through grandiose, facilely unreflective morality plays: I wanted to play games that were more closely crafted. I’d seen screenshots of Monument Valley and heard good things about it: I was quite optimistic that I would feel a lot better playing through it than continuing through BioShock.

And I was right. I mean, I don’t know that I was right, because I didn’t run the experiment of continuing through BioShock, but I do know that spending a couple of hours on Monument Valley was a thoroughly enjoyable palate cleanser, just what I needed at the time. (And it would have been quite enjoyable even if my palate hadn’t needed cleansing.)

Not that I have much to say about Monument Valley: I partially blame my not mentioning it until now on my lack of energy this summer, but only partially. I think I have a post touching on puzzle games lurking in my head, and hopefully it will make it out soon, but it’s not out yet, and I suspect Monument Valley won’t be the most relevant game even to that post. So, I’ll just say: if you’ve seen screenshots from the game, you know whether or not you’ll like the visual aesthetics of the game, and I enjoyed the puzzles as well. More of this, please.

on “on scorched earth”

August 11th, 2014

Brendan’s recent post “on Scorched Earth” lamented that the Netrunner card Scorched Earth was “inelegant”. I can see where he’s coming from—I’m certainly not going to claim that Scorched Earth is a paragon of elegance—but I think he undersells the card. In particular, while I think his alternate proposals would all make for interesting cards, I think that Scorched Earth enriches Netrunner in a way that his proposed replacements wouldn’t.

There are two ways for the Corporation to win in Netrunner: by scoring seven agenda points, or by forcing the Runner to discard more cards than are in their hand. These aren’t parallel—the former is the primary way for the Corporation to win, and most of the interactions revolve around that mechanism—but they’re both important, because without the latter, the Runner could be a lot more careless. For example, the presence of Snare means that, if the Runner has fewer than three cards in hand then they should think twice before running on a server with an unadvanced card in it, or even running on HQ or R&D: if they hit a Snare, they’ll lose the game.

In go, there’s a concept called “honte”: this translates as “proper move”. When responding to a situation, you’ll have different ways to play, but frequently local pattern matching will mark one of them as proper. That doesn’t mean that that proper move is always the best move—sometimes the global situation suggests otherwise, and sometimes detailed reading of the local situation will reveal that the honte isn’t the best move even locally—but nine times out of ten, it’s the right thing to do. These proper moves sometimes look a little slow (especially for those of us who aren’t good enough at the game to appreciate the downsides of not playing the proper moves), but if you stick to them, you’ll generally end up with a solid position while your opponent’s risks lead to their downfall.

In Netrunner (and indeed in most other games!), this concept of proper moves also appears. It’s more likely to appear in a negative sense in Netrunner than in go: as discussed above, Snare means that it’s generally not proper to make a run with fewer than three cards in your hand, for example, and the threat of tags (which Snare can also produce, it’s pretty vicious!) also means that in general it’s not proper to run on the last click of a turn, because if you pick up a tag, you’ll have no clicks left to clear it before the Corporation’s turn. So it’s not so much that certain moves are proper as that certain moves are improper; it boils down to a similar effect, though.

 

So, to sum: card damage is one route to winning, but that’s not its main role in Netrunner. It’s mostly there as a mild risk tax on the Runner’s actions (at least mild if the Runner doesn’t overweight loss aversion), and by playing proper moves, the Runner can almost always avoid losing for that reason. In particular, it’s almost impossible for the Corporation to create an active strategy to win via card damage.

Or rather, it would be almost impossible for the Corporation to do so without Scorched Earth. Because Scorched Earth is one of the few cards that lets the Corporation cause significant amounts of card damage during the Corporation’s turn. (In the core set, the only other such cards cause the Runner to lose one card at a time; I haven’t exhaustively surveyed the expansions, but I think it was about a year before a second such damaging card showed up, with Punitive Counterstrike.) So, without Scorched Earth, the Corporation would have no active way to win by card damage; and while I do think it’s better for the Corporation’s winning strategies to be focused on scoring agendas, I also think it would be a shame if there weren’t any active card damage routes to victory at all.

And it’s not like a Scorched Earth win is easy to pull off. You need two of them to flatline the Runner (assuming they keep their hand properly full), so even assuming that the Corporation has three of them in their deck, the Corporation will expect to have to make it through most of their deck to have a chance of a Scorched Earth win. And to pull it off, you need the Runner to be tagged; but almost all of the ways to get tagged take place on the Runner’s turn, giving the Runner chances to clear their tags. (Or they can try to be careful and avoid getting tagged at all.) There are ways for the Corporation to tag the Runner during the Corporation turn (SEA Source, for example), and I’ve certainly won my fair share of games by playing SEA Source plus two Scorched Earths, but doing that is going to require the Corp to have noticeably more credits than the Runner (enough to make the SEA Source trace stick while having credits left over for the Scorched Earths), so the Runner can foil that plan by staying rich.

And, if that weren’t enough, there’s another way for the Runner to foil the plan: it’s not in the core set, but the very first Netrunner expansion introduced Plascrete Carapace. One Plascrete Carapace is enough to protect against a Scorched Earth, so once that card became available, the proper move for the Runner when deckbuilding was to include two Plascrete Carapaces in their deck (and they’re neutral cards, so anybody can do that): that’s enough to stack the odds significantly in the favor of the Runner in the Scorched Earth battle.
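The arithmetic behind all of this can be sketched in a few lines; this is just a toy model using the printed card values (Scorched Earth does 4 meat damage, a Plascrete Carapace carries 4 damage-prevention counters, and the Runner is flatlined by taking more damage than they have cards in hand):

```python
# Toy sketch of the Scorched Earth flatline arithmetic, using printed
# card values: Scorched Earth does 4 meat damage; a Plascrete Carapace
# can prevent up to 4 meat damage; the Runner is flatlined when a
# damage source exceeds the cards left in their hand.
SCORCHED_DAMAGE = 4

def survives(hand_size, plascrete_counters, scorched_count):
    """Return True if the Runner survives `scorched_count` Scorched Earths."""
    for _ in range(scorched_count):
        damage = SCORCHED_DAMAGE
        prevented = min(damage, plascrete_counters)
        plascrete_counters -= prevented
        damage -= prevented
        if damage > hand_size:
            return False  # flatlined
        hand_size -= damage  # the surviving Runner discards the damage
    return True

# A full five-card hand survives one Scorched Earth but not two...
assert survives(5, 0, 1) and not survives(5, 0, 2)
# ...while a single Plascrete Carapace absorbs one of them entirely.
assert survives(5, 4, 2)
```

Which is exactly why the proper moves are to keep the hand full and, once the card exists, to pack Plascrete: the double-Scorched kill only works against a Runner who has let one of those slip.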

 

So, to sum: the Runner can foil Scorched Earth by keeping four cards in their hand at the end of their turn, by avoiding ending their turn tagged, and by keeping up with the Corp on economy; these are all proper moves anyways. If that’s not enough to make the Runner feel confident, then throw in a couple of Plascrete Carapaces. Also, odds are that it will take a while for the Corp to draw enough Scorched Earths plus tag generation to win that way even if the Runner doesn’t have Plascrete Carapaces, so this also encourages the Runner to keep up the pressure on the Corp, which makes for a more exciting game all around.

Or at least a mostly exciting game all around: in Netrunner as in go, you don’t always want to play the “right move”, putting Plascrete Carapaces in your deck just to protect against Scorched Earth is grating, and you can’t simultaneously put pressure on the Corp while stockpiling money and limiting your runs. There’s a flip side for the Corp, too: you always have to do some work to pull off Scorched Earth even if the Runner lets their guard down, because Weyland doesn’t have much tag generation (in fact, no identity other than NBN has a lot of tag generation), and the splash cost is so high that you’re going to use at least 8 and probably 12 of your 15 influence on Scorched Earth if you do go that route. So this means that very few Jinteki or HB Corp decks will include Scorched Earth at all, and even NBN and Weyland decks will frequently find it better to focus on something else.

And this is where things get interesting. Given those calculations, is it really worth it for the Runner to waste two deck slots on Plascrete Carapace? Or, going a step further: some Runners will build their deck so that, in the absence of Scorched Earth, nothing horrible will happen to them if they get tagged. So, while I said above that the “proper move” for the Runner is to avoid ending your turn tagged, you can also decide to play as the Runner in a way that embraces the possibilities of getting tagged, accumulating tags right and left. If you do that, you’re vulnerable to Scorched Earth (as well as other cards, e.g. the dreaded Psychographics / Project Beale combo), but the rewards can be huge:
it turns Account Siphon from a card that (at its best) takes three clicks to get an 11-credit Corp/Runner swing into a card that (at its best) takes one click to get a 15-credit Corp/Runner swing, which is enormous. Scorched Earth probably has a larger effect on the game in the way in which it puts a real bite into tag calculations than in the possibilities that it opens up for an affirmative strategy to win the game; and yes, I would call that an elegant design choice.
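That click/credit comparison can be sketched as a quick calculation (again a toy model from the printed core-set numbers: the Corp loses up to 5 credits, the Runner gains 2 per credit lost and takes 2 tags, and clearing a tag costs 1 click and 2 credits):

```python
# Toy sketch of the best-case Account Siphon swing, using printed
# core-set numbers: Corp loses up to 5 credits, Runner gains 2 per
# credit lost and takes 2 tags; clearing a tag costs 1 click and
# 2 credits.
def siphon_swing(clear_tags):
    """Return (clicks spent, Corp-vs-Runner credit swing)."""
    corp_loss = 5
    runner_gain = 2 * corp_loss  # 10 credits
    clicks = 1                   # one click to play the run event
    if clear_tags:
        clicks += 2              # one click per tag...
        runner_gain -= 4         # ...and 2 credits per tag
    return clicks, corp_loss + runner_gain

assert siphon_swing(clear_tags=True) == (3, 11)   # the "proper" line
assert siphon_swing(clear_tags=False) == (1, 15)  # floating the tags
```

Floating the tags is both faster and richer; Scorched Earth is the main reason that trade isn’t free.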

 

Don’t get me wrong: I don’t feel like I’ve been clever if I win the game as NBN by using Scorched Earths. (Though if we’re looking at NBN inelegance, I’d cast my eye first at AstroScript instead.) But I’m glad it’s there to open up my possibility space as a Corporation when deckbuilding, and to strike fear in the heart of the Runner when they don’t know if I have Scorched Earths in my deck. (Or when they’ve caught sight of a Scorched Earth in my hand and are suddenly stepping much more gingerly, wondering when the inevitable second one will appear.)

And, on the Runner side, I need to embrace probability: make judicious bets, figure out which risks are the right ones to take given the current state of the game and what I’ve seen of my opponent’s hand. If they’re NBN, I’ll try to figure out if they’re focusing on Scorched Earth to complement their tagging; if they’re Weyland, I’ll be very nervous about any glimpse of tag creation that I see in their hand.

And then, every once in a while, I’ll build a Jinteki deck with a single Scorched and just enough tag generation to make it stick right after the Runner has stumbled into some net damage. If that combo lands, it will cause the Runner to say many things about the situation, but I suspect that “inelegant” won’t be the first adjective that they’ll use.

energy and pain

July 31st, 2014

I haven’t written here much recently; and I haven’t been working on my recent programming project, either. That’s not a sign that there’s something else that’s grabbing me: I just haven’t had energy during the evenings for about two months now. So it’s been easiest just to read blog posts or watch TV or play games; I’m actually really glad I’ve been replaying the Phoenix Wright series, and I’m quite enjoying Miss Fisher’s Murder Mysteries as well, I just wish there were more evenings when I was doing something else.

For a lot of the time, it was simple tiredness. My allergies were out of control in the winter, but they got a lot better with the help of nasal rinsing. But then they came back; I’ve bumped up my Claritin intake again, and the allergies are mostly under control now, but not great. My current theory as to what was going on there was that the bathroom remodel kicked up a lot of dust, so I was hoping that it would get better once the bathroom remodel was over. (The remodel has turned out great, incidentally!) I’m not convinced my allergies are completely better now, though, so maybe there’s something else going on.

 

Or rather: maybe there’s something else going on with the allergies: there’s certainly something else going on in general. A couple of times over the last year my back spent a couple of weeks hurting; that happened again this summer (starting just before the construction work), and it just didn’t go away. Eventually, I went to see a doctor and started physical therapy; the therapist gave me some stretches, and it’s actually been a week since my back has hurt.

Which would be good, but my symptoms have moved: my leg started hurting, with a range of symptoms, symptoms that are triggered by a different set of actions (sometimes including just sitting down for a minute) than my back problems. That leg pain first appeared Saturday a week and a half ago, and as my back was getting better, my leg was getting worse; I went to the doctor last Thursday, and then that evening I had problems sleeping for the first time (noticeable ones: I woke up at 2:30am and couldn’t get back to sleep), so I asked Liesl to drive me to the doctor the next morning as well, and just sitting in the car was super painful. I couldn’t see my regular doctor on Friday, and I wasn’t too impressed by the doctor I did see, but I got some stronger pain relievers.

The good news is that my symptoms haven’t gotten any worse since then, and I’m pretty sure they’ve gotten a little better. (Though I don’t know for sure if I could sleep through the night without the drugs; also, while sleeping on my back is much less painful than sleeping on my side, it is really not something I enjoy.) But they are still very noticeable, and noticeable in ways that honestly kind of worry me: for example, I wanted to drive to the grocery store a couple of nights ago to pick up some replacement tomatoes; I only made it to the end of the driveway before deciding that, no, driving a car still really was not a smart idea for me. And walking home the last couple of days, I’ve noticed that the toes on my right foot are curling a little bit (and in retrospect I actually think I’ve seen signs of that for months), so there’s some odd physical behavior going on beyond just the pain.

So right now I just don’t know what’s going on. It seems like I’ve got a pinched or inflamed nerve somewhere; presumably caused by the back issues, but I don’t know why these new symptoms have appeared, or what to make of the fact that they’re appearing as the back pain is disappearing. And having the symptoms mutate is getting tiring; and it’s also tiring trying to figure out how to arrange myself over the course of my day to deal with this. (How much standing, how much lying down, when sitting down works at all.)

 

Or, to come back to what I started with: how to avoid being in pain during the evening. It’s hard to write or program at my computer if I can’t sit down and feel like I’m relaxing at least a little bit. Working standing up is a possible way to deal with that; but if I’m spending much of the day at work working standing up, then I don’t really have the energy to do that at home. So it’s a lot easier to watch an episode of Miss Fisher and then lie down on a sofa reading blogs for a little while until it’s time to go to bed.

This isn’t awful or anything: more a reminder of my mortality. And I think the exercises my therapist is recommending really are helping: not only is my back not hurting, but my back feels like it’s moving well. But I hope that their suggestions plus the anti-inflammatories I’ve been taking for a week start having an effect on this pinched nerve soon.

brenda romero: jiro dreams of game design

July 13th, 2014

It’s months since GDC, and I’m still trying to unpack my feelings about Brenda Romero’s Jiro Dreams of Game Design talk. Or maybe not so much my feelings about it—it’s an excellent talk, no question—but my emotional reactions to it. Her talk confronts concepts that I care about (greatness, team structure, creation) in contexts that I care about (games, food), leaving me with immediate reactions to almost everything she said, but immediate reactions that were frequently in conflict, and with me quite sure that there’s a lot to think about beneath those immediate reactions.

I watched it again last night; I’m still not sure what I think, other than that I’m now glad I’ve seen it twice! But, trying to put together some thoughts:

Greatness

She talks a lot about wanting to be great, and about the effort necessary for that. And this is where a lot of my insecurities with respect to the talk come in. Because, of course, there’s a part of me that wants to be great: who doesn’t want to be great? In the abstract, after all, it sounds, well, great. But, when it comes down to it: I am not behaving in a way that has led or will lead to me being great at anything.

Don’t get me wrong: I am egotistical enough to believe that I’m pretty damn good at some things, and even that I maintain a fairly high standard (relative to an appropriate baseline) across a fairly wide range of things. For example, I’ve largely made my living as a programmer for the last decade, and I’m pretty sure that I’m a noticeably better programmer than most professional programmers.

But I’m equally sure that, in an important sense, I’m not a truly great programmer. There’s nothing wrong with this, and for that matter my bar for greatness in that field may well be abnormally high: but there are significant ways in which I don’t meet that bar.

And her talk pointed at a few reasons why that might be. One is that I’m not quite obsessed enough. She talks about thinking about games from when she wakes up to when she goes to sleep; I think about programming quite a bit, including at odd hours, but it’s not that same sort of all-dominating passion that she projects. Another is that I don’t put in the hours; that’s a related concept but not at all an identical one, I’ll come back to that below.

Also: I don’t feel creative enough. Now, I’m not sure if I think that’s actually necessary for greatness, and for that matter I’m not sure how much Romero thinks it’s necessary for greatness. But it feels to me (and this goes way back, it’s not just my most recent decade) that I’m abnormally good at quickly coming to grips with others’ ideas and using them in productive ways, but there’s a certain seed of novelty that I’m not particularly good at.

Or, to put that last paragraph another way: I can be a quite good craftsperson. And that’s important to me, and for that matter it’s important for greatness. I was about to write: but maybe something’s still missing there? Now that I type this out, though: being a great craftsperson isn’t a contradiction in terms, it’s just a quieter sort of greatness.

So, I guess, if I were going to be great, that’s the sort I would be! But I still would need more passion and to put in more hours.

Actually, rereading this section: I think there’s something wrong about my angle here. What’s important in this context isn’t people being great, it’s works being great. And Romero’s talk is about great works, not (or at least not just) great people. When she raises and rejects the Triad of Constraints, for example, she does so in the context of producing a great work. Hmm.

Teams, Control, and Responsibility

As is obvious from the talk’s title, Romero brings in food metaphors, metaphors from chefs and kitchens. But Jiro isn’t the only chef she talks about; in particular, she talks about Gordon Ramsay several times, and this was the part of the talk that I had the strongest negative emotional reaction to. Some quotes from that portion: “He had to get all these people to do what he wanted them to do”; “They screw up and he’s the one who’s going to get blamed”; “Screw it up? People remember YOU”; “Control your team or your team controls you”; “My standards, my rules, my kitchen”. (Those last two are Romero quoting Ramsay, I believe the others are her description of what she saw.)

This is a mindset that I have zero interest in: I want nothing to do with command and control, and I want nothing to do with team structures consisting of one guiding light and other people whose job it is to implement that person’s directives. And there’s an undercurrent of fear mixed into that egotism that I think is unwarranted on both counts: I simply have no idea who the chef is in, I believe, any of my favorite restaurants. I do not, admittedly, generally patronize restaurants that have been awarded Michelin stars, but I’ve been to one or two, and I don’t think that would make a difference in my awareness of the chef’s name unless the chef decided to engage in self-promotion. For games, it is more frequent (but by no means universal) that I can name the lead designer of my favorite games, but even so: my focus is on whether the game is good, the designer is an afterthought.

So no, people won’t remember you, they’ll remember your work. And not your (singular) work but your (plural) work: the work that the team that you are part of produced. As I belatedly said above: great works are what’s important, great people are a secondary concept.

And yes, great works will (usually!) have a strong, coherent vision at their core. And yes, having that vision come from one person is one way to get there. But what’s important is that the vision is shared and made real by the team; and, as a programmer and in my prior life as a mathematician, I have a lot of experience working with visions that feel stunningly real because they’re a fundamental part of how the world works, or how our shared conception of the world works. So we can all work together to understand what zeta functions really are, we can all work together to understand what simple design really is. And there are tools to let groups of people express and produce works of shared beauty, groups don’t have to invent that from scratch.

Romero does not, fortunately, spend all of her talk embracing the Ramsayan end of this spectrum: I don’t believe, for example, that she thinks that game designers should be dictating the details of how programmers write code to support the game’s vision. And, once I got past my revulsion at the command-and-control aspect of this message, there’s a part of her message that I liked rather more. For your team to produce something great, your team has to do great work, and that won’t happen if you don’t feel responsible for making that happen. In Romero’s narrative, the “you” is a single person in charge of the team, but she also talks about trusting and helping your coworkers to do great work; in my version, it’s everybody’s responsibility, but that most definitely does not mean devolving into greatness being nobody’s responsibility. Instead, we all need to work together to figure out what great work means, to do great work ourselves, and to help others to do great work.

Food, Games, and Software

Romero is a game designer, and she talks about chefs. I am neither; and, listening to the talk made me wonder if those two fields are related in a way that programming, or at least the sort of programming that I do, isn’t. Both of those fields are, in large part, about crafting experiences: in fact, she goes out of her way to talk about how the best restaurants (at least when looked at through a Michelin lens) spend time on the experience of dining there writ broadly, not just on the food. Everything is there because it has a reason to be there, everything is done with intent, with focus, with care and craft.

That last sentence is also characteristic of great programs. But it’s a characteristic that’s only visible from the point of view of somebody working on the program; writing a program that way has an effect on the experience of somebody using the program, but that effect is not direct.

Of course, programs have an experiential component as well, and this aspect of greatness makes sense in that context as well; and that leads to a form of greatness that is directly analogous to what Romero talks about in food and games. (Indeed, given that much of her work is on video games, she is talking in part exactly about this aspect of great software!) But, returning to the previous section on teams striving together for greatness: a cross-disciplinary team striving together for greatness is going to be focused on that experiential side of greatness instead of the internal side of greatness, because that experiential side is something they can all perceive and affect.

As a programmer, which do I care about more? I care about them both, of course, and they’re related. By writing great software as measured through the internal lens, I can affect its external greatness in a couple of ways. One is that well-crafted software is, in an important sense, unobtrusive to the user: it responds quickly instead of making the user wait, it is consistent instead of imposing a cognitive load, it doesn’t crash or have bugs. And another is that well-crafted software is responsive to the needs of people who are designing that experience: as somebody like Romero is experimenting to try to tease out the core and then refine the details of a vision, great programmers can help by producing software that they can adapt as quickly as possible (or even provide hooks to let designers adapt it themselves) to actively help that process.

As I said above, though: I’m a craftsperson at heart, and so my focus is internal. But one of the aspects of agile that I’ve internalized well is the desire to write code in order to meet real user needs and desires, and to enable quick experimentation to discover how to best meet those desires. So I would prefer to be part of a company that wants to write great software to deliver a great experience, and if a company fell down too far on either measure of greatness, I wouldn’t join it. Having said that: my bar on what I’m willing to consider on the programmer craft side of things is quite a bit higher than my bar for the user experience side of things.

Obsession and Time

I don’t think I’m obsessed enough to produce really great work. Which isn’t to say that I can’t get pretty obsessed at times: over and over again, I’ll dive into some aspect of learning (frequently but not always software-related), read the most important books on the topic, dive into discussions on the topic, experiment on the topic, and repeat it until I feel I’ve internalized something at the core of that topic. But listening to Romero’s talk (this one and others): I’m not as obsessed with programming as she is with games. Also, my obsession quiets down when it gets to the stage when I feel like I understand what’s going on in some area: my compulsion is to build a world view, not to create. (And, in practice, being a craftsperson is where I end up in the middle.)

There’s another question here, though: totally aside from obsession, how many hours are you willing to put in? Her talk refers to crunch as a fact of life in the game industry; it doesn’t have to be, and I work to make it not part of mine. I’m honestly not sure to what extent my refusal conflicts with greatness: part of extreme programming is the claim that putting in more than about 40 hours a week is actually counterproductive over the medium term, because it dulls the brain and you start writing worse code. It’s clear that there’s a value of N where working more than N hours a week is counterproductive if your goal is greatness, and there are industrial studies suggesting that productivity maxes out at around 40 hours a week.

And I mostly buy that cap of 40 hours, but not completely. For example, in Chapter 38 of The Cambridge Handbook of Expertise and Expert Performance we have the claim (in a section studying violin students) that

All groups of expert violinists were found to spend about the same amount of time (over 50 hours) per week on music-related activities. However, the best violinists were found to spend more time per week on activities that had been specifically designed to improve performance, which we call “deliberate practice.”

And a cap a little above 50 hours feels more right to me than a 40 hour cap. But in a context of trying to produce great work, it raises some caveats:

  1. That study is about learning, not about producing. Admittedly, great work is going to involve learning even during the production of that work; in fact, maybe it’s impossible to do great work without learning all of the time. (Though the converse is certainly not true: novices are learning but not producing great work!) But still: that study is measuring something different.
  2. The part about deliberate practice is super important. To me, this dovetails fairly well with a striving for greatness: part of doing great work involves being deliberate in what it means for work to be great, and Romero discusses in her talk the importance of having your colleagues look over your work on multiple occasions, which matches the importance of having a coach in deliberate practice. Maybe we should take a lesson from etymology here: great work requires deliberate practice, where by “practice” we return to the meaning of “do” or “act”.
  3. If we go with 50 hours, then I’m not sure what the texture of those 50 hours is going to be, but I’m almost positive that it’s not going to be 10 consecutive hours a day, five days a week. (Or 8 hours a day 6 days a week, or what have you.) Certainly during the times when I was (quite effectively) trying to become an expert in a subject, it would pop up in my life much more broadly than that: for example, Liesl and I had a habit on vacations where we’d be going through rooms in a museum, I’d go a little faster so I’d get a few rooms ahead of her, and then I’d sit down on a bench and read more in one of the math books I was working through. And, actually, when I say I don’t put in the hours, maybe I’m underestimating that: I only put in 40 hours a week (in a standard 8 hour + lunch x 5 configuration) sitting at my job, but I think about my work quite a bit at home, and the very act of writing this blog post is another part of my deliberate practice at getting better at my work. The flip side, though, is: I am not trying to do great work during most of those 40 hours that I do spend at work. So I should probably focus on improving that last bit!
  4. Even if producing and sustaining expert performance is most likely to come from working 50 hours a week, it absolutely does not mean that working 50 hours a week is at all likely to produce expert performance. The vast, vast majority of time, working long hours just means shoveling more crap; I have no doubt that that’s what’s going on almost all of the time when companies ask employees to put in crunch time.

When I put this all together, to me it leads to two recommendations:

  1. First, focus on being deliberate about producing great work. Constantly ask yourself and others how your work could be better, how your processes could be better, what the goals are that you should be striving for in the first place.
  2. Second, listen to your energy level. Producing something even on the small scale that you’re proud of can be very energizing: at its best, doing great work can lead to a feedback loop where you have more energy to do more great work. But once you push yourself too hard, then your work starts to dull; pay close attention to that shift.

I think that second point is where Romero’s obsession gives her a big edge: thinking about games and working on games clearly energizes her. I make a different set of choices, ones that are probably more similar to Johanna Rothman’s.

The Triad of Constraints

When producing something, you want to do it quickly, cheaply, and well; the Triad of Constraints claims that you can pick two out of three at best. To which Romero’s answer is refreshing: fuck picking two, just pick one, make it great.

As she also acknowledges: this can work if you’re producing your own games on your own time; when you’re working as part of a business, telling the people who control the budget that you’re going to ignore speed and cost doesn’t work so well.

I’m not sure that that works so well for me personally as a programmer, though. My focus is on evolving software in steps as small as possible, with an external Product Owner prioritizing the customer-visible features. That means that, at any stage, I want to have written software that’s as good as I could have written in that amount of time, while preserving the ability to continue to do so in the future.

So I’ll alter the triad in the opposite direction, by picking all three. I’m very self-centered, so from my point of view, the cost is generally fixed: it’s my salary, and I’m not going to magically produce twice as much work, or work twice as good, if you pay me more. And I certainly agree with Romero that I want to produce great work. And then the scope is what it is: you’ll get a different product if you ask for the best I can produce in a week than the best I can produce in a year, but in any case you can pick the scope however you want. Or, to put it another way: the Triad of Constraints implicitly assumes that you’re making choices up front instead of evolving; why would I want to do that?

Of course, I’m just punting certain decisions over to a Product Owner; Romero is more the Product Owner herself. That’s the way to approach the control aspects that I discussed above in a way where I’m less dubious: deciding on the sequence and details of user-facing features is an important role, no question.

Works and Creation

She has a comeback to my evolutionary design boosterism: she has no patience for the concept of the Minimal Viable Product, whereas to me it seems like an obviously good step in an evolutionary design.

But I’ve spent my entire professional career on software that is designed to be used and grow over the course of years, even decades. This is very different from a more traditional sort of creative work, where you release a work into the world, let people experience it as a whole, and move on to producing your next work.

And I’m not nearly as convinced about Minimal Viable Products or evolutionary design in the creative work arena. When I’m reading a book, I don’t want to start by reading a minimal version of that book one month, then reading a slightly more fleshed out version a couple of months later, then reading a third version that retreats in some areas based on user feedback and moves in a different direction: I just want to pick up a book and read it. And the same goes for games, much of the time, though admittedly less universally these days.

This doesn’t mean that evolutionary design doesn’t work in a context of polished creative works: you can still produce them iteratively, you can still solicit feedback from a trusted close circle at frequent intervals. And, as she says: “what if I made something as good as I possibly could every frigging day?” That’s one of the lessons she learned from Jiro: he ships every night.

So we’re returning to what I said above: work in small steps without sacrificing quality. I combine this with handing scope decisions off to a third party; she is in charge of scope, and she works in an industry where the scope that you choose for a product when it is released externally is a crucial decision.

Conclusions

Or at least next steps: I like evolution, after all!

One is that I should work harder to be doing my best during the times when I am working on something. If I’m spending the time on something, why not spend the time being focused and doing the best work I can? If I’m not going to do that, it’s probably better to not spend that time: instead, spend the time in a way that lets me get my energy back so that I can focus later.

And the other is that I should seek out greatness more. I’ve worked with one person whom I consider unquestionably great; or rather, I worked in a startup that he cofounded, and we rarely interacted at all. But, even so: those few interactions were incredibly energizing. (I was talking about those interactions with a friend of mine a couple of months ago; she said she’d rarely heard me sound so excited.) I should try to find more of that; I should try to deserve being around more of that.

returning to bioshock

June 14th, 2014

After my unpleasant experience with System Shock 2, I moved on to BioShock. I wasn’t worried that I might have the same problems with BioShock that I had with System Shock 2: I remembered from my prior experience that BioShock took the Easy difficulty setting seriously (enough so that I was thinking of trying Normal on the replay), and the RPG aspects were dialed down and didn’t allow for the same sort of missteps I’d made in System Shock 2.

As it turns out, though: I stopped playing BioShock after the Medical Pavilion level. Not because the game was too hard (I made it through okay on Normal, certainly more easily than I did with System Shock 2 on Easy), but for narrative reasons.

 

Which is a pity, because there were two aspects of the game that were flat-out amazing, one grand and one a little more localized. The grand aspect was the setting itself: the idea of an underwater city, the execution of the architecture (both in its original and ruined aspects), the music and sound design, etc. And the localized aspect was the idea of a cubist plastic surgeon: that’s a wonderful concept to build a level around.

I would have loved a game that went all in on those aspects. Given those two elements, probably the most natural way to flesh them out would be as a slowly paced horror game: one with enough breathing room to let you drink in the environment, but that still lets Dr. Steinman and subsequent characters show through in their glory. And, of course, the actual game does contain horror aspects; but there’s just too much shooting of guns or plasmids, too much hacking of turrets and health stations, too many vita chambers for the horror game to have any conviction. Basically: there’s a part of BioShock that wants to be an RPG with class choices, that wants to be Deus Ex, and that part wins over the proto horror game.

Or, indeed, over any other potential realization of the game that would leave you more room to drink in the mood and setting. If only games would learn from Shadow of the Colossus that it really is okay to leave space…

 

Still: that alone wouldn’t have been enough for me to stop my playthrough. What really got to me is the treatment of the Little Sisters and the Big Daddies. I said more about this in my first playthrough of the game, but: the entire treatment of the Little Sisters is awful. When you meet a small child that you’ve never seen before, the two choices that go through your mind should not be “should I kill this child or should I use this magical shiny thing I’ve been given to perform surgery on the child despite her screams of protest?” Now, admittedly, this sort of iffiness isn’t without precedent in video games: it’s also the case that, if you happen to find yourself in a strange location and come across a gun, then you should not use that as justification for mowing down everybody you meet! But at least that choice has history normalizing it in a video game context, and at least you’re being attacked so you can reasonably consider yourself to be in a “kill or be killed” situation. Whereas with the Little Sisters, the game forces you to commit child abuse, and then has the gall to present one form of that child abuse as the “good” choice.

That’s bad enough, but it then follows it up with a Big Daddy encounter. And here, the situation gets, if anything, even worse. Again, people: if you’re in an unfamiliar, dangerous location, if you meet a small child wandering around, and if you meet an adult whom that child clearly knows and loves and who is protecting that child (and doing so remarkably capably, given the extreme danger of the environment), then the correct choice of action is not to kill that adult. The correct choice of action is almost certainly to treat it as none of your fucking business; if, instead, you decide to treat this as some sort of clever environmental puzzle encouraging you to figure out how to use the many tools at your disposal to dispatch the protector most efficiently, then you are a monster.

 

So no, I really wasn’t in the mood to go further with BioShock after the end of the Medical Pavilion. I’m willing to consider the idea of playing games where I’m a monster, though honestly I would generally far rather not. I’ve got a lot of respect for what I’ve heard about Far Cry 2 or about Spec Ops: The Line; but those games put you in a much more self-consciously morally complex situation than my reading of BioShock does, and they don’t have the player being actively complicit in child abuse as their main theme. Having said that, the Little Sisters aren’t even the main overarching plot aspect of BioShock; maybe those other plot themes are reason enough to go forward?

I didn’t go forward, so I can’t say for sure, I’m just basing the following on my memory of my first playthrough. But my memory says this: the overarching theme basically comes down to two things. One is a poisonous presentation of father/son dynamics: arguments about whether the father gets to tell the son what to do, or whether the son gets to do whatever he wants, killing the father in the process. And the second is, of course, Objectivism.

And, well, fuck that too. Both of these basically boil down to the same thing: man-children who are fighting among themselves about who gets to have their own way, with the rest of the world as collateral damage. And that fits in with the whole Little Sisters / Big Daddy treatment, too: women and children are subhuman pawns for those man-children to use and dispose of as they wish, and men who try to build relations and families are slightly more worthy of respect (they’re men, after all, and if they’re successful in a role of protector then at least they’re participating in the fight) but ultimately need to be destroyed.

If this were satire, it could be a depressingly biting portrait of certain aspects of society. (Including, I suspect, the AAA game industry; I’ll throw Silicon Valley startup culture into the ring, too.) But it sure doesn’t read that way to me: the game isn’t a pro-Objectivism presentation by any means, but the game structurally buys into enough of Objectivism’s conceptual prerequisites that, well, see above.

 

So: no more BioShock for me. I’m curious about Minerva’s Den, but not curious enough to dip into BioShock 2. (And I’m very glad that people involved in that game have moved in a different direction.) Everything that I’ve read about BioShock Infinite makes me think that that game would drive me crazy as well: a glorious environment combined with way too much shooting and an offensive and hamfisted treatment of narrative themes.

Instead, I went through Monument Valley as a truly lovely palate cleanser, and then started a replay of the Phoenix Wright games. And that was absolutely the right choice.

medium: browserify

June 10th, 2014

There’s one problem with the way I first set up my build system for Medium: I had no control over how the CoffeeScript files were ordered. In languages with linkers, this isn’t a big deal: within a library, the linker will resolve all the references between object files at once. But without a linker, ordering becomes more of an issue.

Actually, in CoffeeScript or JavaScript, it’s not that much of an issue: in fact, for small projects you can get away with ignoring it entirely. It’s fine for methods in one class to refer to another class that hasn’t been loaded at the time the first class is defined: as long as the second class exists by the time those methods actually run, you’ll be okay. So that means that the only real issue when starting off is making sure your entry point gets run after everything else is loaded; that’s a one-off case that’s easy to deal with manually. (You can just inline the entry point code in the HTML file, for example.)
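As a sketch of why this works (hypothetical class names, written in the plain JavaScript that the browser ultimately runs): the reference inside a method body isn’t resolved until the method is actually called, so the definitions can arrive in either order.

```javascript
// Game's method refers to Logger, which hasn't been defined yet
// at the point where Game is defined...
function Game() {}
Game.prototype.start = function () {
  return new Logger().tag("started");
};

// ...but that's fine: Logger exists by the time start() is called.
function Logger() {}
Logger.prototype.tag = function (msg) {
  return "log: " + msg;
};

console.log(new Game().start()); // "log: started"
```

The same holds across separate script tags: as long as the entry point runs last, intra-method references between files sort themselves out.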

 

Having said that, just clobbering everything together like that felt a little distasteful to me; and there also turned out to be two practical issues. The first is that Mocha, the unit test framework I used (which I promise I’ll talk about soon!), didn’t use the browser model of sticking everything in global variables: it used the Node.js concept of modules. I actually spent a couple of weeks ignoring that mismatch, writing code that worked in both realms by checking to see if the Node.js variables were defined, but in retrospect, that was silly: the point of this blog post is that doing things the right way is easier than that workaround.

And the second practical issue is inheritance: if class A inherits from class B, then the browser really does need to have seen the definition for class B before the definition of class A. To get that right, I needed a dependency structure; and doing that by hand would have crossed the line from silly to actively perverse. So I looked around, and found that browserify (in its coffeeify incarnation) was what I wanted.
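To see why inheritance is different (a sketch with hypothetical class names, in the plain JavaScript underneath CoffeeScript’s class syntax): the prototype chain is wired up at definition time, so the parent class has to exist at that exact moment, not just by the time methods run.

```javascript
// The parent class must come first:
function Breaker() {}
Breaker.prototype.strength = 1;

function Corroder() {}
// This line executes while Corroder is being *defined*, and it
// dereferences Breaker immediately -- swap the file order and it throws.
Corroder.prototype = Object.create(Breaker.prototype);

console.log(new Corroder().strength); // 1, found via the prototype chain
```

CoffeeScript’s `class Corroder extends Breaker` compiles down to essentially this, which is why subclass files genuinely need their parent loaded first.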

 

First, a brief introduction to the Node module system. When you define what looks like a global variable in a Node source file, it doesn’t actually get stuck in the global namespace: the namespace for that file is local to that file. But Node provides a special exports variable: if you want to export values, attach them to that. For example, if I have a file runner_state.coffee that defines a RunnerState class, I’ll end the file with

exports.RunnerState = RunnerState

That last line still doesn’t stick RunnerState in the global namespace: there’s actually a special global object you can use for that, but you generally don’t want to do that. Instead, if another file wants to refer to that RunnerState variable, it puts a line like this at the top:

{RunnerState} = require('./runner_state.coffee')

The return value of the require() call is the exports object for that file, and I’m using CoffeeScript destructuring assignment to get at its RunnerState member. Once I’ve done that, I can refer to RunnerState elsewhere in that file. (Incidentally, in some situations you don’t need either the leading ./ or the trailing .coffee in the argument to require(), but I found that using both worked best with the collection of tools I was using.)

 

So, that’s the Node.js module system: a nice way to avoid polluting the global namespace and to express your object graph. It comes for free in the Node ecosystem, and all I wanted was to bring that over to a browser context. And that’s where browserify comes in: it lets you write code like it’s Node modules and then it transforms it into a format that the browser is happy with.

To cut to the chase, here’s how to get it to work. Start with the build system from last time. Then install browserify and coffeeify, plus the grunt plugin:

npm install --save-dev browserify coffeeify grunt-browserify

In your Gruntfile.coffee, replace the grunt-contrib-coffee requirement with a grunt-browserify requirement, and replace the coffee block with a block that looks like this:

    browserify:
      dist:
        files:
          'js/medium.js': ['coffee/*.coffee']
        options:
          transform: ['coffeeify']

Also, in your default task, you’ll want to invoke browserify instead of coffee.

 

Here’s the resulting file:

module.exports = (grunt) ->
  grunt.initConfig {
    pkg: grunt.file.readJSON('package.json')

    browserify:
      dist:
        files:
          'js/medium.js': ['coffee/*.coffee']
        options:
          transform: ['coffeeify']

    sass:
      dist:
        files:
          'css/medium.css': 'scss/medium.scss'

    watch:
      coffee:
        files: 'coffee/*.coffee'
        tasks: ['browserify']
        options:
          spawn: false

      sass:
        files: 'scss/*.scss'
        tasks: ['sass']
        options:
          spawn: false
  }

  grunt.loadNpmTasks('grunt-browserify')
  grunt.loadNpmTasks('grunt-contrib-sass')
  grunt.loadNpmTasks('grunt-contrib-watch')

  grunt.registerTask('default', ['browserify', 'sass'])

Now, if you run grunt, you’ll build the output JavaScript file (js/medium.js in this case) like before, but with separate input files treated as separate modules! Which, of course, means that it won’t actually work until you go back through them and add require() and exports in appropriate places.

medium: setting up a build system

May 31st, 2014

After I set up Medium, the next thing I did was start writing code and unit tests. And I will write about unit tests in a couple of posts, but I want to jump ahead one stage, to a build system, because that was something that required workarounds almost from the beginning and turns out to be easy to set up if you know how.

Because, of course, if you’re using CoffeeScript and SCSS, you need a preprocessing stage to turn them into something that a browser is happy with. If you have a single CoffeeScript source file, then running the coffee command is not too crazy, but what if you have multiple source files? You don’t want to run coffee on each of them individually, and you don’t want to have to load each of the outputs individually into your HTML file (or at least I don’t!). The coffee command actually has a --join argument to handle this, so you can certainly work around this manually, but this is definitely getting to the stage where a C programmer would say “I would have written a short Makefile by now”.

 

In JavaScript land, though, you probably don’t want to use Make; there are various options for build tools, and the one I chose (which seems to be the most common?) is Grunt. To get started with it, you actually want to install the grunt-cli package globally instead of putting it in your package.json file:

npm install -g grunt-cli

This makes the grunt command available, but the smarts are all in the grunt package plus whatever plugins you use. Those you install via npm install --save-dev; a good place to start is

npm install --save-dev grunt grunt-contrib-coffee grunt-contrib-sass

Grunt’s configuration file isn’t written in some custom language: it’s an internal JavaScript DSL. And you can configure it in CoffeeScript, too, which is of course what I did. So here’s a basic Gruntfile.coffee:

module.exports = (grunt) ->
  grunt.initConfig {
    pkg: grunt.file.readJSON('package.json')

    coffee:
      compile:
        files:
          'js/medium.js': 'coffee/*.coffee'
        options:
          join: true

    sass:
      dist:
        files:
          'css/medium.css': 'scss/medium.scss'
  }

  grunt.loadNpmTasks('grunt-contrib-coffee')
  grunt.loadNpmTasks('grunt-contrib-sass')

  grunt.registerTask('default', ['coffee', 'sass'])

Pretty self-explanatory. (I have a bunch of CoffeeScript source files but only one SCSS file; eventually I may have multiple SCSS files, but even then I should be able to use includes to get a single entry point.) And, with that in place, I just type grunt and it builds medium.js and medium.css.

Of course, it does raise the question of how all those CoffeeScript files get combined into a single JavaScript file and what to do if you want to have control over that combining; I’ll explain that in my next post. But for now, this works as long as there aren’t load-time dependencies between your CoffeeScript files, and it outputs a single JavaScript file to load from your HTML.

 

I actually prefer not to have to manually type grunt each time I want to rebuild: I like to have Grunt watch for changes and build things every time I save. To get this to work, install the grunt-contrib-watch package and add a block like this to the initConfig section of Gruntfile.coffee:

    watch:
      coffee:
        files: 'coffee/*.coffee'
        tasks: ['coffee']
        options:
          spawn: false

      sass:
        files: 'scss/*.scss'
        tasks: ['sass']
        options:
          spawn: false

Also, make sure to add grunt-contrib-watch in the loadNpmTasks section. If you do this, then you can type grunt watch in one of your shell windows and it will rebuild whenever the appropriate files change. And yeah, it’s a bit unfortunate that you have to specify the file globs twice, but only a bit; if that really bothers you, I guess save those file globs in variables? (We are, after all, writing in a real programming language here.)
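If the duplication really does bother you, one possible shape (a sketch only; I haven’t run this exact file, so treat the details as an assumption) is to hoist the globs into variables at the top of Gruntfile.coffee and refer to them from both places:

```coffee
# Sketch: define each file glob once, share it between build and watch
coffeeFiles = 'coffee/*.coffee'
scssFiles = 'scss/*.scss'

module.exports = (grunt) ->
  grunt.initConfig {
    coffee:
      compile:
        files:
          'js/medium.js': coffeeFiles
        options:
          join: true

    sass:
      dist:
        files:
          'css/medium.css': 'scss/medium.scss'

    watch:
      coffee:
        files: coffeeFiles
        tasks: ['coffee']
        options:
          spawn: false

      sass:
        files: scssFiles
        tasks: ['sass']
        options:
          spawn: false
  }

  grunt.loadNpmTasks('grunt-contrib-coffee')
  grunt.loadNpmTasks('grunt-contrib-sass')
  grunt.loadNpmTasks('grunt-contrib-watch')

  grunt.registerTask('default', ['coffee', 'sass'])
```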

 

There’s one further change that

medium: setting things up

May 29th, 2014

As I said recently, I’m experimenting with writing a Netrunner implementation in JavaScript. I’m calling it Medium; here’s the first in a series of posts about issues I’ve encountered along the way.

Before I go too far, I want to thank two sources of information. The first is Bill Lazar; he’s one of my coworkers, and he’s given me lots of useful advice. (And I suspect still more advice that will be useful once the project gets more complicated.) The second is James Shore: just as I was thinking about starting this, he published a list of JavaScript tool and module recommendations that seems very solid.

Anyways: before starting, I’d made a couple of technology decisions, and they were actually to not quite use JavaScript and CSS: both are solid technologies to build on, but both have annoying warts that I don’t think are worth spending time to deal with. So, in both cases, I’m using languages that are thin wrappers around them: instead of JavaScript, I’m using CoffeeScript, so I don’t have to worry about building my own class system or explicitly saving this in a local variable when I’m passing a function around. And instead of CSS, I’m using Sass (or, specifically, SCSS): when writing CSS, you find yourself repeating certain values over and over again, so having a macro layer on top of CSS can really improve your code. Neither of these languages frees you from understanding the language that underpins it, and neither requires you to learn many extra concepts beyond what the base language provides: they just automate some common tasks.

(Incidentally, once my CSS gets more complicated, I’ll probably start using Compass as well. I haven’t felt a strong need for it yet, and it’s possible that what I’m doing is simple enough that I won’t actually need Compass, but it seems like the next step once I start feeling that even Sass is too repetitive for me.)

 

This meant that I needed to install those tools. I won’t go into the details of installing Sass: basically, you need Ruby + RubyGems, both of which I already had lying around, and both of which are entirely tangential to this series. (If you’re on a Mac and aren’t already a Ruby developer, then probably sudo gem install sass will do the trick.)

CoffeeScript, though, requires Node.js and npm, both of which I was going to need anyways and neither of which I had detailed experience with, so I’ll talk about them a bit more. On my Mac, I used Homebrew for both of those (if you install Node with Homebrew then npm comes along automatically); on my Linux server, I used the Ubuntu-packaged version of Node, and I installed npm following the standard instructions.

I initially did a global install of the coffee-script npm module. But you really want to control that sort of thing on a per-project level, so you can specify what version of a module you want: and npm lets you control that via a package.json file. There are lots of options that you can put in that file, and I imagine I’ll start using a lot more of them once I use npm to actually package up Medium, but for dependency management you can ignore almost all of the options. So here’s a sample package.json file if you just want to use it for dependency management:

{
  "name": "medium",
  "version": "0.0.0",
  "devDependencies": {
    "coffee-script": "^1.7.1"
  }
}

Try putting that in a package.json file in an empty directory and then typing npm install. You’ll see that it installs coffee-script along with a package mkdirp that coffee-script depends on, and it puts them in a new subdirectory node_modules.

You can look at the docs for the version numbering if you want, but basically: ^1.7.1 means that it’s known to work with version 1.7.1, and later versions are probably okay. This is totally fine while I’m working on something for development; for a serious deployment, I’d probably want to pin things down more tightly, including specifying versions of packages pulled in indirectly.
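For example, pinning things down more tightly could be as simple as dropping the caret: the same hypothetical package.json as above, now demanding exactly that version rather than "1.7.1 or compatible later":

```json
{
  "name": "medium",
  "version": "0.0.0",
  "devDependencies": {
    "coffee-script": "1.7.1"
  }
}
```

(That still leaves indirect dependencies floating, which is why a serious deployment needs more than just stricter version strings in this one file.)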

One nice trick: say that you have a new package that you want to start using. Then don’t bother looking up the version number and manually adding it to package.json: instead just do

npm install --save-dev NAME-OF-PACKAGE

That will look up the current version of that package, install it, and add an appropriate line to your package.json file. That way you can start using the latest and greatest version of your package and get it working, and you’ve recorded exactly which version worked for you.

On which note: you of course want to check package.json into version control. For now, I’m putting node_modules in my .gitignore file; if I get to a situation where I’m serious about deployment, then I’ll want to have a way to get access to node_modules without depending on external sources for that, but even in that situation, storing it in the same git repository as the source code is the wrong approach (because of repository bloat). For a personal project just for fun, ignoring node_modules is totally acceptable.

 

So with that in place, I can compile CoffeeScript files by invoking node_modules/coffee-script/bin/coffee. Which is what I did initially, but I got a more formal build system in place fairly soon; I’ll talk about that next.

men, women, programming, culture

May 25th, 2014

So, a couple of weeks ago, a prominent programmer / writer wrote a post whose driving metaphor was: frameworks are bad because it’s like one woman having many men sexually subservient to her, whereas the way things should be is for one man to have many women sexually subservient to him. People complained, he apologized and rewrote it without the metaphor in question.

Last week, another prominent programmer / writer tweeted a picture of some custom artwork he’d commissioned. That artwork showed silhouettes of a woman posing in a sexualized fashion, holding guns as if they were fashion accessories, with those silhouettes serving as shooting range targets. The artist has produced quite a lot of works on that theme, it turns out; his statement says “We are, all of us, Targets in one way or another.”

 

After this last weekend: some of us are a hell of a lot more targets than others of us. As the artist says, “None of us are exempt from exposure to these fixed cultural elements of our existence, or the means by which they attempt to impose their will upon us”, but that imposition takes radically different forms in different circumstances. He says that “[I] ask my audience to interpret each piece for themselves so as not to be hindered or influenced by my intentions”; the interpretation that I’m coming to right now is that men’s conception of gender roles in this society is super fucked up; that manifests itself in many ways, along a continuum of severity; and that I don’t see the software development community as a whole to be particularly at the innocuous end of that continuum.

Another prominent programmer / writer tweeted: “Seems to me we (again) review ideas for political correctness before considering the ideas themselves. I’m not sure that’s good.” Which raises the question: good for what? If your sole objective is to try to become as good a programmer as possible, then focusing exclusively on ideas and ignoring metaphor, subtext, and social context may be a good strategy. I’ve frequently been in that situation myself, and I’ve learned quite a lot about programming from all of the programmers mentioned here. (Though if their books had been full of harem metaphors, I’m not nearly as confident that that would have been the case.)

Becoming a better programmer isn’t my only objective these days. There are a lot of problems in this world, a lot of directions along which to try to improve; programming ability is one of those directions, and I still have a huge amount to learn in my struggle to become a better programmer, but there are a lot of other issues that I struggle with, that I have a huge amount to learn about as well. And I think some of those other issues might even be a bit more important.

netrunner implementation experiments

May 22nd, 2014

GDC got me in the mood to do some game-related programming; and, when that mood didn’t go away after a couple of weeks, I started to spend some time thinking about what exactly that would mean. I’d thought initially that maybe I’d learn how to use Unity, trying to implement one or two game-related tech experiments I had in mind. But a lot of my game playing these days is in the form of board or card games, and some of those ideas were starting to pull at me a bit more; Unity’s 2D support has apparently gotten significantly better recently, but when I looked at some of their 2D demos, they still seemed aimed at physics-based games, which isn’t so relevant for most aspects of board games.

And, thinking about it a bit more: I can probably just do a card game or a board game in HTML / CSS / JavaScript. (Not even pulling in the canvas stuff: I’m perfectly happy to represent a card as a div.) Which has huge advantages in terms of experimentation: I can work on it wherever, people can run it wherever, and it’s a super-easy way for me to get going.
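As a sketch of the card-as-a-div idea (the class names and card fields here are all hypothetical, not from any real implementation): building the markup as a string keeps the logic easy to test outside a browser.

```javascript
// Minimal HTML escaping, so card titles can't inject markup.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// A card is just a div with a couple of spans; CSS handles the rest.
function renderCard(card) {
  return '<div class="card ' + card.side + '">' +
         '<span class="title">' + escapeHtml(card.title) + '</span>' +
         '<span class="cost">' + card.cost + '</span>' +
         '</div>';
}
```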

It does mean that I won’t learn Unity, which is too bad. But the flip side is that I can use this project to get up to speed with a lot of other technologies: it’s been over three years since I’ve seriously programmed in JavaScript, and that code base was out of date and badly written even at the time. So this could be an excuse to learn about CSS3, and to learn about more of the JavaScript ecosystem (which is continuing to grow like crazy).

Also, while I’ll start out with an implementation just in the browser, I’ll want to add a server-side component fairly soon. And I can do that in JavaScript, too: if I use Node.js, then I can move my business logic from the client to the server side, or use the same code in both places as appropriate. (Thinking about that will also give me a good excuse to separate business logic from presentation, which is always a plus.) I’ve never used Node, but it’s certainly on the list of technologies that I’m interested in.
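Since I haven’t started the server side, here’s just a sketch of what “same code in both places” could look like: rules functions with no DOM or network dependencies, so the same file can be loaded in a browser or required from Node. The function names are my own invention; the numbers are from the Netrunner core rules.

```javascript
// Clicks per turn: the Corp gets 3, the Runner gets 4.
function clicksPerTurn(side) {
  return side === "corp" ? 3 : 4;
}

// Installing ice: the first piece on a server is free, and each
// additional piece costs one credit per piece already protecting
// that server.
function iceInstallCost(existingIceCount) {
  return existingIceCount;
}

// In Node one would export these (module.exports = { ... }); in the
// browser they can be attached to a namespace object instead.
```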

And there’s a subtext of this that isn’t game-related: I imagine I’ll be at my current job for another year or so, but at some point I’m going to want to move on, so it’s not a bad idea right now to start thinking about ways to increase my options for a potential move. And brushing up on modern web technologies and learning about Node fit that bill quite well: I’ve worked as a backend developer in most of my jobs, but my guess is that I’d be happier in a group with more fluid roles, which means that brushing up my frontend skills wouldn’t be a bad idea, and I can also certainly imagine working professionally with Node in the future. Also, just building a full project from scratch is always educational.


So: the plan is to write a board game or card game using non-canvas JavaScript in the browser, with Node as an eventual backend. But that leaves out one very important aspect of this: figuring out what the game will actually be. If I had lots of card game ideas written down, I’d probably pick one of them; as is, though, I don’t, and I suspect that I’ll spend enough time playing with technologies, at least initially, that I won’t want to spend a lot of time on game design ideas.

So that, in turn, suggests reimplementing a game somebody else has written as an exercise. Yes, I’m quite aware of the problems around cloning, but that’s not an argument against doing something as a private experiment. (Think of this like an art student making copies of works in a museum.) And, when I phrase the question that way, an obvious candidate comes to mind: Netrunner. The game’s rules are more than complex enough to teach me a lot about the tradeoffs on the domain-implementation side, it raises a lot of interesting questions about interaction models, and the only current electronic implementation that I’m aware of is one that I won’t be tempted to copy the details of. So it seems like a good place to start; I’m pretty sure that, once I’ve gotten a basic implementation of the game working (one identity on each side from core set cards, say), I’ll have learned a lot and will be able to take that learning in a lot of different directions.

What I’m not at all sure of is how long this will take: it depends on how much time I carve out for it, it depends on how much I need to learn, and of course the Netrunner rules have a lot of special cases, even in the core set. I wouldn’t be blogging about it at all right now, except that I’ve already learned a lot from the experiment: I’ve probably missed four or five good blog posts by not blogging about it from the start. I’ll try to recreate some of those, but still, it won’t be the same.

Netrunner initial placement experiments

For reference, here’s where I was earlier today (along with a corresponding view from the Corp side); I’ve been thinking about installation models and how to fit stuff on a not-excessively large screen. (Yay CSS transforms for resizing and for rotating Corp ice!) Once I get a little farther with installs, I guess I’ll try working on basic runs; that’ll be interesting…
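For the curious, the transforms in question look something like this (the class names are hypothetical, and the exact scale factor is just whatever fits the screen):

```css
/* Shrink all cards so a full table fits on screen. */
.card {
  transform: scale(0.75);
}

/* Corp ice is installed sideways, as in the physical game. */
.corp-ice {
  transform: scale(0.75) rotate(90deg);
}
```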

And if anybody is designing a card or board game that you’d like a browser-based version of, let me know: hopefully in a few months I’ll have come to a reasonable stopping place on this experiment and I’ll be interested in using these technologies for something else.

system shock 2

May 14th, 2014

I’m planning to play through all the games in both of the Shock series this year; I had a quite good time replaying System Shock, but I’d never played System Shock 2, which seems to get talked about rather more. (E.g. I’ve seen comments claiming that BioShock is in many ways an inferior remake of System Shock 2.) So I was really looking forward to playing it; of course, I didn’t expect it to be as smooth an experience as BioShock, given its age, but I did fine with System Shock, which is even older.

As it turns out, I most emphatically did not do fine with System Shock 2. Not that I regret having given it a try, but I’m glad I gave up after going through the first two levels: it simply wasn’t working for me. Which is too bad, because it meant that I didn’t get to really experience the SS2 version of Shodan, or the lure of The Many, but trying to finish it would have driven me crazy.


I didn’t realize quite how much of a kitchen sink game System Shock 2 is: it’s got significantly more going on than either its predecessor or successor. There’s a skill tree that’s initially presented as a class system but where you quickly learn that you can cross classes; there’s a psi system; weapons degrade; inventory turns out to be even more pressured than in its predecessor, but with a way (hidden to me until I stumbled across it in a FAQ, though maybe I missed something) to expand it slightly by leveling up; there’s this chemical thing for unlocking buffs; and probably more variables that I missed completely. And all of that is on top of its predecessor’s FPS-combined-with-role-playing-inventory gameplay and its story told through environment, audio logs, and orders through loudspeakers. (With hallucinations added into the mix this time!)

So way too much stuff to be a focused game. Which is fine: I wouldn’t want all games to be that way, but I’m all for art that turns an ungainly collection of concepts into something unexpectedly magnificent. The thing is, though, I need to be able to actually play it without driving myself crazy.


I started off on easy (as I do in games like this), and I selected the psi path. I figured I’d be able to freeze enemies with the power of my mind, and I’d be able to whack them to death with a lead pipe. And, indeed, the lead pipe was there, as expected; what wasn’t expected was that the lead pipe was much less effective than in either System Shock or BioShock. That might not have been a big deal, since I could freeze my enemies, except that freezing enemies used up psi power, which didn’t renew automatically and which was a more limited resource than ammo for standard weapons. And, when I was encountering enemies at the start, I couldn’t (if I’m remembering correctly) even fire standard weapons, because I would have needed to spend some experience at the start to level that up, and I’d spent that experience on other stuff.

So, basically, it felt like I was being set up for failure right from the beginning by making what seemed to me (what still seems to me in the abstract) to have been a perfectly plausible set of choices in my initial powers. Maybe I’m missing something there; certainly if I were better at playing FPSes on PCs then I would be better at dancing around enemies. (Though I get the feeling that the controls in this game are a lot clunkier than in normal FPSes; I missed when swinging with the pipe a lot more than I’m used to.)

Having said that: this being a Shock game, dying wasn’t actually so bad. There were vita-chambers to revive you, and saving and loading was fast enough, too. So I was optimistic that I’d start enjoying it more as I made it through the first deck: I leveled up so I could shoot guns, and it really wasn’t that annoying by the end. I wasn’t actually enjoying it much, and I was actively put off by having to shoot squeaking monkeys, but still: serviceable enough, and I felt like I was starting to get control of the game a bit and get past my loss aversion.


And then the next level started off by putting me in a radiation area: no getting comfortable here, and not uncomfortable just because of narrative and general spookiness, but uncomfortable because I was going to feel like I was always about to die even playing on easy. But it wasn’t too long before I unlocked the next vita-chamber, so I could relax again.

Except I couldn’t. One big difference from its predecessor is that System Shock 2 splits each deck into multiple sections, and vita-chambers in one section don’t work in another. So I ended up having to go through a part with a new, significantly tougher robot enemy, where I couldn’t freely respawn. This meant that, instead of a grind of running through levels, killing some stuff, dying, getting revived, and making a bit more progress (though not as much as I would like, because some enemies respawned as well), I was instead reloading save games all the time and looking on nervously as what had seemed like a very generous number of health packs disappeared surprisingly quickly.

I made it through that deck, started the next one, and decided that I just didn’t want to deal with the game any more. So I stopped.


Not what I wanted out of a game. There’s probably an interesting narrative there, but the game wasn’t letting me get to it. There are probably interesting systems there, too, but that wasn’t what I was in the mood for, and the game wasn’t structured in a way that let me play with those systems. (Our May VGHVI Symposium was FTL: I died all the time in that one, too, but that game was set up to let me learn its systems by running another experiment every hour, so I never had the frustration of feeling that my initial build had set me up for failure, or of wanting to reload because otherwise I wasn’t sure if I’d get to the next bit of narrative.)

On to BioShock next. Maybe I’ll try that one on normal instead of easy: there is something that I would enjoy in the systems of these games, and that game showed that it understood what I was asking for when I did play on easy, so maybe it would also be more understanding if I express willingness to grapple with those systems? We’ll see…

blank screen starting octgn in wine

May 4th, 2014

I set up OCTGN on Wine on a new computer in preparation for this week’s VGHVI session; I was following these helpful instructions, which have worked for me in the past.

Unfortunately, I ran into a weird problem: OCTGN would start with its normal “Loading OCTGN” screen, but then instead of showing me the normal game window when that was done, it would show me a black rectangle.

I tried it out on the other machine, where I’d previously had OCTGN installed that way, and I got pretty much the same symptoms. Though on that machine it took longer, since it spent some time updating OCTGN; and a popup briefly showed up that gave me a clue as to what was going on.

So, the short version: if this happens to you, edit the OCTGN settings (probably in ~/OCTGN/Config/settings.json) to set the property IgnoreSSLCertificates to true. Here’s a line to add:

"IgnoreSSLCertificates": true,

(or, if you put it at the bottom, then put the comma at the end of the previous line instead of that line).

Once I did that, OCTGN came up as expected; I haven’t actually tried playing a game, but I’m assuming that that works. (Though I brought my VirtualBox Windows installation up to date just in case…) But I figured I might as well write a post about it, in case it helps anybody else googling for solutions to that problem.