
best practices

December 14th, 2017

A quote from Anil Dash’s article about Fog Creek’s new project management tool, Manuscript:

Be opinionated: Manuscript has a strong point of view about what makes for good software, building in best practices like assigning each case to a single person. That means you spend less time in meetings where people are pointing fingers at each other.

Here is my opinion: if you want to talk about opinionated software (which, by the way, is a concept that I do agree with, even when I disagree with the specific opinions), then own it. Don’t start covering your ass in group legitimacy (before that sentence has even ended!) by saying that your opinions are actually a “best practice”.

Dash does, at least, try to explain why people would feel that individual assignment is a best practice, an opinion worth having. But geeze, that explanation: we’ll avoid finger pointing by making sure that each case has the name of the person you should point a finger at? How does that work exactly?

Don’t get me wrong: he’s got a coherent point of view. As far as I can tell, he believes in primarily optimizing for individual developer productivity. And yeah, if I preferred to work that way, I’d want to assign tasks to individuals, too. But say that, don’t talk in the abstract about best practices.

 

Though, looking at Manuscript’s feature list, I see no evidence at all that it’s actually opinionated software, so probably the “best practice” empty phrasing is closer to the truth. Take their section on Scrum:

Construct and plan iterations from existing cases. Create milestones for your backlog, sprints, doing and done — or any other structure your team or project needs. Customize each project’s structure to match your ideal Agile or Scrum workflow.

Followed soon by this wishy-washiness:

Estimates can be added to cases from the planning screen, and if you prefer story points, then you can switch between hours and story points too.

And there’s a Kanban section too. So: use Scrum, use Kanban, use your own homegrown process, use hour estimates, use point estimates. Do anything you want, we’ll be happy to support it! (At least as long as what you want to do doesn’t include pair programming, I guess.)

Dash’s article quotes a tweet with the following lament about Jira:

Why are you customizable to a fault, except in the ways I want you to be?

But isn’t that exactly what the above bits from the Manuscript feature list are promising as well?

 

Ah well; not a big surprise to be disappointed by marketing for enterprise task management software…

layton’s mystery journey and whackamon

December 11th, 2017

Layton’s Mystery Journey was actively disappointing. Following Layton’s daughter was a nice enough change of pace, I suppose, and the series is a good fit for the iPad; but the game didn’t have soul, and the puzzles weren’t enough for me.

For example, you start off by meeting a dog who tries to hire you to figure out why he can talk: but then another, more urgent case comes along, you do that instead, and the question of how the dog can speak never comes up again. And you have some boy who follows you around making puppy-dog eyes: I guess it’s still an improvement on the gender politics of the earlier games in the series, but only barely. The main other person whom you regularly interact with has their personality filled out in very broad, stereotypical strokes; all the other characters have one distinguishing feature and zero depth.

The puzzles are fine, but nothing at all new compared to other games in the series. The visual art isn’t awful, but it isn’t good: the dog always seems like he’s floating off the ground, and characters wave their arms or recoil in shock in ham-fisted ways. And breaking the game up into lots of different cases with only a vague hint at an overall story isn’t particularly effective plot-wise, and makes it harder for you to get to really like the city. So I think I’m done with the series unless something changes.

 

The other game I played recently on my iPad is WhackaMon. Which I started on the laptop, but it involves fast clicking on different areas of the screen, and doing that with touch is a lot easier. This game is the only reason why I’ve ever logged into Facebook Messenger, and I’m certainly not going to continue to do so now that I’m done with the game, but if it’s the only way to play an Eyezmaze game, then I’ll put up with that.

Unfortunately, WhackaMon isn’t one of my favorite Eyezmaze games: too much clicking, not enough thinking, and not quite charming enough. Though there is some thinking involved in the clicking, and there is some charm in the standard Eyezmaze building up of a more and more settled area; it’s too bad that there’s not more thought involved in the building, though. And Facebook Messenger actively gets in your way: I accept some amount of being asked to spam my friends, but being asked to do so immediately after building a new structure is not only probably too often, it actively interferes with your enjoying that new structure.

Having said that, I’m glad I played it: I spent a pleasant enough three or so hours tapping on stuff and figuring out systems. And I’m certainly glad that Eyezmaze is continuing to make new games.

post-systematic flexibility

December 10th, 2017

David Chapman has, among other things, been writing about modes of approaching meaning, in a way that’s informed by Robert Kegan’s developmental psychology. He’s written a summary of this recently on one of his blogs, and he discusses it frequently on Meaningness (see e.g. this post and posts it links to), but I thought he had a particularly good discussion of it recently on the Imperfect Buddha podcast. (You can skip to about 22 minutes in if you want to skip over the discussion of the state of Buddhism in the west.)

He focuses on stages 3, 4, and 5 of Kegan’s model. Stage 3 is characterized by a focus on communal values, individual relationships, emotions, and experiences. Stage 4 is systematic: it accommodates complexity in a rigid way, by mapping it to a model. Stage 5 is meta-systematic: if you’re in stage 5, you’re skilled at dealing with the interface between systems and reality, and you can use that skill to handle vagueness while embracing precision and complexity.

 

I’m trying to come to grips with whether or not I think this is a helpful model. (And, if so, in what contexts it’s helpful, or how that help manifests.) For now, I’m having a hard time thinking about it in terms of an individual’s development as a whole, but it seems to me like a plausible match to how somebody thinks about specific aspects of their life?

For example, I’m a software developer who has spent some amount of time thinking about and experimenting with agile software development. So it feels to me like I can tell the difference between stage 3 and stage 4 uses of agile: stage 3 agile is saying / believing that you do agile because that’s what cultural forces present as normal behavior; if you’re asked what you do, you have some idea that agile = scrum, and that it means you have standup meetings once a day, call each two weeks on the calendar a sprint, and store a backlog in Jira. (And a stage 3 agilist will do all of that while happily continuing to have separate requirements, design, implementation, test, and maintenance phases, and while constantly generating estimates and plans that are far more ambitious than what they actually get done in a sprint.)

Whereas a stage 4 practitioner will say that the phrase “we do agile” doesn’t make sense, because agile isn’t a methodology, it’s too vague for that. But they’ll have a precise idea of what it means to follow, say, Scrum or XP, and they’ll be skilled in following that precise model and helping teams follow that model.

Which, in that light, means that I’m probably not a fully stage 4 practitioner, because I’ve never been on a team that followed Scrum or XP as a whole, or that had a well-considered homegrown system that it actually stuck to. (Which doesn’t mean that I’m in stage 3, either, because I’m generally quite aware when teams aren’t following methodologies, either external ones or ones that they’ve written down for themselves.) But, if you go down from full methodologies to smaller practices, like test-driven development or refactoring, I can make a better case that I’m a pretty solid stage 4 practitioner.

And if we move outside of software development, I can tell a similar story: e.g. I’m quite sure that my Tai Chi teacher has an excellent systematic understanding of Tai Chi (and hence I also believe that it makes sense to talk about a systematic understanding of Tai Chi), and I’m equally sure that I don’t; but I also feel like I’m learning relatively concrete facts and improving in ways that I can point to? So I’m consciously trying to start the journey towards a stage 4 understanding of Tai Chi, I just haven’t gotten very far.

 

Stage 5 is more of a mystery to me. One of the points of stage 5 is that systems are only models, and hence are always flawed. But the issue there is that there are multiple ways that you can get to a rejection of systems: you can take a stage 3 approach of not really thinking about them seriously; you can take a nihilistic approach (Chapman calls this “stage 4.5” and is pretty worried about it) of correctly understanding that systems are always imperfect models and using that as a reason to reject them; or you can take a stage 5 approach of appreciating the nuances of the boundaries between systems and reality. Which should mean that you can use the power of systems in contexts where they apply well, you can avoid them in contexts where they don’t apply well (or, potentially, switch to a different system that applies better there), and you can tell when you’re near the boundary, using the system to inform your actions but not to rule them, and potentially using your observations to update the system as well.

At least I think that’s what stage 5 means: but it also feels to me like my understanding of all this stuff is probably basically at a stage 3 level? Chapman sounds sensible when he talks about this, it feels to me like he’s getting significant value out of it and believes that it’s tied pretty well to other forms of thought that he finds valuable, but I can’t say that I’ve seriously tried to put the framework to use. So, ultimately, I’m mostly just parroting / cargo culting what he says, which (I think) is stage 3 behavior?

 

One feeling that I’ve had over the last few years: more and more, when making programming decisions (broad design decisions, narrow decisions about what to type now, decisions about how to segment my work while trying to go from my current state towards a desired future state), my mind is starting to attach weight to those decisions. And here, by “weight”, I mean that my mind literally associates certain decisions with something that feels heavier or more solid, whereas other decisions feel like more of a haze. Hmm, I guess weight alone isn’t actually all that’s going on in my internal perceptual apparatus: e.g. there are some decisions that feel like pebbles, solid and reliable but also like small steps, some that feel like mist, where I don’t perceive any weight but I also don’t understand what’s going on, and some that feel like they’re crumbly terrain, actively and concretely dangerous to proceed along. So maybe it’s more of a combination of weight and texture?

If I wanted to try to tie that into this Chapman / Kegan model, maybe that’s saying something about the boundary between stages 4 and 5? The areas where I have these feelings are situations where I don’t just know how to follow a given system, I have a pretty good idea of what the specific consequences are of doing so or not doing so (or doing so in different ways). So that means that I’m getting a better appreciation of reality pushing back (the “interface between systems and reality” that I mentioned above): when a certain question is answered well within a given system, when I’m pretty sure a given system is accurately warning about something, when I’m on the edge of a system, when I’m pretty sure I should work within a different system, and when I just don’t know?

 

Hard for me to say: like I said, I don’t understand the theory very well. And, for all I know, I’d get as much from linking my understanding to any other random list, e.g. The Five Levels of Taijiquan. (Different numbers in that book’s levels, though!) And, don’t get me wrong, there are certainly areas where I’m firmly in stage 3: e.g. when reading Twitter I’m just as likely to react to events in a way that ultimately comes down to group membership as anybody else is. But it is nice to start to have a deeper sense of what substantial expertise might feel like…

paperclips

December 3rd, 2017

I guess I played Paperclips enough that I should write about it here? Or rather I spent enough time watching it in my browser, or I spent enough time being distracted by it, or something.

Paperclips isn’t the first cookie clicker I’ve played, but it’s the one I’ve played the most; I think it’s the only one I’ve made it to the end state of, and certainly the only one I’ve replayed. And the narrative, as slight as it was, was actually a rather good fit to the mechanics.

Mechanics-wise: it’s all about bare numbers, and the game helps you think about them by exposing derived information (rates, in particular). And there’s enough complexity that it’s not obvious what the optimal strategy is at any given point: you basically know what to do, but you have a couple of directions you can go when optimizing, and also you don’t know when the next deflation event (or, more rarely, cataclysmic change, e.g. a new currency introduction) will arrive and invalidate all of your current calculations. And, if I’d wanted to think about it more, there was more that I could have dug into: e.g. part way through the game you start picking a competitor in a robot prisoner’s dilemma tournament, and I haven’t figured out (either theoretically or empirically) which strategy is the best.
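(The tournament question, at least, is the kind of thing you could poke at empirically outside the game. Here’s a minimal sketch of what I mean, in Python: a round-robin iterated prisoner’s dilemma between a few classic textbook strategies. To be clear, the strategies and payoff values here are the standard ones from the literature, not necessarily the ones Paperclips actually uses.)

```python
# A round-robin iterated prisoner's dilemma tournament: each strategy plays
# every strategy (including itself), and we total up the scores. The payoff
# matrix and strategies are textbook ones, not Paperclips' actual ones.
import random

# (my_move, their_move) -> my_score; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(my_history, opp_history):
    return "C"

def always_defect(my_history, opp_history):
    return "D"

def tit_for_tat(my_history, opp_history):
    # Cooperate first, then mirror the opponent's last move
    return opp_history[-1] if opp_history else "C"

def random_move(my_history, opp_history):
    return random.choice("CD")

STRATEGIES = {
    "always cooperate": always_cooperate,
    "always defect": always_defect,
    "tit for tat": tit_for_tat,
    "random": random_move,
}

def play_match(strat_a, strat_b, rounds=100):
    """Play one iterated match; return the total score for each side."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

totals = {name: 0 for name in STRATEGIES}
for name_a, strat_a in STRATEGIES.items():
    for strat_b in STRATEGIES.values():
        score_a, _ = play_match(strat_a, strat_b)
        totals[name_a] += score_a

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```

(Even the toy version shows why the question is interesting: which strategy wins depends on the mix of opponents it faces, so “best” isn’t really well-defined in isolation.)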

 

That’s the game play; but, ultimately, much of the time you’re just sitting and waiting for stuff to happen. (Maybe buying more production capacity every once in a while, but not in a way that makes a real difference.) And, most of the time, the game is even happy to play itself on autopilot: continuing to make more of the relevant currencies without needing explicit action.

So you could imagine having it run in a background browser window while you, say, write a blog post or something, checking in once every half an hour. I found that very difficult to do, however: there’s always something just around the corner, some slight reward for spending three minutes watching numbers go up and then clicking as soon as possible.

 

There is, fortunately, an end state to the game. Which gives you two options; one is to start over, with slight tweaks to the numbers; the other is to contemplate the void. I picked the first option the first three times I finished the game, and I have no complaints about having done so; after that, though, I picked the other option, and I’m glad it existed.

genre insecurities

November 28th, 2017

If you were to ask me for, say, a list of my top five favorite movies, I don’t know exactly what the full list would look like, but most of the time both Spirited Away and Pom Poko would be on there. Which, it turns out, I have somewhat mixed feelings about: even admitting that I don’t have a particularly thorough movie background, is a pair of fantasy anime movies that could reasonably also be labeled as children’s movies a place where I (a 46-year-old man) want to put my stake in the ground? Shouldn’t I prefer movies that are more thoroughly grounded in a range of life experiences?

The above, of course, isn’t any sort of case against holding those movies in very high esteem: as phrased there, it’s completely unsupported genre snobbishness. And I wouldn’t put up with that sort of snobbishness in any other art form: I grew up in a context that, say, valued literary fiction over science fiction or romance, that valued classical music over pop music, that valued a whole load of things over video games (to the extent that video games even existed while I was growing up), and I’m pretty confident in saying that those blanket valuations are ridiculous, that literary fiction and classical music are just different genres. I can still see the effects of that context in my psyche, but I can also consciously set it aside. (And, don’t get me wrong, it’s not like anybody told me not to read science fiction while I was growing up or to not listen to pop music when I went through that phase in high school. And, also, don’t get me wrong: if I were to make a similar list for music, classical music probably would be extremely well represented.)

 

Setting both anti- and pro-genre snobbishness aside, though: you can learn from any genre, so I’m sure I’ve got gaps in my taste that arise from my genre choices: I did actually read a fair amount of literary fiction in grad school, and it was productively different from what I’d been in the habit of reading. And there are also stereotypes that I see in some of my habitual genres that I’m actively unimpressed with: e.g. the “anointed savior of the world” trope I see in so many games and also in comics (both American and Japanese, both in print and animated forms).

Worry about that latter stereotype is probably what’s really going on in my psyche here: I do enjoy wish fulfillment, but I think it’s healthier for me personally if I don’t spend too much time diving into it. Instead, I’d prefer to have a healthy balance of art that focuses on the small scale, on the details of what exists, and on actual people.

Having said that, too much of a focus on small scale personal concerns can be associated with its own negative stereotypes that I’m equally dubious of: e.g. literary fiction about middle-aged men unhappy with their marriages and instead finding a match with women in their twenties. I don’t have any more respect for that sort of wish fulfillment than I do for RPG “savior of the world” wish fulfillment; but if we can step away from that to something that feels more like real interactions between real people (and, yes, with real problems), then that’s important.

But at any rate you can of course focus on details and on people in any genre. Returning to science fiction, Trouble on Triton puts you in the head of somebody so you can see how he interacts with other people, what he wants from those interactions, the pain that he gets from that, the pain that others get from that, and the self- and outwardly-inflicted nature of the problems surrounding him; and the novel’s nature as science fiction lets it generalize those experiences in a way that clarifies through the distance of the setting.

 

I said above that I’d prefer to have a healthy balance of art that focuses on the small scale, on details, and on actual people; that’s true, but only half true. My relationships with my wife and daughter are both extremely important to me; and if art can shed light on that, that’s great. And work involves people too, of course; and I do care about my friends.

But, granting all of that: I’m not a people person. Also, a lot of the classic literary themes actually aren’t particularly reflective of my life: happy, stable marriages and careers aren’t in general the subject matter of great novels. (Not that our family doesn’t go through rough patches – this last year in particular has been quite a bit rougher for us than I’d like – but still.)

Instead, a lot of what interests me is trying to figure out systems: figuring out what code and computers are telling me, solving puzzles of one form or another in my spare time. Which doesn’t mean that I don’t like small scales and details, because as I get older I find more and more that listening to details is an excellent path into broader concepts. But still: figuring stuff out gets me going, and that’s going to inform my artistic choices. Not necessarily in a direct way, I don’t particularly want to read books featuring programmers, but in a metaphorical way, I want to read books where reading them feels like uncovering and making sense of a conceptual space that’s new to me.

 

I led off by bringing up Spirited Away and Pom Poko; this focus on systems and details is easier to see in Pom Poko, because it’s a message movie, in multiple ways. It’s about growth and the negative effects growth has on the environment, on animal life in the environment in particular. It’s about the process of change, focusing more on the loss that change entails but still allowing you to see the benefits. So there are conceptual spaces to explore here, and you can test your understanding of those spaces by exploring tradeoffs.

And Pom Poko certainly focuses on the details, and on people. (I mean, mostly on tanukis, but still.) How individuals react to change in different ways; how life continues in its patterns despite change. It does this without grandiosity and without catastrophizing at a broad level: ultimately, the tanukis lose their battle, but most of them survive and adapt nonetheless. Though many of them don’t survive: the movie doesn’t catastrophize, but it doesn’t pull its punches.

Spirited Away isn’t the same sort of message movie: it’s about a very capable girl who turns out to be friends with a river god. So, to some extent, it’s a bit by the numbers; but I do appreciate how its plot asks fundamental questions about what the concept of family means. Family as people you’re related to by birth, but also family as people who choose to care about each other.

 

Looking at the two together, though, clearly movies that draw on Japanese mythology press my buttons, at least if they do so with a focus on spirits and nature. Which I think is another example of what I was talking about above: enjoying the process of exploring a conceptual space that’s relatively new to me, just in a less abstract way than the intellectual themes I talked about earlier.

Of course, movies aren’t just vehicles for plot and themes: they’re something you see and hear. And both of these movies have bits that are visual spectacles: the entire bath house in Spirited Away has, as its job, to put on a show, and the parade in Pom Poko is really something. And, aurally: Joe Hisaishi is one of my favorite film composers, and Itsumo Nando Demo from Spirited Away is one of my favorite pieces of his.

 

So yeah, they’re good movies. I probably should branch out more (though, don’t get me wrong, I don’t spend anything like a majority of my movie time watching anime), but there’s something there. And there’s certainly nothing wrong with enjoying exploring lovingly crafted spaces…

her story

November 6th, 2017

(Spoilers for Her Story follow; if for some reason you just want to know my opinion and are thinking of buying it, I’m very glad I played it, so if you’re on the fence, give it a try.)

I am very glad to have played Her Story shortly after playing Tacoma: both games tell stories that feel a lot more familiar outside of games than inside of games, both use interactive techniques to good effect when telling their respective stories, but the interactive techniques and the subsequent effect on how I experience the stories are significantly different.

Tacoma feels like a copiously annotated story. That story unfolds over the course of three days, which you learn about by seeing six key points during those three days; and, during each of those points, you can look at the story from a few different perspectives, and are presented with some specific pieces of textual information informing each of those points and perspectives. And there’s subsidiary back story available: extra scenes you can watch about each character, and physical spaces for the ship and the characters that you can inspect, some with further textual information.

Her Story also makes it clear that there’s a linear story going on, but instead of progressing through that story linearly, the game almost immediately allows you to navigate on your own. I’m not even sure what a good metaphor is for the experience: a crystal, with views from different facets? A palimpsest, reconstructing a text? Or maybe the best metaphor isn’t actually a metaphor at all, just a description of what’s going on: you’re conducting a murder investigation, trying to piece together what happened from the clues that you come across (that you notice!) and from the unreliable subjects you’re interviewing.

 

In Tacoma, you could say that the game mechanics focused on perspective, reifying that concept in a changeable viewpoint on a three-dimensional (or, really, four-dimensional) space. In Her Story, in contrast, the travel occurs along a one-dimensional space; and that, in turn, means that the navigation alone is less interesting from a game point of view. So the game has you navigate via conceptual controls instead of thumbsticks, reifying those concepts in the form of search terms that allow you to dip into portions of the timeline in an unpredictable fashion.

Or at least it seems unpredictable from the outside; one of Her Story’s most impressive accomplishments is how it uses what seems like an unpredictable method for controlling how you navigate the timeline and nonetheless ends up with a story development that’s satisfying in a surprisingly traditional way. Because, when reading a novel (a mystery novel, perhaps), I start out getting a picture of the basics of the setting and the problem that it’s presenting; then I start understanding the possible solution space, and thinking about how it might unfold, and what surprises might be in store; then I come across some twists that lead to new levels of depth and predictions; and eventually it all comes together. And, somehow, I went through that same experience while playing Her Story, despite the player’s behavior being aleatory from the designer’s point of view.

 

(Here’s where the spoilers begin in earnest, for people who want to stay away.)

 

Concretely: I started out just trying to get a feel for the situation, assuming that I was trying to piece together the events that led to the murder. I searched words that seemed important in the initial interview segments, leaning a bit towards proper nouns.

I’m not sure exactly when I realized that there were two different women appearing in the interviews: I must have heard Eve speak a few times before I realized that she existed. I think it might have been when I heard the name of the midwife, searched on that name, and then heard the whole story about their birth? But at any rate I transitioned quite gracefully into a second act of the game, which mostly centered around learning how the two sisters grew up, but also (from a gameplay point of view) had me asking questions like which sister was speaking during which days.

At some point I happened across a clip where there was a guitar sitting on the table, with no explanation whatsoever. So then I had to search for the term “guitar”, which led me to the first part of the song, and then I quickly found the second part of the song. If I’m remembering correctly, this was the transition into the third act of the game for me, trying to understand the sisters’ points of tension with each other better, and also trying to figure out what happened with Hannah’s parents.

And then I learned about what had happened between Eve and Simon; and eventually about Simon’s death; by then I’d seen the vast majority of the clips, so after a bit more searching of random words I’d jotted down, I declared victory.

 

In other words: I experienced a very satisfying unfolding of the story, broken down into four coherent acts, with significant parts of the story remaining hidden for quite some time, only appearing once I had the context to appreciate them. And yet all of this came out of a game with a random access interface, driven by search terms!

I still don’t know how the game did that, and how much I got lucky. I imagine quite a lot of it isn’t luck: presumably there are key words that don’t occur in the initial clips? I’d certainly be interested in seeing a graph whose vertices are the clips and whose edges are words shared between clips; does that turn up clusters that are dramatically meaningful?
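(To make that graph concrete, here’s a toy sketch in Python. The clip texts below are invented stand-ins, since I obviously don’t have the game’s transcript in front of me; the point is the shape of the analysis, not the actual data.)

```python
# Toy version of the clip graph: vertices are clips, with an edge between two
# clips whenever they share a (non-stopword) term. Clusters are then just
# connected components. The clip texts are invented for illustration.
from collections import defaultdict
from itertools import combinations

STOPWORDS = {"the", "a", "and", "i", "to", "of", "was", "at", "our", "me"}

# Hypothetical clips: id -> transcript snippet
clips = {
    1: "I met him at the pub with the guitar",
    2: "the guitar was a present from our mother",
    3: "our mother never told us about the midwife",
    4: "the midwife was there at the birth",
}

def terms(text):
    return {word for word in text.lower().split() if word not in STOPWORDS}

# Build the adjacency structure, one edge per pair of clips sharing a term
edges = defaultdict(set)
for (id_a, text_a), (id_b, text_b) in combinations(clips.items(), 2):
    shared = terms(text_a) & terms(text_b)
    if shared:
        edges[id_a].add(id_b)
        edges[id_b].add(id_a)
        print(f"clips {id_a} and {id_b} share {shared}")

def component(start):
    """All clips reachable from `start` via chains of shared vocabulary."""
    seen, stack = set(), [start]
    while stack:
        clip = stack.pop()
        if clip not in seen:
            seen.add(clip)
            stack.extend(edges[clip])
    return seen

print(component(1))
```

(On real data you’d also want weights — how many words a pair of clips shares, and how common those words are — but even this much would let you check whether the vocabulary clusters line up with acts of the story.)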

But of course it’s not just a graph theory puzzle, for a few reasons. If you search a popular term, you don’t see all the clips; so we’d have to reflect that in the graph. (And of course restricting the clips you see in that situation by time order means that, all things being equal, you get more Hannah and less Eve.) And people don’t search words at random: I’m sure I’m not the only person who gravitated towards names and other proper nouns at the start, and in general people are going to search for words that seem meaningful. Finally, people aren’t restricted to searching for terms they’d heard: e.g. I searched for “guitar” not because I’d heard the word spoken but because I saw one.

So, somehow the game manages to balance all those considerations and still help the plot unfold. And I think it does that without cheating; it does say something about one volume being corrupted, but it said that at the start of the game and still says that at the end of the game, so I don’t think the game has been hiding anything from me, or at any rate that it hid anything that hasn’t remained hidden?

 

I say “I” above when talking about my experience with the game, but I wasn’t playing it alone: I was at the keyboard but I was displaying it on the TV. Liesl watched a fair amount, and Miranda seemed basically just as involved as I was: at a lot of key moments I was following Miranda’s suggestions for what to type.

The game worked very well in that mode: we could talk about what we thought was going on, Liesl and Miranda both noticed things that I didn’t (e.g. I think Liesl was the first person to notice the tattoo), and the words I searched were mostly words that had been spoken whereas the words Miranda suggested were mostly thematically appropriate ones that may or may not have been spoken recently. So, between the three of us, we jumped around more and saw more stuff; yay for games that support that sort of shared experience.

 

Her Story is one of the most interesting games I’ve played this year. I won’t say that I want to play a whole bunch of games using this mechanic, but maybe actually I do? Certainly it’s a reminder not to stay stuck in a rut; and it feels like there’s some sort of deep lesson in the game about how to guide players’ experiences without prescribing them.

twitter 2017

October 26th, 2017

A couple of weeks ago, a #WomenBoycottTwitter hashtag showed up on my timeline. It appeared on a Thursday, encouraging people to stay off of Twitter the next day; I haven’t been feeling great about my Twitter usage all year, so I figured I’d use that as an excuse to take a day off and see what it felt like. And I did indeed succeed in staying off of Twitter that day: my reflexes had me still launching Tweetbot every once in a while, but I always exited immediately. So y’all didn’t get to hear the play-by-play of me somehow managing to lose my right AirPod in the grass in a tiny nearby park; and when I checked on the past day of Twitter the next morning, my feed was significantly blacker than normal, with some pretty reasonable critiques of #WomenBoycottTwitter. (Though those critiques left me in the clear: everybody agreed that it would be fine to have fewer white guys on Twitter.)

The boycott was really just an excuse for me rather than a well-thought-out moral conviction: like I said, I haven’t been feeling great about my Twitter usage all year, because it’s been eating into my life more than I’m comfortable with. Mostly, of course, it’s because of the shit show that our current president is (and that our current congress is): it is not unusual to have a week go by where, every single day, even multiple times a day, there’s a breaking news story that would be the biggest political news story for a month in normal times. So I have this horrible combination of needing to feel caught up with the extraordinarily fast pace of news while knowing that whatever I learn about will make me feel worse.

The news cycle is the main reason why Twitter feels different in 2017 than in previous years, but it also feels like there’s been a volume increase. Part of that is related to the news cycle: there are various issues that are important this year in ways that they weren’t important to me in years past, so I’m following people who are experts in, say, health care or international relations. But also the Twitter essay has exploded this year, which means that there are interesting people who are posting a lot. Right now I’m only following 244 people on Twitter, which is the lowest my following count has been in years, but it sure doesn’t feel like my timeline is bare.

 

Also (and this one isn’t new to 2017): Twitter is a pretty nasty place. I mean, it’s not nasty for me personally, but it’s a vehicle for serious harassment, in ways that very much directly affect people’s lives, and a lot of that happens in directions that reinforce existing inequalities instead of being random. So: is Twitter a space where I want to spend my time?

Partly, Twitter is just reinforcing existing dominance patterns: I don’t have an option to spend my time in a world where, say, white supremacy or patriarchy isn’t a dominant force. But social media platforms make their own choices about how they want to react to this, what actions they want to take in response; Twitter’s choices have (tautologically) led to it being the sort of space it is. I’m sure this is a hard problem to solve without throwing away the (real and significant!) benefits that Twitter brings, but still, I’m not sure that it’s a place where I want to spend my time.

 

So: how to respond to all of this? I can wish that we had a different president, but wishing won’t get me very far. I can wish that people wrote blog posts instead of Twitter essays; again, wishing won’t get me very far. And I can wish that Twitter were less of a harassment shit show; not much I can do about that, either.

Ultimately, I have to figure out how and where I want to spend my time; and, once I’ve made that decision, figure out what changes in habits I need to establish to lead to my desired outcomes.

 

The easy answer is to say that I should give up on Twitter entirely. And that’s definitely in the potential solution space, but it’s not obvious to me that it’s the right choice. I really do have friends that I interact with via Twitter; I’m not entirely sure what I would lose by stopping those interactions, but I’m pretty sure I would lose something. (And, incidentally: there is very little chance that I will switch over to Facebook as a primary posting vehicle, that one isn’t in the solution space.)

And I really do learn things from people I follow on Twitter, too: over and over, Twitter has helped me learn about programming, about politics, about ways of thinking that are important to me. Having said that, if I switched time from, say, reading Twitter to reading books, I would also be learning something, so Twitter is potentially a loss from a learning point of view as well as a gain, but you can certainly make a case that some amount of Twitter usage is a net positive for my learning.

I don’t, however, see any real benefit in the need to keep up to date with politics on an hour-by-hour level. I don’t want to be disconnected from the horrors that are going on in our government, but learning about those horrors at 3pm versus 6pm versus a day later probably brings no concrete benefit. (And I can imagine stretching out the time scale further: unless I’m going to join a protest tomorrow or call my Senator or something, then being a week behind seems okay? Though there is some benefit in being aware of the magnitude of those horrors, and catching up with the daily helps with that.)

 

When I analyze the situation that way, it seems pretty clear that, at the very least, I’m checking in on Twitter too frequently. And it’s still possible that leaving Twitter entirely would be best for me; that, however, is less clear. So I should start an experiment with significantly less frequent Twitter usage: see if I can validate the hypothesis that that will improve my life, and see if I can get more information about whether quitting Twitter entirely would be a net positive or a net negative.

Of course, it’s easy to say that I’ll check in on Twitter less, but it’s harder to actually do it. (That’s actually one advantage to the idea that I should leave Twitter: deactivating my account would be an easy way to enforce that.) I think probably the best step for me is to have a goal to not check Twitter on my phone: that way, I won’t check on it while at work, while commuting, while walking Widget, which carves out large amounts of Twitter-free space. (I already don’t check it on my computer: so, the goal would be to only check it on my iPad.)

The downside of that, of course, is that, when I’m home, I have much better things to do than to check Twitter! So it would probably be better for me to, say, only check Twitter while commuting instead of only checking it at home. But that would require willpower to enforce, which is hard; whereas not checking it on my phone just requires deleting Tweetbot. (I can even leave the main Twitter client installed so that I can still post: I won’t be tempted to use it to actually read Twitter, because I’m a “complete timeline” sort of person.) And, hopefully, if I’m only checking Twitter a few times a day, it still won’t use up too much of my time, because I can read through hundreds of tweets fairly quickly (and I can throw stuff off to Instapaper if I see potential rabbit holes to go down): I think the issue is more the interruptions rather than the total quantity of time?

I guess the other option would be to leave Tweetbot installed and just move it off of my dock, down to some hidden folder. That would probably work too, because it would be enough to break the habit of checking it frequently? But I think I’ll start by deleting and seeing what the effects are.

 

So: Tweetbot is deleted from my phone, and Music has taken its place in my dock. (Messages, Safari, and Castro v. 1 are the other apps there, if you’re curious.) Which, symbolically, feels right: really, wouldn’t my life be better if I were spending more time listening to music and less time reading Twitter?

tacoma

October 15th, 2017

What impressed me most about Tacoma was how normal it felt, and how surprising in turn that normalcy was to me. The game is full of AR recordings showing you silhouettes of the crew members of the station that you’re investigating; and a couple of those silhouettes were noticeably pear-shaped. Which, when I first saw them, surprised me; but then that immediately raised the question: why am I surprised? None of the silhouettes were particularly abnormal compared to people that I’d encounter in day-to-day life; and actually those silhouettes are probably more representative of my day-to-day life than the body types that I normally see in video games!

(Of course, the answer is obvious: video games generally aren’t interested in presenting day-to-day life. They instead want to present a stereotypically idealized life, and for female characters in particular, that puts pretty drastic limitations on what body types are acceptable.)

And once I got past the surface: Tacoma paints a picture of a life that’s surprisingly normal on a day-to-day level, too. The crew isn’t a band of intrepid heroes on a mission to save the galaxy: they’re a bunch of workers (contractors, even!) who are trying to get by. Making a living doing a job that they seem to basically enjoy, but where they’re also clearly not the ones in power; and people who have a lot of other stuff going on in their lives beyond their jobs, some good, some bad, all mundanely personal.

Again, totally normal in day-to-day life; and actually also normal in other artistic media. If this plot were in a book, I wouldn’t blink an eye; games, though, generally stay far away from that sort of mundane slice-of-life approach.

 

Tacoma does have a bit of a bite in how it depicts that slice of life, though. The game takes place in the future, which means that it needs to extrapolate; and the extrapolation is clearly interested in the struggle between corporations and workers. The workers are contractors, but with long-term, repeatedly renewed contracts (which is already depressingly familiar in tech circles, though the IRS did start cracking down on that a few years back). And the mention of, for example, “Amazon University” suggests that the spread of corporate control across society has increased, and payment in terms of “loyalty points” shows that scrip has returned. (But hey, those loyalty points are probably more valuable than most stock options! :rimshot:)

Fortunately, one trend that apparently has reversed compared to the present-day United States is unionization: the union is there to at least try to fight for the workers. Again, something unusual: this is admittedly largely a sign of the sector that I work in; union jobs definitely still exist in the country, but I don’t hear unions talked about day-to-day much at all, and the percentage of workers covered by a union has declined dramatically.

 

So: Tacoma is telling a story that’s unusual for the medium in terms of how normal it is, and that focuses on labor issues in a way that’s unusual both for the medium and for the trend of the times. (Or at least for the trend of the last two or three decades; in the last couple of years, discussion of labor issues has actually gotten quite a bit more frequent.)

And it’s doing this as a video game, within the walking simulator genre. I’m not an expert in that genre by any means, but I like what Tacoma is doing with it. The replayable AR scenarios give you something to focus on, and to observe from multiple angles, as you follow different characters through the same scene.

These AR recordings provide a better solution to the NPC problem than I’ve seen in other walking simulators. You don’t have to pick up context exclusively from the environment; they don’t feel like movies, because you can control the position that you’re observing them from, and can fast-forward and rewind as you please; and the screens that are available at various portions give them an extra texture. Also, having six characters to follow is a nice balance between letting you feel like you’re understanding a community rather than seeing one person’s story while avoiding spreading your attention too thinly: it’s definitely the case that each person’s story matters, but they also matter as a group. Not that I think this is necessarily a better or worse approach than, say, Edith Finch, but the AR recordings are a solution that works well and that is new to me.

The core plot is linear: sections of the station unlock in phases, and in each phase, the AR recordings are, conveniently, closer to real time. I can imagine a different game using the same approach of AR recordings but presenting them as a crystal, where they all were giving different lenses on the same point in time, with later recordings giving new insights that encouraged you to re-watch earlier ones. That’s not the choice Tacoma took; I’m a little curious to see if Her Story (which, conveniently, is going to be next month’s VGHVI game) will feel different in that regard.

 

The AR recordings aren’t the only plot/information/setting delivery device, though. In each recording, you get access to personal communications; each crew member has their own workspace with a desk that gives you more information about what they’re experiencing; and each crew member also has their own personal space. And then there’s the station as a whole, in particular with the common rooms as well.

Which is a nice balance of information delivery devices: significantly richer (and, to me, more pleasant) than the combo of audio logs plus textual infodumps that I’m used to. Also, there are a lot of objects to pick up; that turned out to be interesting because of how mundane the vast majority of them were.

Mundanity might sound bad, but it turns out that the quantity of mundane objects meant that the game got me thoroughly out of the adventure game mindset of “you must pick up every single object”. And the objects certainly weren’t all mundane: instead, they fit into a spectrum, with juice boxes and what not at one end, progressing to objects that mattered to somebody (jewelry, art works) but didn’t necessarily have a clear, explicit link to other parts of the exposition, then to letters and such giving a more direct bit of insight into what a person was thinking, and a few objects (keys, keypad codes) that are there strictly for gameplay purposes. So the result was that you could walk through the environment feeling like an (extremely nosy!) observer instead of like somebody playing a game looking for the next trigger.

 

I’m quite glad to have played Tacoma; and I’m glad that Fullbright continues to push the genre forward, both mechanically and thematically.

(Side note: if it’s a tossup between you playing on Xbox and PC, you might want to choose the latter. I played it on the former, and while it was definitely playable, it may also have been the single laggiest/jerkiest console game that I have ever had the pleasure of experiencing.)

refining visionaries

October 5th, 2017

At Agile Open California this year, Volker Frank led a session about developing leaders within an agile organization. And it got me thinking: one way to lead is to see a possibility more clearly than anybody else, to describe that vision in a way that helps others see its beauty, and to help guide people towards a realization of that vision.

You hear about this in the context of the trailblazers leading teams in developing a revolutionary new product. But that’s not the only type of visionary worth celebrating: there’s also the power and beauty seen by those who have a vision of what’s present but latent in a situation. (Refining visionaries? Distilling visionaries?) Looking at a collection of code that’s effective in its own way but is harder to work with than you’d like, seeing an underlying structure that contributes to that code’s power, and then helping others see and bring out that structure. Working with a team that sometimes surprises everybody with what it gets done but that, more frequently, is stuttering and stumbling; helping the team figure out what’s going on during the good times that’s absent in the bad times; and helping them set up a context that reinforces the good times.

I don’t want to minimize the power of visionaries who open up new possibilities; but if you’re always looking for something new, you won’t be living with your visions long enough to do any of them well. And I suspect there are psychological consequences, too: if you’re always looking for the next thing, then that reinforces a “grass is greener” outlook. So, while being static has the risk of settling for something that’s bad for you, this latter, “refining” sort of visionary can help turn that relative lack of motion into a positive characteristic, actively finding and nourishing the good in wherever you are.

And these refining visions are ones that agile practices reinforce. Most notably in the practice of refactoring, of course: you’re explicitly not changing the behavior of your code, you’re just making it better. Testing, too: tests are a way of reifying one aspect of your vision, helping specify the behavioral aspects of where you are right now.
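(To make that pair of practices concrete with a toy example — the function here is invented, not from any real codebase: the test pins down the behavior you’ve committed to preserving, and the refactoring is then free to bring out the structure without changing what the code means.)

```python
# A characterization test plus a behavior-preserving refactoring, as a
# minimal illustration of the "refining" practices above. The code is a
# made-up example, not from any particular project.

def total_price(items):
    # Before: tangled but working
    t = 0
    for i in range(len(items)):
        if items[i]["qty"] > 0:
            t = t + items[i]["price"] * items[i]["qty"]
    return t

def test_total_price():
    # The test reifies the current behavior: this is the vision we keep
    items = [{"price": 3, "qty": 2}, {"price": 5, "qty": 0}]
    assert total_price(items) == 6

def total_price_refactored(items):
    # After: identical behavior, with the underlying structure brought out
    return sum(item["price"] * item["qty"] for item in items if item["qty"] > 0)

test_total_price()
items = [{"price": 3, "qty": 2}, {"price": 5, "qty": 0}]
assert total_price_refactored(items) == total_price(items)
```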

 

Of course, the distinction between these two kinds of visionaries is hardly cut-and-dried. At first I was going to say that mathematicians and scientists are refining visionaries, for example, because they’re finding regularities and rules in examples present in the world, but that’s far too simplistic: I can’t characterize Grothendieck’s vision of a new approach towards the foundations of geometry as just a distilling of prior examples.

And the use of techniques in service of these visions isn’t cut-and-dried, either. I mentioned testing above in support of refining vision; but agile practitioners also use tests to help move the behavior of code forward. One thing that does characterize agile methods, though, is their preference for small movements: incremental design, and delivering value continuously rather than discretely.

So, if an agile team is going to be looking for a single type of visionary, the sort of visionary that would help the most is something in between, but one that (compared to non-agile contexts) is relatively weighted towards the local, refining side. By all means, have a vision of a promised land off in the distance. But don’t spend your time living over there: spend your time figuring out what the next step is that you hope will lead in that direction. And, while making that next step, pay close attention to your center of gravity, and don’t let it shift too much on any single movement.

 

Probably better still, though, is for an agile team to have many visionaries on it, instead of a single visionary leader. Some have clearer visions of a new world, some look particularly closely at the local terrain, but all can work together to take that next step.

the last guardian

October 2nd, 2017

If you’d asked me a couple of years ago, I would have guessed that either The Last Guardian would never be finished or else it would come out as somewhere between a disappointment and a disaster. And I would have been wrong: The Last Guardian is a Team Ico game through and through, not least in what it shows me that I’ve never seen in a game before.

It certainly looks like a Team Ico game: like Ico, with the buildings that you wander through; like Shadow of the Colossus, with a lovingly rendered large creature that you clamber over. (Only one this time!) I suppose, if I had to compare The Last Guardian to one of those two, it’s more like Ico: you platform your way through buildings, you have a companion, and it doesn’t have the formal austerity of Shadow of the Colossus. But it’s really not particularly like either of them, because of the aforementioned creature, Trico.

 

And I’ve never seen anything like Trico before. Or rather, I’ve never seen anything like Trico before in a game: part of the miracle of The Last Guardian is how much interacting with Trico feels like interacting with a dog. Trico has its own motivations, its own interests: it wanders around playing, exploring. But, balancing that, the game also quickly sets up a pack dynamic, with the two of you very much focused on each other: you have to provide food for Trico right at the beginning and care for its wounds, and Trico quickly decides that you are its person.

So, despite the aforementioned exploration, Trico gets nervous when you’re out of its sight for any period of time (and I felt bad when I was away from it!), and if you’re in danger, Trico immediately and unquestioningly flies into action to protect you. This sort of dual, asymmetric responsibility is something I’m very used to with dogs: as the human, it’s your job to make decisions and do certain kinds of providing, but both of you look after each other on an emotional level, and you know that caring for you and protecting you is one of your dog’s (or your Trico’s) foremost cares.

 

The Last Guardian isn’t just the best pet togetherness simulator that I’ve ever seen in a video game, though: through those interactions, it gives a new lens on and a new solution to some areas where video games have traditionally stumbled. One of those is the puzzle box nature of interacting with NPCs: games are full of NPCs where, if you press the correct buttons, they’ll give you something, in ways which actually lead in pretty creepy directions when translated into real-life terms. (Romance options in games are particularly prone to this.) I end up being more impressed sometimes by NPCs in games that don’t give you something no matter what buttons you press, but even that frequently feels more like an acknowledgement of the problem than an honest solution.

But Trico didn’t feel that way for me. You can ask it to do things (just to come over at the beginning, more complex things later); Trico will usually do what you want, but not always. Frequently it’s wandering around, looking at stuff, doing its own thing, acting like a creature with its own internal motivations. And, when you’re in danger, Trico responds immediately (modulo one psychological barrier the game presents), without being asked, because that’s what you do when somebody you care about is being hurt: you go help them. (Similarly, after the battle was over, I’d immediately cuddle with Trico, check for wounds, and cuddle with it some more: it’s not that the game is forcing me to do that, it’s just that that’s what you do.)

So: how did the game succeed so well in avoiding the puzzle box trap? Partly, of course, because of the care that they put into Trico: your interaction is the main focus of the entire game, and when such a talented team focuses on something like that, good things will result. But I also think that replicating the pet dynamic turns out to be a surprisingly good target: pets have enough of an internal life to be able to behave like their own creatures instead of state machines responding to inputs, but they’re simpler than humans, so the seams don’t show nearly as much. Also, it helps that the game establishes a core assumption that both of you care about each other very much, so certain behavior doesn’t have to be justified.

 

The other game concern that The Last Guardian sheds light on is violence. In most games, your character is a psychopath and a mass murderer; game context justifies that behavior, but almost never seriously interrogates it. In The Last Guardian, though, the violence is largely delegated to Trico: you sometimes knock down enemies, but ultimately Trico is a much more capable combatant than you are. And the game does interrogate that violence: partly (as the game goes on) in a way that I have seen in some games, by revealing the external forces that have made Trico what it is, but, much more importantly and rarely, by showing Trico’s reaction.

When I said above that I always cuddle with Trico, I said that the game isn’t forcing me to do that: that’s true (I think, I never tested it!), but it isn’t the whole truth. Because Trico seems genuinely shaken up after each battle: its reaction doesn’t (just) seem like an adrenaline high, it seems like a genuine discomfort with what’s happened, and a discomfort not just with what has happened to it but with what it has done.

You even see this in the special action that Trico has in the beginning, where you can use its tail to shoot lightning. Even when you use this ability to destroy environmental obstacles instead of to attack enemies, Trico doesn’t feel comfortable with what just happened: it’s a lens on the violent-behavior-as-shaped-by-external-forces scenario that I’m not at all used to seeing. (Imagine an RPG where, every single time you used a spell, you were shaken up by what you saw revealed in yourself!)

But Trico seems more bothered by its fights with the magical armored warriors than by its use of lightning. And this is a very real reaction, that you can interpret in many ways: maybe battles traumatize Trico because of the dangers to Trico, maybe battles traumatize Trico because of the dangers to you, maybe battles traumatize Trico because of what Trico sees in itself. Whatever the case, Trico needs comfort after every battle.

And, initially, the battles are mercifully rare. In the latter half of the game, though, they become more frequent; they’re never normalized, either to Trico or yourself, but you can see steps in that direction. Which, in turn, is its own lesson on the horrors of violence: you can see an important part of both of your cores getting buried, it feels like a loss, it feels like a scar, it feels like you’ll probably need therapy later.

 

The game does more: in particular, it weaves in context about what led to the current situation, how you and Trico got here and where you both came from. And then there’s the ending, which is the one aspect of the game that I question: it feels gratuitously dark to me, and I also neither like nor agree with what the ending says about your and Trico’s relationship. But, the (relatively minor) blemish of the ending aside, the game is a masterpiece: each Team Ico game shows me things I’ve never seen before, things that in retrospect were important at a fundamental level, and I’m not convinced that The Last Guardian won’t end up being the game of theirs that matters the most.

the legend of zelda: breath of the wild

September 17th, 2017

Breath of the Wild is, of course, a stunning game. And a surprising one, both in how it departs from Zelda tradition and in how I reacted to those departures. No more progressive unlocking of weapons/tools/areas, no more restricting those areas to your specific skill set / power level (at least after the first two hours of the game), no more mindlessly whacking away at mindless enemies.

Which could have been a problem for me: I like the well-crafted Zelda unlocking experience, and I don’t like scarcity mechanics in games. Also, while there are games where I like to focus on skill, most of the time I play games for other reasons, and skill development has certainly never been my focus when playing Zelda games. So even in the opening plateau, I was a little nonplussed by the cold mechanic and its associated scarcity: I didn’t have a lot of hot peppers, the mountain wasn’t small (at least when starting the game; in retrospect, it was tiny!), and the bridge that I assembled to get across a frozen river was a little fiddly, especially given the clock ticking down from my cold resistance: do I really want to feel on edge like that?

 

Obviously scarcity didn’t turn out to be a problem in practice on the plateau, and I didn’t seriously expect it to be. But the scarcity mechanics continued over the course of the first quarter of the game: you don’t have a lot of weapon slots and weapons are constantly breaking, and you don’t have a lot of hearts either.

I turned out to get along with that surprisingly well, though. Partly because Breath of the Wild is a Zelda game: I had faith in the game’s designers to give me a fair amount of room to play with, instead of creating a game that only the hardcore would love. Partly because, for the two most clearly present scarcity mechanics, it was reasonably clear that scarcity wasn’t going to lead me into a pit that I couldn’t dig out of: I wish I had more weapon slots, but enemies drop weapons as well, so I didn’t see any reason to worry that I’d actually run out of weapons: it was more an issue of not having my favorites at any given time. (Also, I’d started the plateau without any weapons at all, so I had some confidence that I could recover!) And, as to hearts: sure, you might die, but that doesn’t set you back very far, so it didn’t take me too long to accept death as just part of the game.

Digging into dying a bit more: if you’re seriously worried about dying (and there are certainly monsters that you’ll run into that you’re not equipped to handle in the early game), then going around enemies is almost always a viable strategy: the open world means that paths are available, the lack of an experience mechanic means that you don’t get punished for not fighting. Alternatively, if you lean into fighting while low on hearts, then that gives you excuses to work on combat strategies, which one of the plateau shrines teaches you. So if you want a skill-based game, it’s there.

 

The upshot was that I rather enjoyed that first quarter of the game: I had to sneak around a little more than I would have liked (e.g. during the approach to the Zora domain), but I got into a decent number of fights, and in general I didn’t feel that I was being prevented from exploring the world. And there were periodic pauses for me to learn more about the world (with two towns in particular as punctuation), and the non-combat shrines are almost entirely level-agnostic.

It took me longer than I expected to solve the puzzles in the Zora divine beast dungeon, but I managed that without walkthroughs, and I learned something concrete in doing so, that I had to be a little more systematic in my thinking about the tools that the plateau had taught me. And, in general, I was learning about the mechanics that the game provides, and the ways that those mechanics combine: one of the really remarkable aspects of Breath of the Wild is the way the game takes a relatively small number of systems, gives a relatively small number of variables within those systems, and combines them in as orthogonal ways as possible. (Leading to the chemistry system that cooking uses, or the way that you can survive cold either by wearing warm clothing, eating something cooked from a warm ingredient, or having a flame sword as your currently equipped weapon.)
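(To make that orthogonality a bit more concrete, here’s a minimal sketch, in Python, of the shape such a combination might take; all of the names and numbers below are hypothetical illustrations of the idea, certainly not anything from the game’s actual code.)

    from dataclasses import dataclass

    @dataclass
    class Character:
        armor_warmth: int = 0   # from worn clothing
        food_warmth: int = 0    # from an active cooked-meal effect
        weapon_warmth: int = 0  # e.g. a flame sword as the equipped weapon

        def total_warmth(self) -> int:
            # Each system contributes independently: no special cases are
            # needed for particular combinations of sources.
            return self.armor_warmth + self.food_warmth + self.weapon_warmth

    def survives_cold(character: Character, required_warmth: int) -> bool:
        return character.total_warmth() >= required_warmth

    # Any single source, or any combination, can cross the threshold:
    assert survives_cold(Character(armor_warmth=2), 2)
    assert survives_cold(Character(food_warmth=1, weapon_warmth=1), 2)

The point being that each system stays small on its own; the richness comes from the fact that they all feed into the same shared quantities.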

I did, of course, feel underpowered when fighting the final boss in that first divine beast, and that’s one fight that you can’t avoid. But I used that as an excuse to work on my combat: in particular, he had some powerful moves that were fairly well telegraphed, so they were a good excuse to work on my dodge jump plus counterattack. It took me quite a few tries to beat him, but I succeeded, and felt proud in doing so.

 

The game shifted significantly for me after completing that divine beast: getting a heart container for completing the dungeon helps a small but (at that stage in the game) noticeable amount, but much more importantly, you get an ability that causes you to resurrect when you die with slightly more than full health. There’s a cooldown on that resurrection ability of course, but the combination of those effects meant that my health bar effectively almost tripled in length. I certainly won’t say that I stopped dying, but it was much rarer; also, by this point, I had a decent understanding of the basic systems of the game.

The upshot was that the game had changed from one of scarcity to one where I could relatively confidently wander around. I certainly still wished that I had more weapon slots (heck, even at the end of the game I wished I had more weapon slots!), but I had enough that I didn’t have to worry about weapons breaking: it was just more of a nagging feeling that I wished I had slightly more weapon options, or that I could keep a torch on hand instead of hoping I’d find one when I needed one.

There were still environmental issues that I was more affected by than I’d like (e.g. I didn’t have good armor to protect against cold), but by that point I felt like I understood the systems: I could use food to deal with issues like that in the short term (and I’d cooked lots of food!), and I had faith that, if the game was going to make me spend lots of time in a specific sort of hostile environment, then it would give me better tools for dealing with that environment. (Presumably by letting me purchase armor; on which note, by this time, I was starting to accumulate a decent amount of money, instead of feeling like my purse was always running empty.)

 

At this point, the game became magical, or at any rate changed the tone of its magic. I understood the range of basic experiences the game had to offer me, and I could now make an informed choice between them: combat isn’t my thing, so I didn’t have to focus on it (though admittedly letting my combat skills rust hurt me when it came to the end of the game), but I really liked exploration, so I could spend resources improving my stamina meter, wear my climbing gear, and climb all over the place.

And: what a place to climb, what a world to explore. The world of Zelda feels organically alive in a way that, in my experience, has almost no parallel; Shadow of the Colossus, perhaps, but I’m not sure what else I’ve played that gave me this feel. Every hill feels like it’s in the right place, every tree feels like of course there should be a tree there, every river, every mountain.

And, like Shadow of the Colossus, every ruin; but, unlike Shadow of the Colossus, of course, there’s quite a lot of life. In the wrong hands, peppering the landscape with activities would feel forced: villages placed because we need a plot hub or a side quest, resources popping up every few steps because we need crafting, and so forth. And, the thing is, Breath of the Wild has all of that, but somehow it works! I have no idea why the resource gathering didn’t anger me the way it did in Dragon Age Inquisition, but it didn’t; I have no idea why the Koroks felt like an exciting, magical part of the natural world instead of an artificial stimulus designed to mask the designers’ lack of confidence in the inherent interest of the world, but they did.

Hmm, actually, I probably answered that last question as I was in the process of writing it: the fact that the basic geography of the world is so well done means that embellishments don’t come off as covering up flaws, because they aren’t. I’m not going to go all Christopher Alexander here, but I suspect that thinking about the world as a natural geography that gives rise to centers that plants and animals (including intelligent beings) successively embellish makes those embellishments a source of joy, despite their instrumental nature.

The contours of hills, mountains, and water in turn lead to trees (sometimes working together as peers, sometimes standing out on their own as punctuation), grasslands, and yes, even mushrooms that you can use for your cooking. And that not only makes for a natural home for animals, it also means that Koroks fit in not as rewards for the player but as creatures who have a special appreciation for particularly wonderful parts of the geography, or who simply like to play around with the world around them. And, of course, humans and the other species fit in as well: their roads fitting in among the contours of the land, their bridges, their stables, their towns, and the ruins where they’d once had a flourishing society. With Shadow of the Colossus, we saw what would happen if you punctuate a living topographical landscape with a few high-impact centers; with Breath of the Wild, we see what happens if instead we have the centers be much more pervasive, at many more levels of scale.

 

I’ve never played a game like this; and I’ve certainly never played a Zelda game like this. Though, having said that: Zelda at its best has brought life to its worlds in ways that few other series can match. Ocarina of Time treated its landscape and its locations with love and care as well; Majora’s Mask brought out the living rhythms of a city. Breath of the Wild is remarkable in the scale of the living world that it presents, and in the way it proceeds by combining systems; but of course there’s a lot of authoring in Breath of the Wild, too, we’re not talking Minecraft here.

And, for that matter: there is one aspect of the authoring of Ocarina that I actively miss, namely its music. Every time I passed by a stable in Breath of the Wild, I felt at home, and that’s entirely due to the power of Ocarina’s music still going strong two decades later. I’m not saying that Breath of the Wild made the wrong choice to not emphasize music as much: that’s a natural fit for the less-authored experience that it presents, and its sound design is very good in its own way. It’s just a reminder that, while Breath of the Wild feels to me a lot like a local maximum in the design space, it’s not the only possible local maximum: there are other ways in which games can nourish my soul.

I’m very happy that Nintendo is showing this year that they remain experts at navigating design spaces, in ways that bring delight and sustenance. I’d been worried that the company was in decline, but no longer: now I’m just glad to have the privilege of experiencing their works.

free speech and responsibility

September 3rd, 2017

In Germany, it’s illegal to display Nazi symbols and symbols of similar nationalist parties, and it’s illegal to be a member of such organizations. Which, to me as an American who grew up under the influence of current U.S. free speech law and of the ACLU’s defense of Nazis in the Skokie case, mostly seemed wrong.

This year, though, even before Charlottesville but especially after that, I’d been less sure that Germany’s approach was wrong. I like general rules like free speech absolutism; but we’re talking about banning Nazis here: do I seriously think that banning Nazis leads to worse outcomes than letting them march around?

 

Thinking about it a little more, I can come up with two basic arguments in favor of free speech absolutism. One is a belief in the power of the marketplace of ideas, combined with the existence of examples of ideas that I now support that were once considered morally and politically abhorrent by many. I very much think the Catholic Church was wrong to sentence Galileo for heresy; I’m not confident that I don’t have similar blinders myself, and I’m certainly not confident that our lawmakers don’t have similar blinders. And I do have some faith in the ability of humanity to move in a more moral direction; if you combine those two, then an absolutist approach to free speech looks pretty attractive.

The other argument is based on a combination of slippery slopes and power dynamics. If you ban X, then it’s tempting to ban things that are similar in some ways to X; and then my concern is that what actually gets banned in practice starts to get strongly shaped by power dynamics. The concern here is that what starts as, say, a law against hate crimes against LGBT people turns into a law against negative speech based on sexual orientation turns into straight people using the law against gay people who say things that straight people don’t like. I can say that that’s ridiculous, that considerations about oppression have to take into account structural power dynamics; but those of us with structural power have a strong vested interest in not having such considerations at the fore.

 

Both arguments come with responsibilities. Yes, in general I think that good ideas drive out bad, but it’s not a passive process: people have to fight for the good ideas, fight against the bad ideas. So, if Germany were instead to have adopted a pure free speech approach, a moral imperative would come along with that: keep the horrors of the Nazi regime in the front of people’s minds, to make it harder for people to pretend that it’s a less objectionable form of nationalism. (And, as it turns out: Germany has done this as well, they’re covering their bases.)

Whereas, for the slippery slope argument, the onus is on the other side: can you draw a bright line to set off the ideas that are so bad that they’re considered beyond the pale, to prevent more and more ideas from getting banned? Here’s my best candidate for a bright line: an idea that led your country into a war that it lost, and that in retrospect you feel was horrific from a moral point of view, is worthy of consideration to be banned. Because there aren’t going to be many ideas like that, and any idea that satisfies that criterion has been seductive enough to be actively extremely dangerous, and hence a candidate for extra measures against it.

 

So: even though I’m still pretty sympathetic to free speech absolutism, I can’t convince myself that Germany has made a bad choice here: what’s the concrete harm that comes from their banning Nazi symbols? But, of course, I’m not German, I’m American. Should we make the same choice?

If we were to ban symbols, the argument above would mean that those symbols should represent something horrific in our past that led to a war that we lost. (To be clear, I’m not making an argument in this post that the US should ban Nazi symbols: I think that’s worth considering, too, but that’s a war that we won, and I’m rather more nervous about winners treating their victories as inherently moral than I am worried about losers using their losses as an opportunity for moral reflection.)

And there’s one obvious candidate, though of course it’s not a perfect fit for the above criterion, because, depending on who you think of as “we”, it’s a war that we both won and lost. (I grew up in the North, not the South.) Namely, the Confederacy, and symbols of similar white supremacist groups, e.g. the KKK. Slavery is the United States’ great moral stain, and its aftereffects are not just still being felt but are actively being propped up a century and a half later.

 

On the one side we have the First Amendment; I can imagine a version of the Fourteenth Amendment that would have taken a stronger stand against membership in white supremacist groups, but we didn’t make that choice. That means that this is a hypothetical argument, since our Constitution is on the side of free speech absolutism; and, as I said above, that in turn imposes responsibilities.

Responsibilities which, as a nation, we have failed abysmally to live up to. The fact that I didn’t have to learn much about Reconstruction in school is, itself, a sign of that failing; my impression, though, is that Reconstruction had a lot going for it, but we gave up just over a decade into the process, and the South fell back very quickly into an extreme white supremacist society. Jim Crow continued until a full century after the Civil War ended; and, even now, we have a New Jim Crow with a shocking proportion of the African American population of the country under direct police control.

And, at the same time, explicitly pro-Confederate symbols and historiography are lamentably common. With that comes a recasting of the Civil War as being about “states’ rights”, without placing front and center the fact that the primary right that the Southern states were fighting for was the right to have slaves, a “right” which is in fact horrifically wrong.

Also, Trump voters are apparently significantly more likely to think that white Americans are more discriminated against than black Americans. Which is the slippery slope / power dynamic problem that I was worried about above; its presence here, though, suggests that I shouldn’t think about it primarily in the context of speech bans, because it’s happening anyways? Though I’m sure it would happen in the context of speech bans, too: so I guess it’s another argument for keeping the horrors of white supremacy present enough that we can’t sweep them under the rug.

 

We’ve fucked up as a nation, and are continuing to actively do so. And, at this point, I have no patience for arguments about freedom of speech and listening to all sides that treat this as an abstract question, divorced from our history and the ongoing effects of that history.

ipad orientations

August 13th, 2017

The iPad can be used in either portrait or landscape orientation. Different iPad interactions have different natural orientations: if the interaction involves video or (usually) images, then the natural orientation is landscape, because you want to fill up most of your field of vision. (So TVs are wider than they are tall.) But if it involves text, then the natural orientation is portrait, because that lets you focus on as much text as possible without requiring your eyes to scroll horizontally too much. (So books are taller than they are wide; and particularly wide text formats, like magazines and (especially) newspapers, frequently use multiple columns.)

That means that you might want to switch orientations depending on what you’re doing; Apple had the device switch orientations if you turned it on its side, but the initial iPad models also included a rotation lock switch for people who wanted a fixed orientation. As somebody who is interacting with text on my iPad the vast majority of times that I use it, I leave the rotation lock switch on (unless I’m watching a video): having the device switch to the wrong orientation when you hold it close to horizontal is REALLY FREAKING ANNOYING. Every once in a while, I try it with rotation unlocked; I usually last for about two days before giving up and going back.

Apple, however, decided that the switch wasn’t pulling its weight, so they got rid of it in recent iPad models. (There was also one period when they decided the rotation switch should act like a mute switch; that was just weird.) I assume this was at about the same time they added Control Center with relatively easy access; and I agree, using Control Center to turn off rotation lock isn’t horrible. But it’s more work than flipping a switch; also, I’m usually doing this when I start watching a movie, and that’s exactly when I don’t want something extraneous appearing on the screen. (Which Apple apparently doesn’t care about too much, as evidenced by the positioning and opacity of the iOS 7+ volume indicator.)

Nothing I can’t live with, but honestly: I think that, if the new iPad Pro models had added a rotation lock switch, that would have pushed me over the fence to buy one; I care about the rotation lock switch at least as much as most of the new features that they did in fact introduce.

 

More recently, Apple’s been improving its multitasking support for the iPad; and many multitasking features only work in landscape mode. And, with the iPad Pro models, they added a new keyboard connector; it’s on the long side of the device, which means that it only works with keyboards in landscape mode.

I can see why Apple made these choices: if you want to run two apps side-by-side, then you need horizontal room, and I can imagine people using the iPad for more serious work do need to do that. When I look at iPad-in-a-horizontal-case configurations, though, it just looks to me like a laptop; I’ve got a laptop, though, and that similarity just pulls me towards using a laptop. Whereas the iPad when held in my hand still feels different and magical to me: it’s a piece of paper that can turn into anything.

Which is fine, I guess? I still get lots of use out of my iPad as-is; and I imagine that, if I took up drawing, it would feel pretty magical doing that, too. So why worry about the fact that, when I’m typing, I’m drawn to a more traditional computer? And maybe that’s the answer.

But I’ve switched to a simpler text editor when writing blog posts; and that is a situation where the “magic sheet of paper” analogy feels to me like it would work well. And it’s a situation where I want to work in portrait mode: I want to see more rows of text in a narrower column rather than fewer rows of text in a wider column. (I don’t need side-by-side multitasking there, either; I occasionally switch to Safari to find a link, but I wouldn’t need Safari visible at the same time as a text editor.) I can even imagine that it would be useful to take the iPad off of the keyboard and hold it in my hand when editing, to have a physical shift that models the desired conceptual shift.

When writing blog posts, I am usually sitting in a chair, with the laptop on my lap; that could be an issue: in the past, the iPad keyboards that I’ve used haven’t really felt stable in that configuration. Maybe keyboard technology has improved since the last time I looked; but maybe that’s another sign that I should just stick with a laptop.

 

Or maybe I’m looking for a solution to something that’s not a problem: laptops work great for me for writing, iPads work great for me for reading. I just hope that Apple doesn’t keep on going farther in a direction that emphasizes landscape over portrait: Apple Maps has one design decision in particular that makes very little sense in portrait mode, which makes me worry that they just don’t care about portrait mode iPads these days, especially iPads that are locked in portrait mode instead of flipping orientation as you rotate them.

Then again, people like to worry about Apple not caring about this or that any more; most of those worries end up not happening, and most of the time, when they do come to pass, the outcome turns out to be better anyways. So I shouldn’t spend too much time worrying about it…

what remains of edith finch

August 6th, 2017

I wish I had something coherent to say about What Remains of Edith Finch: it’s a rather striking game, I just can’t put my finger on why?

Which, maybe, is a reflection of the game itself: it’s more a collection of little games than a single game itself, so why should I expect myself to be able to write about it coherently? We were talking about it last week in the VGHVI Symposium; coming in, if I’d thought about it much I would have labeled Edith Finch as a walking simulator, but once you get past the introduction, that label really doesn’t fit: the walking simulator part of it is a frame story, and the internal games built on the ancestors’ stories are foregrounded much more.

I actually wonder if the initial story is intended to explicitly play with that concept: Edith Finch isn’t a walking simulator, it’s a scampering-along-branches simulator, a flying simulator, a slithering simulator! (There are a lot of control schemes in the game.)

 

Another question which the first story explicitly asks is: how much of what you experience is real, how much is a hallucination or otherwise imagined? To be honest, that question is not entirely to my taste: I like works of art that don’t put boundaries between the realistic and the fantastic, and when confronted with such a work (Totoro, say), I take it as it is: it generally doesn’t cross my mind to even wonder how I should be interpreting the fantastic segments in light of the non-fantastical aspects of the world. Though that initial story is somewhat of an outlier in that regard in Edith Finch; I’m happy to see that story as a source of questions for people who want to approach the game in a mood of figuring out what really happened in the situations represented by the stories we see (and, for that matter, what really happened in the family outside of the stories), without emphasizing the question so much to people like me who aren’t in the mood to grapple with such questions.

Which reinforces my hypothesis from before: the game encourages an impressionistic approach, throwing off handholds that you can choose to grasp or to leave behind, that you can choose to link or to let stand alone.

 

To be clear, that doesn’t mean that there’s not real substance in Edith Finch. It touches on some pretty serious subjects; and some of those subjects, frankly, are ones that I’m not entirely sure I want to spend too much time confronting directly in art this summer. Sometimes, that means that I’m seeking out art works that avoid those topics; sometimes it means that I’m engaging with art works that confront them more directly and wishing that I hadn’t.

But Edith Finch’s more oblique approach has a real virtue for me: it approaches subjects lightly, making those subjects available should I choose to engage with them, but also letting me gracefully skirt around them as I choose, acknowledging their presence but letting me keep as much detachment as I wish.

 

It’s a very impressive second game. The Unfinished Swan had a neat mechanical idea at its core, but while I was glad that it was trying to approach a serious theme, I wasn’t so sure about the way it approached that theme or even the choice of theme itself. Edith Finch shows that neither the mechanical inventiveness nor the desire to confront real issues was a fluke; with it, I think the studio is really starting to put something together.

open offices

July 31st, 2017

Over the last week, I saw several attacks on Apple’s new offices, responding to information from this Wall Street Journal article by Christina Passariello: a Six Colors article by Jason Snell; a Daring Fireball (John Gruber) link to Snell’s article plus a, uh, smug follow-up; and a take from Anil Dash.

What surprised me was the definitiveness with which these takes asserted that open offices are bad: for example, Dash says right up front in his headline that open offices are “something their programmers definitely don’t want”. And the reason why this surprised me is that the intellectual tradition about software development that I’ve found most informative comes to the polar opposite conclusion, that shared working space is good and individual offices are bad; and my personal experience also hasn’t backed up the idea that individual offices are clearly superior for programming. So, while I don’t expect everybody or even most people to agree with me either intellectually or in their lived experience, seeing multiple takes claiming that it’s obvious that the opposite view is correct was a reminder of how different the worlds are that different people live in.

But hey, maybe things have changed over the last fifteen years, or maybe I hadn’t thought through the beliefs that underlie my assumptions. So I figured that it’s a good excuse to write up where I’m coming from. Note, though, that I am (mostly) not saying that a) people are wrong to not prefer open offices, b) open offices are a good fit for Apple, or c) Apple is doing a good job with open offices. I’m mostly just interested in sketching out the underlying assumptions behind the two points of view, to understand what is underpinning each of them.

 

With that preamble out of the way, I think this sentence from Snell’s piece is a good place to start:

Sometimes I think people who work in fields where an open collaborative environment makes sense don’t understand that people in other fields (writers, editors, programmers) might not share the same priorities when it comes to workspaces.

I’m not a professional writer or editor, but his statement there feels true to me for those fields; as a programmer, however, that statement felt bizarre. When programming, I’m working with a group of other people to produce a piece of software that I couldn’t come close to producing by myself and where I don’t want outsiders to be able to tell which parts were done by which people; to me, programming is a quintessentially collaborative field. (Yes, I realize that solo software projects exist, I’m not talking about those.) So why wouldn’t we want our environment to reflect that collaborative nature?

 

The software development methodology that I feel has worked this line of thought out the best is eXtreme Programming (XP). XP is very focused on breaking down boundaries within a team: for example, code is owned by all of the developers on the team instead of having individual developers own different parts of the code. XP also promotes fast feedback: short cycles even within your daily and weekly development rhythms, frequent releases, and frequent back-and-forth between the development side and the product side of the team.

There are a few reasons for the focus on shared ownership. One is that nobody has a monopoly on the best ideas, even in an area of the code that they know very well; so let everybody contribute. Another is that it allows ideas to pollinate, with an idea over here bearing fruit over there. A third is reducing risk: you can’t reliably figure out in advance which ideas are going to really catch on, and if you want to be able to follow up on the successful ones, you want as many people as possible to be able to help; also, team composition changes, and you don’t want to be screwed over if somebody leaves the team. (This is gruesomely known as maximizing your “Bus Number”: the largest number of people who could be hit by a bus and have your product survive.)

As to fast feedback: you don’t really know how a decision will turn out (whether a micro one, like a function name, or a macro one, like a new product feature) until the decision has borne fruit: so get to that state as quickly as possible! A key point here is that product development speed isn’t necessarily the best metric: going very quickly in the wrong direction, without being able to course correct for weeks, is going to turn out less well than going at a more measured pace but being able to course correct multiple times a day.

 

As a result of this, XP explicitly recommends that the entire team (not just programmers, product people as well!) sit in a common space. From a fast feedback point of view: you can get design feedback (whether from another programmer or from a product designer) most quickly if they are literally right there next to you. And yes, that level of proximity really does make a difference: any physical distance or lag in response time noticeably increases the chance that a programmer will go ahead with what makes the most sense to them, instead of involving somebody else; I’ve seen this repeatedly.

And, from a shared ownership point of view: sitting together obviously has symbolic value. But it also means that there’s no barrier to people working together impromptu as they discover that that’s appropriate; and it means that the natural location for design artifacts (whiteboard scribbles and the like) is in a shared space. Also, overhearing conversations means that you’ll learn something about code that you might be working on next week or even later in the afternoon; or you might overhear a conversation where you realize that you have something of value to contribute, and you can jump in.

 

The flip side of that ambient conversation is that it’s noisy, it can make it hard to concentrate. One way that XP attacks this issue is through pair programming: it turns out that two people working together can tune out outside noise (while not completely disconnecting from their environment) better than one person working solo. Also, it turns out that two people, when interrupted, can get back to full speed on their task more quickly than a single person can, because they can leverage both of their partial mental states.

And pair programming helps with the other goals that I mentioned above. It obviously helps with shared ownership, not only by making a symbolic statement but by giving a high-bandwidth route for knowledge sharing. It even helps in a more subtle way: one surprise that I had when I first started pair programming was that, when working with somebody else, when we got to a thorny bit, it would take us about 10 minutes to say “we should ask X for advice on this” in a situation where, when working alone, I’d probably bang my head against that same issue for an hour. And, as to fast feedback: the fastest feedback is from somebody who is in the thick of the problem with you, and pairing largely eliminates the need for a separate code review step because code reviews are instantaneous.

There are other XP techniques that help with working in shared spaces, too: I’ll call out test-driven development in particular as helping minimize the negative impacts of interrupts, because it encourages you to work in a way where, at any given point, you have one very clearly stated next micro-problem that you’re trying to solve.
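(To illustrate what I mean by one clearly stated next micro-problem, here’s a minimal sketch in Python, using the standard unittest module; the function and the tests are hypothetical examples of mine, not from any real codebase.)

    import unittest

    def normalize_username(name: str) -> str:
        # What's implemented so far: just lowercasing.
        return name.lower()

    class TestNormalizeUsername(unittest.TestCase):
        def test_lowercases(self):
            self.assertEqual(normalize_username("Alice"), "alice")

        def test_strips_surrounding_whitespace(self):
            # The one clearly stated next micro-problem: this test fails
            # until normalize_username also strips whitespace. If you're
            # interrupted, rerunning the suite points you right back here.
            self.assertEqual(normalize_username("  Alice "), "alice")

    if __name__ == "__main__":
        unittest.main()

The point isn’t the specific function, of course: it’s that, after an interruption, running the tests tells you exactly which micro-problem you were in the middle of.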

 

XP is a couple of decades old at this point, but I don’t think anything I’ve written above is less applicable now than it was when XP was being created. And, in terms of newer software development trends, I want to call out DevOps: more and more of us are working in a world of cloud software operated by the same teams that are developing it.

And the last thing that I want in a DevOps world is individual code ownership, with people working in isolated offices. In those (hopefully rare!) situations where something is going wrong, I want as many people as possible to swarm on the problem, attacking it in meaningful ways from different points of view, getting it fixed as quickly as possible. And it’s really hard to do that if those same people haven’t all worked together on the software in meaningful ways in non-crisis modes.

Also, from a personal point of view: if I’m on vacation, I want to be on vacation, which means that the last thing that I want to have happen is for me to be the only person who can fix a problem in a piece of code. (Or, if it’s somebody else on vacation, the last thing I want to do is to have to choose between a bad situation for our customers versus my coworker having their vacation interrupted!) I strongly advocate against individual ownership in a DevOps situation, and shared space is really helpful.

 

So, to my mind, that’s what open offices are optimizing for: collective ownership and fast feedback. Whereas individual offices are optimizing for concentration: the ability to get into flow, and the ability to hold complex problems in your head at once.

And those are obviously good things! But I don’t see them as unalloyed good. Flow is great, it helps you work at high speed; the main question I have is whether that high speed has you going in the best direction. (And also, this is an area where pair programming helps as well: pair flow is a thing.) And, if you’re working on something that’s inherently complex, then yeah, you want to be able to hold it in your head; but better still to get that task done while making it less complex, which is where incremental progress, test-driven development, and refactoring come in.

At any rate: I think that both points of view are coherent ones, and can be carried off well. As a development team, pick what you want to optimize for; as an individual, pick what matters most for you; and then make it work in the context you’re in. You don’t have to carry out either plan in all of its force for it to work, either: for example, while in general both theoretically and in my lived experience I prefer the XP ideas, the truth is that I’ve spent very little time pair programming over the course of my career, and it’s been okay, I’ve still gotten a lot out of shared ownership, incremental development, test-driven development, etc. (And I’m open to the possibility that I would be a more effective programmer if I spent more time pair programming.)

 

A postscript on the Apple-specific questions here. First, I have no idea if Apple is doing a good job with their open offices; looking at the pictures, I can see spaces that look like they’re plausibly a good size for a single development team, but who knows, and I also don’t know whether those glass walls would mean that you’re constantly being distracted by other teams or if they would end up a welcome source of light. And I have no idea how representative the few photos in that Wall Street Journal article are of the campus as a whole.

In terms of Apple’s culture: I’ve never worked there or spent a lot of time talking to people who do work there, so I have the farthest thing from an informed opinion; Snell and Gruber have a lot more info there. (Though at least I do work as a programmer, not as a writer!) But, honestly, I’m dubious of open offices succeeding as a general rule in Apple’s development culture: this is the company that publicized the notion of the Directly Responsible Individual, which is pretty much the opposite of the collective ownership approach that leads to open offices. (And I’ve heard multiple anecdotes about specific pieces of software being written by individuals, too.)

So if I were in that sort of culture, and if I knew that my neck was on the line for some specific piece of code, then yeah, I might want to spend time in my office working on that code instead of talking to other people: it might not turn out as well, I might make mistakes without realizing it, but they’d be my mistakes. And I wouldn’t be able to help other people as much; that would make me sad. So, all things being equal, I’d prefer not to work at a company like Apple that loves the idea of DRIs, so I might sort myself out of Apple.

I am curious how much the above still holds in current Apple, though. For one thing, Tim Cook seems a lot more focused on collaboration than Steve Jobs seemed to have been; maybe that’s filtered down through the company. (Though I haven’t heard about the DRI concept going away.) For another thing, Apple’s software has changed with the times: they run a lot more services than they used to (which, as per my DevOps comments above, says to me that shared ownership is the right approach), and clearly their OS development is much more incremental than it was a decade ago, with a regular yearly cadence and with significant changes appearing even in point releases. So it wouldn’t shock me if there are increasing numbers of software development teams within the company that prefer open working spaces.

 


monument valley 2

July 19th, 2017

Monument Valley 2 is basically just what you would expect from a Monument Valley sequel, with the added twist that the second character that they’ve added to it has the most amazingly charming movement animation that I’ve ever seen. I could try to go on about it for hundreds of words, but ultimately: if you haven’t played the first one, probably play it first; if you have and liked it, play this one too!

acupuncture

July 13th, 2017

Miranda has had very bad migraines for much of this year. We’re not sure why they’ve gotten so much worse / more frequent this year, and the initial treatments her doctors prescribed were almost completely ineffective, and in some cases may have made some aspects of the situation worse. Eventually, we found a medication which led to a more noticeable improvement (though also with a more noticeable side effect), but even with that we’ve gotten pretty desperate for effective treatment options.

So Miranda wanted to try out acupuncture. I wasn’t against giving acupuncture a try, given how ineffective Western medicine had been so far, and when I mentioned the acupuncture idea in a couple of random conversations, I got surprisingly strong positive reactions: people saying “I had serious migraines, and acupuncture made a big difference.” So I asked my fellow Tai Chi students for recommendations (since I figured they’d be more likely to have tried acupuncture than other social groups I’m part of), and made an appointment with one of the recommended acupuncturists.

 

As somebody raised by scientists, I do not feel entirely comfortable with this. Though, as somebody with a fondness for mysticism (and as somebody who is the sort of person who would take Tai Chi classes), there’s a part of me that’s favorably inclined towards this sort of thing; but that, in turn, raises my scientist part’s alarm bells even more, because it suggests that it will be harder for me to evaluate acupuncture dispassionately.

Now, when I say that I don’t feel entirely comfortable, that doesn’t mean that I think that trying out acupuncture is an actively bad idea. As far as I can tell, acupuncture is unlikely to be harmful, and Western medicine has done a pretty bad job treating her migraines so far: so it’s not like I’m comparing acupuncture against a treatment with solid experimental evidence for its effectiveness. And I also don’t feel like I have a super strong reason to believe that acupuncture shouldn’t be effective: we’re not talking homeopathy here. But I still do want to try to figure out how I should evaluate the acupuncture treatments, what I should look for.

And, of course, what makes that evaluation particularly different/interesting is that, in a Kuhnian sense, Miranda’s acupuncturist is working in a different paradigm than Miranda’s other doctors use, or than I’m comfortable with. (I get some experience with a Traditional Chinese Medicine paradigm in my Tai Chi classes, but I don’t feel that I understand it at all well even as an outside observer, and I’m certainly not able to act natively within it.) On the one hand, that makes me inclined to treat Traditional Chinese Medicine with more respect, with an assumption that there’s something to the richness of the paradigm, even if it’s wrong in some aspects. But, on the other hand, I’m not sure that’s a justified assumption at all: I also have the belief that there’s nothing of significant value in pre-Copernican astronomy, even though it’s a paradigm that was developed over the course of centuries! At any rate, the difference in paradigms raises the question of how I’ll be able to tell things that seem wrong because they’re from a foreign paradigm apart from things that seem wrong because they don’t work; or, for that matter, things that seem right because they’re explained in terms of a rich conceptual framework apart from things that seem right because they work.

 

Question zero, then: can we see any concrete effect at all from her acupuncture treatment? There was actually a surprising effect during her first session, namely that Miranda’s hands were a lot warmer. Which is enough to disprove a null hypothesis, but not directly relevant to her therapeutic goals; so next we turn to question one, whether we see an effect on headaches. And there, too, we have an answer: she also had something of a migraine that day (not a horrible one, but definitely noticeable), and her headache decreased significantly over the course of that first treatment, beyond what Miranda was used to from chance variation.

So that was a real success. The other interesting aspect of the session was what all it entailed: I’m not sure exactly what I expected, but I’d assumed that it would basically just be needles. It wasn’t, though: the acupuncturist had some neck exercises that he wanted Miranda to do while she was getting the treatment (and he encouraged her to do those outside of treatment, too, saying they would lessen the pain even without acupuncture), and he also did some physical manipulations with her arms as well.

 

That physical therapy aspect made me actively happy to continue. Because one aspect of current Western medicine that I don’t feel entirely comfortable with is its focus on pills and similar techniques: it feels to me like the (laudable) focus on experimental evidence for techniques imposes a bias that makes doctors less likely to focus on other techniques, techniques where it is harder to gather crisp experimental evidence.

So, while I’m happy for Miranda to keep on taking pills, I also don’t have a particular reason to believe that a chemical approach is the sole potential route to success; and if her acupuncturist is not only taking a completely foreign route (acupuncture) but pairing that with a different, less foreign route (physical adjustments), then that feels like it should increase the odds that, somewhere across all of the approaches we’re taking, we’ll find a treatment that works.

 

In that first session, Miranda’s acupuncturist was focusing on her neck, specifically on one of the vertebrae there, and that focus has continued: there’s something around one of the vertebrae that he thinks is enlarged in a way that causes problems, and he’s adopting techniques to try to shrink it. Which sounds totally plausible to me: I can easily translate vertebra problems into ideas like a nerve being pinched or blood flow being constricted, and I can imagine that that could affect migraines. (Admittedly, maybe I’m overindexing on spinal issues because of my own back problems.)

But I also just like seeing a combination of repeatedly focusing on one metric and seeing short-term pain relief actually result from the techniques that he’s using and recommending there. (Miranda reports that doing the neck exercises helps moderate the strength of headaches outside of acupuncture, too.) I like this because it gives a testable hypothesis; and I like that it gives me hope that it could provide long-term relief instead of just short-term relief, because if this vertebra hypothesis is correct, and if he can shrink whatever he’s looking for there and have it stay shrunk, then that should help the pain long term. (Which would give a positive answer to question two: does the treatment reduce headaches over the long term, not just the short term?)

 

The neck treatments are easiest for me to accept within my conceptual framework. But a lot of the acupuncture needles aren’t actually in her neck (in fact, I don’t think that normally any are there, though I can’t remember for sure): they’re on the top of her head, on her feet, on her back, or on her hands, and they’re in different places from week to week. When I asked about this, her acupuncturist explained it in terms of creating a path for the qi to flow, if I’m remembering correctly; I’m honestly not entirely sure what he was looking for to decide which pathways to enable which times.

And this gets back to the concept of working within different paradigms: clearly he’s working within a different one than Miranda’s western doctors. There are a few possibilities here:

  • The differences in needle positioning from week to week are all for show.
  • The differences are for a reason, but not a well-thought-out one.
  • The differences are a manifestation of his expertise within his paradigm, but that paradigm isn’t an effective one.
  • The differences are a manifestation of his expertise within his paradigm, and that paradigm is an effective one.

I could be wrong, but I’m fairly sure that the first two of these are not what’s going on: her acupuncturist does seem to me like he is an expert, I think we were probably fairly lucky to have found him.

It’s harder for me to decide between the third and fourth explanation. I would like to believe in the idea of qi; but I also have a hard time figuring out how there could be a concept like that that doesn’t map directly to some sort of standard Western medical concept (e.g. blood flow) and that we haven’t figured out how to make machines that can detect it.

So I can’t really justify that fourth explanation; we’ll see if, once I’ve done more Tai Chi, I’ll have had more experiences that cause me to believe in qi as a useful analytical concept, though.

I guess there’s a bifurcation of the third explanation, though: it could be that the paradigm is incorrect but effective? (Which, I guess, would mean that the qi explanation is wrong but he’s still doing something useful in putting the needles in different places at different times, and for deep reasons rather than just because, say, variation is effective no matter the details of that variation.) I’ll have to think about that more, though: it may be that saying “the paradigm is incorrect but effective” actually just means “the paradigm is an accurate paradigm but people outside the paradigm don’t understand it”. And that, after all, is the point of a paradigm: you need to shift into the paradigm to be able to understand it!

 

Ultimately, I just hope that the acupuncture is effective, because I really want Miranda’s migraines to become and stay manageable. Or rather, I hope that one of the treatments is effective; even if that ends up being the case, we probably won’t be able to tell which one made the difference (or if none would have made a significant difference alone and a combination was necessary). It would be nice to have a better idea of whether acupuncture works, but we’re not in a situation where we have the luxury of taking an approach designed to maximize scientific learning.

mass effect: andromeda

July 5th, 2017

Mass Effect: Andromeda starts off by dropping you into the middle of the action. You’re part of a large-scale colonization mission in a new galaxy, things have gone wrong upon arrival, you’re part of a team sent out to investigate, and your ship has crashed. This sort of opening is one of the series’ strengths: each game alternates between active sections and quiet sections, and when it’s on, it’s on.

The game puts on the brakes right after that beginning, though. You immediately learn a scanning mechanism, and so, instead of dealing with the consequences of the crash, you’re stopping constantly to look around in your environment. Which slows things down, but at least it does so in a way that fits into the narrative context: it’s your first time on a planet in a new galaxy, so of course you’re going to look around! And it’s not so far out of character for the series: Mass Effect games have always been dumping facts into your codex, so while it’s more extreme here than in prior games, it’s a difference in degree instead of in kind.

One of the codex entries that you have the option of reading discusses first contact protocols: you’re supposed to do everything you can to avoid shooting at new species that you meet. And, sure enough, a few minutes later, you meet an alien in a tension-ridden setting.

At which point the first contact moralizing goes out of the window: your options turn into shoot first or else let them shoot first and return fire immediately. Which wasn’t surprising: you’re playing an action series, it’s pulpy, it’s a lot more in character for the series to introduce an alien species that’s mysteriously evil than to build a game where finding ways to avoid shooting was a real possibility. I’d like to see a game that seriously grapples with that sort of First Contact question, but I don’t expect Mass Effect to be that game.

 

Equally unsurprising but more disappointing was what that intro mission gave me next: the alternate objectives. If this were from the original Mass Effect trilogy, you would have had a straight shot through the level, and the game would have used that to excellent narrative effect. But this is the BioWare that made Dragon Age: Inquisition; that meant an open world map, multiple objectives to choose from, and with some of those objectives in active conflict with the dramatic direction of the level.

I thought about skipping the objectives that didn’t fit into the flow: I didn’t trust the game to give me alternate objectives without losing the flow of the level, and I figured that I’d have another chance to explore the world later. Ultimately, though, most of the objectives were close enough to my path that I completed them. And, as it turns out, you actually can’t return to that initial world; I have no idea why.

 

I liked the game as a whole more than the initial segment; but this game really is not what I’m looking for in a Mass Effect game. Specifically:

  • They’ve added back in a manually managed inventory, and made it worse by sticking in a crafting mechanism.

The environments aren’t quite as littered with objects to collect for crafting as Dragon Age: Inquisition’s were, but it’s pretty bad: the scanning (which feeds into the research part of crafting) is a constant distraction, and there are ores to gather to feed into the construction part of crafting. And, of course, there’s an augmentation slot aspect of crafting, so you can’t even do a straightforward survey of weapon types and focus on the ones that fit your playstyle: there’s a significantly more constrained resource on top of that.

  • The ability usage in combat was surprisingly restrictive.

They give you full access to the ability tree; there are some nudges towards limited specialization, but that’s fine, it still sounds like an improvement. Except that then there’s the way you use the abilities in combat: unless I’m missing something, you don’t have easy access to all your abilities during a fight; you have to put your abilities into loadouts, so for any given fight you can only easily get to three of your abilities. So, in practice, what that means for people who don’t really want to dive into combat is that we pick our three favorite abilities and never use any others. Which is a disappointment: I don’t want to become an expert in the combat system, but I’d like to play around with it more than that; instead, a system that could have been freeing compared to earlier games turned out to feel more limiting.

  • The world building was way too pulpy.

I don’t expect a Mass Effect game to be the most subtle in terms of the questions it asks, but Andromeda is a step down, and the First Contact question that the opening sequence raises and then abandons is at the core of that. You’re exploring a new galaxy, trying to build a home there with no backup; so if you run into trouble, you’re screwed. And it turns out that things go wrong right from the start: you don’t have to make trouble, it’s finding you already.

I didn’t like the way you had to fight with the Kett right at the start, but I can accept one unexplained bad guy. But there were these machines you have to fight, and I start to have more questions: we’re trying to learn about a new galaxy, focus on learning! And there’s a friendlier species that you meet in the galaxy; mostly you’re on good terms with the Angara, but there’s a group of them that you need to fight as well. And then there are groups of criminals and other breakoff factions who came with you from the Milky Way: you get to kill your fellow humans too.

I could justify any of these individually, or maybe any of them other than the last one; but, except for the Kett, they all work squarely against the dramatic setup of the game. You’re a small, isolated group of settlers from the Milky Way, and are in an environment that’s already tearing you apart; given that, you need to understand your environment, you need to make allies, and you need to just stay alive! I’m certainly not going to claim that humans have historically always behaved peacefully when exploring new territories (quite the contrary), but this game isn’t setting up thoughtful historical analogies, either: to me, the thought process felt like “this is an action game, so we need to be able to shoot people, so shoot away”.

  • Too many fetch quests.

You show up on a new world, start at a big settlement, and everybody has something for you to do for them. And then you explore the world more, run into smaller settlements, and are given more tasks that actively work against the grand scale of the plot. (And then there are the cross-world tasks: find these minerals hidden in out-of-the-way places.)

Honestly, I’m surprised I didn’t mind this more: it turns out that, as long as I was given a reasonable emotional reason to go along with a quest, I was willing to do it. The only ones that turned me off were the ones that told you to do a certain number of some task (find five crashed drones, or whatever); so I skipped those.

Which, you could say, is a strength of the game: in this as in many other areas, the game gives you a range of possibilities to explore, and it’s up to you as the player to decide which parts of that range you actually do want to explore. I think that’s mostly a cop-out, though: games as a whole give me a fine range of possibilities, so once I’ve picked a game to play, I want it to be the best game of its kind that it can be, rather than a half-assed mashup of lots of different options that it expects me to choose from.

  • The plot is either pretty good or pretty bad, depending.

It’s pretty good compared to the overall range of games I play; but this is a BioWare game, so a good plot is what I expect. And, compared to other BioWare games, this one was definitely on the weak end: the overall story didn’t raise any interesting big questions (or, to the extent that it did raise such questions, it actively shied away from taking them seriously), the companions and loyalty quests on average didn’t reach the quality that I expect, and the major plot missions felt by-the-numbers.

 

Having said all of that: I’m still happy to have played the game. But I’m happy in the sense that the worst BioWare game is still a pretty good game; and this is the worst BioWare game I’ve played. This plus Dragon Age: Inquisition makes it clear that the studio is going in directions that I’m not interested in, directions that play against their prior strengths and that aren’t executed well enough to draw me in; and Mass Effect: Andromeda is a worse Mass Effect game than Dragon Age: Inquisition is a Dragon Age game.

So BioWare is squarely off of my “will buy without asking questions” list. (Even for their RPGs; and I almost certainly have no interest in whatever Anthem turns out to be.) And I’m starting to seriously wonder to what extent BioWare still exists: it’s been long enough since they were acquired by EA for their previous culture and knowledge to have been significantly diluted.

They had a glorious run, though…

text editors and markdown

June 17th, 2017

When I got my new Mac, Sublime Text started occasionally crashing on me. And, while I do like Sublime Text more than Emacs for non-programming typing, I wasn’t in love with Sublime Text, either: it still feels like a cross-platform editor that isn’t focused on presenting a clean interface. Also, at about the same time, I was thinking about moving my blog post writing out of the WordPress web interface, to something outside of a web browser with a nice clean interface. (Not that WordPress’s focus mode isn’t good for that latter criterion!)

So I poked around a bit, looking at macOS-native editors that were focused more on writing than on programming. I wasn’t sure I was going to use it exclusively on prose (I was thinking I might also use it to maintain the various lists and what not that I have as part of my GTD setup), but blog post writing was the main initial use case.

 

The editors that I came across supported Markdown. Which makes sense: I was looking at plain text editors, but that doesn’t completely remove the desire for styling and links and what not, and Markdown is the consensus choice there. But it was a change of pace, since, for this blog, I’ve actually been writing the little bits of HTML in the posts (links, etc.) by hand instead of using WordPress’s rich editor; which is fine: it’s not particularly tedious.

But Markdown is fine too; and, actually, if my goal is to have a clean interface, then Markdown is better than HTML. The one thing that gave me pause there is that I use <cite> tags for book names and the like; the truth is, though, that while I was convinced by the arguments for semantic tags when I first saw them decades ago, in practice I’ve never wanted to style <cite> differently from <em>, and I’ve never written code that goes through a document and pulls out the <cite> tags for bibliographic purposes or anything. So, ultimately, I don’t see HTML as being about semantics any more; I’ll live with putting underscores around book names. (Using asterisks for italics still feels weird to me, though: asterisks should be for bold!)
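(To make that concrete, here’s a made-up before-and-after, with a placeholder book title:

    <!-- the HTML I’d been hand-writing: a semantic tag for the book name -->
    I just finished <cite>Some Book Title</cite>.

    <!-- the Markdown replacement: underscores render as <em> -->
    I just finished _Some Book Title_.

Both come out italicized in practice; the second just drops the pretense of semantics.)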

 

The first editor I tried was Ulysses. It’s actually a little more ambitious than I necessarily wanted: it looks like it’s designed to let you write an entire book in it if you want. And I wasn’t sure if I wanted something with a multi-pane model, though given that you can easily hide the panes other than the one you’re writing in, that wasn’t really an active strike against it.

When I gave Ulysses a try, I enjoyed it: composing was pleasant, and exporting to HTML so I could post to my blog wasn’t too bad. The main downside was that I couldn’t type raw HTML. That mattered while I was still unsure what to do about <cite> tags, but I got over that; a little more problematic was that I’m used to creating an extra blank line with a paragraph containing only &nbsp;, and I couldn’t figure out how to do that in Ulysses.
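(For the record, the trick in question is just this, in raw HTML:

    <p>&nbsp;</p>

As far as I know, most Markdown converters pass raw HTML blocks and entities through untouched, so a line like that normally survives the trip to HTML; Ulysses was the one that wouldn’t let me type it.)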

Still, it seemed good enough; I figured I’d try other options, but if Ulysses was where I ended up, then great. Except that then I used it to edit my GTD reference file; and, when I looked at the file through another method, I found that Ulysses had moved all of the links that I’d pasted in raw to the end of the file (rewriting them with one form of the Markdown link syntax), even in parts of the file that I hadn’t touched!
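What it did, as far as I can tell, was rewrite pasted links as Markdown reference-style links. A hypothetical example, with a placeholder URL: a line like

    For details, see http://example.com/some-page.

came back as

    For details, see [http://example.com/some-page][1].

with the corresponding definition

    [1]: http://example.com/some-page

collected with all the others at the bottom of the file.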

And that really wasn’t cool – partly because I don’t like that sort of messing around behind my back, but also partly because, ultimately, I want a plain text editor rather than a rich text editor. And, philosophically, it seems like Ulysses is a rich text editor: it uses Markdown as a representation format, but it doesn’t want you to care about that representation. Which could actually be fine for blog posts, but does limit the contexts in which I’d be willing to use the editor.

 

Next on my list to try was Byword. It’s simpler than Ulysses: no three panes, no focus on projects, it has you editing one file at a time.

And, it turns out, it’s much happier than Ulysses to accept whatever you type. If you type HTML tags or entities, it’ll pass them through unscathed during HTML conversion; and if I open a file with a naked link in it, it leaves the link in place instead of moving it around.
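(A hypothetical example of what I mean: feed Byword a line that mixes Markdown with raw HTML and an entity, and the exported HTML should come out more or less like this:

    I just finished <cite>Some Book Title</cite> &amp; _loved_ it.

becomes

    <p>I just finished <cite>Some Book Title</cite> &amp; <em>loved</em> it.</p>

with the <cite> tag and the entity left alone.)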

Byword claims to be able to export to WordPress installations; when I was first looking at that, it was a paid extension to the app, but they made it free a couple of days later. Which is just as well, because it doesn’t work when publishing to my blog; I’d actually e-mailed Byword support while it was still a paid extension to confirm that it should work for self-hosted blogs, but it doesn’t work for me. No idea what’s going on there; but the amount of work that feature would save is trivial anyway: it’s very easy to copy and paste the HTML.

 

So I stopped my search there, and I’ve written my last ten or so blog posts in Byword. And it’s been nice! Honestly, not that much nicer than just writing in the WordPress editor directly, but still: I do somewhat prefer typing in a separate app, in a window with basically my words and nothing else.

Arguably more importantly: from a philosophical point of view, I’ve now switched to Markdown as the way to go, instead of HTML or ad-hoc plain text. Slack and GitHub had been moving me that way anyways; good to have that formalized.

fire emblem heroes

June 13th, 2017

The question that free-to-play games always raise is: why am I playing this game? And I don’t mean that in a dismissive way, as an implication that playing them is a waste of time: if you can come up with a good answer to that question, then great! But free-to-play games do try to nudge you to keep on playing for their own reasons, so you always have to do a sanity check as to your motives.

When I started playing Fire Emblem Heroes, I did have good reasons to play it. I like the core Fire Emblem gameplay; and, actually, when I stop playing games in the series, it’s usually because the levels are getting too intricate. Through that lens, Fire Emblem Heroes’ four-on-four levels are an active virtue: they shrink the scale down, so levels never get out of hand. Instead, the game focuses on tactical details; I appreciated that focus, and I learned more the more I played.

The other side of the game is the collection aspect. Which wouldn’t have done anything for me six months earlier, but after playing Tokyo Mirage Sessions, I was more than happy to see some of my favorite characters. (Though I was also a little disconcerted to see the differences in their presentation between the two games!) And sure, it’s fun to hope to pull five-star characters, to level party members up, to try out temporary challenges.

But, at some point, I’d gotten enough five-star characters to fill out my team, I decided that it wasn’t going to be worth it to me to try to build a team that was better on whatever metric I was looking at, and I felt that I wasn’t learning enough from the levels. So I stopped playing; and that was the right choice. But playing the game daily for a month and a half was a fine choice, too.