
move your dna

January 14th, 2018

Move Your DNA is the latest book I’ve read in order to try to understand how to move / position my body in a healthy manner, and it’s quite interesting in a way that, I think, relates to Kegan’s stages of understanding. Because the main point of the book is that your body, even when it is doing things you don’t like (e.g. causing pain because of bulging lower back disks!), is responding as best it can to the circumstances you put it in. And, as a corollary, doing any one thing (taking an action, being in a position) over and over again, almost no matter what that one thing is, isn’t going to maximize health, since your body will be over-adapted to the forces that that context creates. (Again, my lower back problems, which I suspect were a result of my body’s attempt to deal with the amount of sitting that I did and the particular posture I used while sitting.)

And that’s all well and good, and it certainly makes sense. But it’s a sort of making sense that can easily lead in a Stage 3 direction: without the guardrails of Stage 4 systematicity, it’s too easy to interpret whatever sort of common-sense actions you’re engaging in as an appropriate amount and type of shaking things up, without really understanding what’s going on. And, of course, it’s an answer that raises questions about the wisdom of following Stage 4 answers: if you’re not careful, those can end up warping your body in different ways!

 

For example: how much does Move Your DNA suggest that I should worry about the Gokhale method being bad because it instills new habits which could cause their own damage if followed regularly? I’m honestly not sure what the answer is to that; right now, though, I’m not too worried. Bowman isn’t agnostic about different positions, either: there are some that she likes more than others; and it feels to me like Gokhale’s idea of a desired neutral body position (which I have found very useful) matches reasonably well with Bowman’s starting points?

I’m less sure about some of Gokhale’s other recommendations, the ones that are designed to stretch your lower back in particular. They feel like they replace one set of external forces that our anatomy isn’t well adapted to with another set of forces that doesn’t match anything that would have been regularly encountered in the evolutionary environment, and that seems potentially dangerous if taken to an extreme. But the flip side is that, done appropriately, they feel to me like therapeutic exercises rather than something you’re supposed to stay in regularly for long stretches of time, and in that context they’re probably fine / good?

And certainly some of what the Gokhale method has revealed to me matches some of Bowman’s major points. E.g. Gokhale has some techniques to alter your shoulder positioning; they have definitely had an impact on my shoulder positioning, in a way that feels helpful to me, but more than that they have revealed to me how much I (and other people I see as I look around) have been trained to hunch, to position our shoulders and rotate our arms towards the front of their possible range of motion, which does feel off. And it’s actually still the case that, now that I’m aware of my shoulder positioning, my two shoulders don’t feel symmetrical; I assume that that’s a consequence of differential arm use. (Though who knows: I spend an awful lot of time typing, and there my positioning is symmetrical! At least I think it is…)

 

The other body positioning and exercise regimen that I have to compare Bowman against is, of course, Tai Chi. Bowman is a proponent of exercise, but she also cautions repeatedly that repetitive exercise (e.g. lots of running, especially on treadmills but also on flat roads, though she likes it quite a bit more if you run on varied terrain) doesn’t bring the benefits that you’d hope. It conditions your body to expect the repeated context that that exercise provides, and while having a second repeated context added in isn’t a bad idea, it still leaves out a whole range of potential forces that your body could be adapted to.

I’d like to think that Tai Chi provides a large enough range of movements and forces on your body that it’s less susceptible to Bowman’s critique than some other forms of exercise, but who knows. The (Chen Lao Jia) first form has 75 parts to it; there’s a lot of repetition in there, but still, it’s asking you to do a reasonable range of different things with your body? And the weapon sets add still more forces (e.g. the Dao, which forces you to respond to the inertia that comes from its weight), and I’m finally experienced enough to be able to start learning the second form, which adds still more into the mix.

Though the flip side is that Tai Chi does have its repetitive practices; and, in fact, it sometimes takes those practices to extremes. (Which is the Chi part of the name: it’s a character 極 that means extreme!) For example, standing meditation, where you stand in place for, say, 20 minutes at a time, is something that my teacher recommends.

Arguably, though, Bowman’s book sheds an interesting light on standing meditation: being in one position for 20 minutes is going to have an effect on your body, but if you’re spending that 20 minutes focused on something else (typing away, say), then you’re not going to be paying attention to those physical effects or trying to shape them in a way that’s helpful instead of harmful. Whereas, with standing meditation, your physical positioning and your understanding of your body’s reaction to it are part of the goal, as is tweaking your positioning as your understanding develops so that you can guide your body into a positioning that helps you.

 

Hard to say; I don’t have nearly a deep enough understanding of the consequences of what Bowman is saying to be able to use it to analyze other activities at a more-than-superficial level. And, while I’m curious about the book, it’s rich enough that really diving into it would take quite a bit of time; and I’m choosing to only budget so much time to that sort of physical experimentation; right now, Tai Chi is eating up (and, actually, slightly expanding) that budget.

But I do think that Move Your DNA is potentially a pretty interesting / important book? And it’s definitely good to get some encouragement to try opportunistically moving in different ways: I wrote a bit of this post while squatting instead of sitting in a chair, and I’m straying off of sidewalks a bit more. And one concrete effect that the book has had is that it’s had me notice my feet, and the signals coming through their soles, more. I’m sure it’s really a culmination of the Gokhale book and Tai Chi, but when Move Your DNA started talking about feet as highly flexible and able to sense quite a lot if you let them, I realized that, actually, if I paid attention to my feet, I really could feel quite a lot through them and have them in a range of positions even inside my shoes.

Which wasn’t so much the case a few years back when I was experimenting with lightweight shoes: back then, I had a hard time really feeling like my feet were responding to the environment, and if I went too far in the lightweight direction, I felt like I was impacting my body in unfortunate ways. Maybe it’s time to give that a try again? Bowman actually has written a couple of books on that subject, too…

undertale

January 7th, 2018

Undertale is charming in a way that I found surprisingly refreshing: it’s telling a story, the story leads off with a major conflict but almost immediately shrinks to a personal level, and then the personal level gets a lightweight touch. And that doesn’t combine to make the game feel slight: it makes the game feel human.

And that’s all pretty remarkable, but I don’t have much to say about that aspect of the game. So, instead, I’ll talk about another design choice Undertale made: your ability to make it past enemies by empathizing instead of fighting.

 

I’d heard about that going into the game; I’d also heard that a pacifist route was quite difficult, and so for a first playthrough you probably shouldn’t go full-out pacifist? But it wasn’t that bad at the start: confusing, yes, because I got out of a battle but I didn’t get any experience so I couldn’t tell at first if I’d instead triggered some sort of escape mechanic?

Eventually, though, I realized that there were changes in the UI (yellow text, basically) that signaled that I’d triggered the ability to resolve battles peacefully; so yes, I was resolving conflicts, I wasn’t just avoiding them. I did wonder about the consequences of not gaining experience, but I figured I’d worry about that more when the time came.

Then I hit a boss battle, and Undertale seemed pretty insistent that talking wouldn’t work to get past that conflict; after trying a few times, I concluded that the game was telling the truth about that fight, so I switched over to fighting. Which wasn’t too bad; and, after that, I jumped from level 1 straight to level 5 (and from 20 to 36 hit points).

 

That was pretty clearly the end of a chapter of the game, and it seemed like a surprisingly large jump in my level, more than would be justified by a single fight. So at this point my mental model was that the game levels you up somewhat at the end of chapters even if you’re taking a pacifist route. Which makes sense as a design choice; hopefully the game won’t descend into the BioShock approach of making the two paths numerically identical, but being able to continue in the game as enemies get leveled up seems like a good idea.

My mental model was not, in fact, correct, however: there is actually a hidden pacifist approach to that boss. (Which annoys me a bit — I don’t really like that sort of active misdirection or needing to resort to GameFAQs — but only a bit.) But it took me quite a while to realize that I’d misunderstood what happened: my one instance of leveling up was after a special event, so I wasn’t sure what qualified as a chapter-ending boss fight in my hypothetical model. Eventually, though, I’d made it to enough sorts of areas and fought enough unusually strong enemies that it was clear that I wasn’t going to be leveling up every so often, and a look at GameFAQs confirmed that I’d missed a pacifist approach to that boss.

Which made me a bit sad, if only because that one boss was a quite nice person, and probably the nicest person that you actually fight against, so I wish I hadn’t killed them! The thing is, though, having 36 hit points instead of 20 hit points was really useful: it let me make it past strings of normal enemies even as I approached the end of the game, and while, as the game went on, I did have to retry boss fights and use items during boss fights to boost my health, the boss fights remained doable.

So, ultimately, the game probably would have been much less pleasant if I had gone completely pacifist; and, if I hadn’t, my choice would have been to occasionally slaughter non-boss enemies even though that probably wasn’t necessary (which wouldn’t have felt right) or to fight one of the other bosses (which might still have been difficult, and which might have been less fun given that I actually enjoyed some of the boss fight interactions). Given all that, I probably accidentally mostly made the right choice (which, of course, the game strongly nudged me to do); I just wish that first boss hadn’t been such a nice person!

 

Ultimately, though, it turned out that 36 hit points wasn’t quite enough: I made it up to Asgore, one of the final bosses of the game, I was already at the edge of my abilities, I was a little low on health items, and there wasn’t a shop nearby to allow me to easily replenish. I took a couple of swings at him, but I decided that I wouldn’t enjoy the combination of backtracking to get items and repeatedly fighting Asgore to improve my skills that would be required to make it past him. (In retrospect, I guess the third option would have been to level up by fighting some wandering monsters; for whatever reason, that didn’t even come to mind at the time.)

So I decided to give up then and stop playing. I almost always complete games that I start, so in general I would expect this to leave a bad taste in my mind. But, with Undertale, somehow that actually felt satisfying. Basically, I was playing as a kid going through a world of monsters: I was trying to talk my way out of trouble, and I’d made friends along the way, but ultimately, there were a lot of scary folks around. And also, the monsters had been treated quite badly by humans, and hence had reasons to want to attack those humans. So having the game come down to a final battle between a king who wants to do best for his people but who doesn’t feel good about killing a human kid and a human kid who’s gotten surprisingly far but is really in over his head, and then having the king kill the kid, seemed to hit the right note of melancholic response to a situation that’s bad for everybody.

 

That’s how my playthrough of Undertale went, and it left me with a huge amount of respect for the game. Because the almost-universal default for RPGs (and, indeed, for a wide range of games) is for you to play as a mass murderer; usually as a psychopath, occasionally as a person who expresses regret but then returns to slaughter. And I don’t feel at all comfortable with that these days.

One possibility for dealing with this is to allow a “good” route, and to provide mechanisms where the good route isn’t that hard, or, in the worst cases (BioShock again, though I wouldn’t call that game’s good route actually good in any serious moral sense), isn’t any harder at all than the bad route. But a choice like that is, from my point of view, a moral abdication: it’s letting you feel good about yourself without seriously grappling with the nature of the mass-murder route. And, of course, not letting you have that choice is a different sort of moral abdication: only allowing the possibility of choosing horrifically immoral behavior or not playing the game at all is not successfully confronting evil either.

Undertale, in contrast, confronts you with a choice but doesn’t sugar-coat it: you can be a decent person, and in fact there are a lot of other people in the game who are also decent people and who explicitly decide not to attack you even when told to do so. But, if you do that, the game is going to be quite a lot tougher; you can choose what sort of person you want to be, but that choice is going to have consequences, and you won’t like them.

 

I’m not going to present Undertale as the ultimate state of what I think moral choice in games should be like. For one thing, the RPG legacy of constant violent encounters is still there; for another thing, in the real world, decent behavior generally does actually have an impact, it’s just that the impact is unpredictable and plays out in a way that is often functionally quite different from the impact of bad behavior.

Though of course, in the real world as in Undertale, lots of people don’t benefit at all from good behavior in the sense that RPGs like to measure, with some clear external number (e.g. your bank account balance!) going up. So actually maybe Undertale is doing better than I gave it credit for in the previous paragraph: maybe it’s doing a good job of modeling the experience of structural oppression, where you’re constantly under attack, where you might be able to make it out of some individual instances of that attack with your soul intact by dancing around cleverly enough that you don’t get hurt too much, but where, ultimately, for almost everybody, the sum total of those assaults will wear you down too much for you to win. (Or at least Undertale’s pacifist route is behaving that way; the route embracing violence becomes much less realistic in that reading, because that route should also turn out difficult.)

 

Interesting game. (And charming, too, even though I didn’t talk about that much!) I’m very glad it’s out there, I’m very glad I played it, I hope it will spark thoughts in other game developers.

console upgrade cycles

December 29th, 2017

When I listened to video game podcasts talking about the Xbox One X, the hosts were generally disappointed because they didn’t see where the console fit in: there aren’t any games exclusive to the Xbox One X, and people who really want the best graphics will play games on PC, so who is going to buy it? And maybe they’re right from a business analysis point of view (but maybe not, I’m not at all convinced of the business acumen of the enthusiast press), but it felt weird to me to listen to that, because what those podcasters saw as problems felt to me like benefits.

Basically, what I want is to have a video game console family that takes its cue from the iPad / iPhone. So I don’t want big discontinuities: I want to be able to keep on playing my favorite games (or, for that matter, to be able to try older games for the first time) instead of, in the good case, having to rebuy games to play them and, in the bad case, not having access to certain games at all unless I plug in old hardware and hope that it still works. And I want to be able to do that without static hardware: if I’m replacing my hardware (which does wear out, after all!), I want to be able to get something with technology from the last year or two, instead of potentially being stuck with five-year-old chips. (Or, alternatively, having the choice to be able to buy two-year-old technology at a discount also sounds good; it’s not a choice that I personally normally make, but it makes sense in a lot of contexts.)

I could get that continuity with a PC, but that misses another aspect of the iPad / iPhone: I like being able to buy a game and have it just work, and to be confident that it’s not going to interfere with the running of the system as a whole or leave strange tendrils around if I uninstall it. And I also prefer the way consoles fit into my life: PCs feel much more isolating in that regard.

 

I’m not completely against discontinuities in console generations: if there’s a physical constraint that you want to change, then embracing that change is probably more likely to be successful than bolting it on. But there’s only one manufacturer who has the vision to (sometimes!) pull that off. So sure, Nintendo can keep on doing its own thing; but what I want out of Microsoft and Sony is incremental improvements based on PC hardware, based on the controller scheme that’s been working fine for the last two decades, and with an operating system that’s evolving to meet current needs for game distribution and online interaction. Or, in other words: more consoles like the PlayStation 4 Pro and the Xbox One S / X.

I am a little curious how this will play out from a business point of view, especially if Microsoft sticks with an incremental approach while Sony decides to jump to a PlayStation 5 that doesn’t allow two-way compatibility across generations for most games. I’d like to hope that Microsoft will start quietly gaining ground from people who are buying a new console when a prior one breaks: Microsoft will more consistently have better options available across multiple price/performance preferences.

Of course, what really matters to me is not having to rebuy a thousand bucks worth of Rocksmith songs and then having to level them up again…

nuclear war

December 28th, 2017

After Trump won the election, I was worried that the United States might actually slide into fascism. It’s a little over a year later, and, honestly, I’m still worried about that, but now I have a new worry: that we might have a nuclear war, with both North Korea and the United States launching successful attacks, probably with the United States firing first. And what especially worries me here is that I can’t actually see a good reason to think this won’t happen, and in fact it feels like it could happen literally any day now.

When worrying about fascism, I could at least be optimistic that political institutions would successfully stand against it, and that, as a last bulwark, protests could be successful. I’ve seen both bad and good signs in both of those potential protections against fascism, but at least there’s a case there.

With nuclear war, though: what exactly is the case that Trump won’t launch a missile against North Korea? That he’ll be horrified by the potential consequences? Trump has consistently demonstrated an almost total lack of that sort of reflection. That he’ll want to think hard about consequences of his decisions? Trump has shown a staggering lack of interest in getting real information, preferring instead to be led around by blowhards on TV. That he won’t respond emotionally to events? Over and over again, Trump has shown that he can be baited by a tweet. That his advisors will convince him otherwise? I’m sure that some of his advisors think that a nuclear first strike would be a horrific idea, but I’m equally sure that some of them think it’s a great idea. That somebody in the military chain of command would refuse the order? I see no reason to believe that. That the Constitution says that Congress has the power to declare war, not the President, so surely he wouldn’t be able to? Okay, I’m just including that one as a joke, and I can’t blame that one on Trump or the Republican party: we have decades of active bipartisan undermining of that portion of the Constitution.

There is, I guess, one potential source of optimism: Trump talks up stuff (whether past actions or future actions) an awful lot more than he actually does stuff. And, actually, now that I type that out, that really is a reason to be optimistic. Hmm…

 

If we launch a nuclear missile and North Korea fires back, what next? On a purely selfish level: I can imagine North Korea deciding that a West Coast US target would make sense, in which case San Francisco is a logical target. I assume I and my work are far enough away from San Francisco to survive that; I’ll just hope that North Korea doesn’t decide that Silicon Valley is a better target. I don’t think there will be too many nuclear exchanges past that: I doubt North Korea has all that many bombs and missiles, so I guess it turns into a conventional war after that? (Which, sadly, is something we’ve seen a lot of.)

I was going to say that I’m also not worried about other countries joining in the exchange; and I do think that that’s true. (Though other countries might be targets of North Korean attacks: people in Seoul probably have the most cause for worry, and Japan as well. Or rather, the most cause for worry outside of North Korea itself.) But I don’t really see China being complacent about the United States taking military action on their border, against a state that China has supported in the past; I have no idea exactly how that would play out in practice, though.

 

Setting aside the effects on me locally, what about on the nation as a whole? Probably Washington D.C. is a more likely target; I don’t know if we’re sure that North Korean ICBMs can reach that far, but it sounds like they can? And if that happens, I have no idea what the consequences will be for our system of governance, but I can’t imagine they would be good. Even if North Korea chooses another target, though, we’ll have millions (hundreds of thousands if I’m optimistic) of deaths in the country; we’ve had a decade and a half of overblown reaction to a couple of thousand people dying in an attack, and this would be much worse.

And, of course, we have a long-standing tradition of using wars and other forms of demonization as an excuse to ram through favored political platforms, especially repressive ones; this would absolutely be no exception.

I’d like to think that Trump getting us into war would make more people realize that he was an awful choice as president; but I think more people would respond by saying that a war isn’t time to argue about the president. And I also think that the media and government would work together to downplay protests, even if they turn out to be massive.

 

That’s domestically; what about internationally? We’ve already lost a lot of international respect over the last year; but I’d have to think that a nuclear first strike would cause a lot more countries to treat us as actively dangerous. Which, I’m sure, third world countries already do: we’ve spent the past century and more going around the world overthrowing governments and installing puppet regimes. But if more powerful countries start seeing the US as a source of instability rather than stability, that feels like a state change? And if it leads to serious sanctions (which, I think, I would hope it would?), that would ratchet up the tension.

 

I dunno. I guess, typing this out, I’m actually a little more optimistic, because I really do believe that Trump is almost all bluster, that he himself doesn’t do much. Not that he didn’t, say, support the tax bill or the attempt to get rid of Obamacare, but Congress was driving both of those; he probably had more ties to the Muslim Ban, but even there, I suspect it was advisors driving matters in a way that they wouldn’t be able to drive a nuclear first strike so easily. So I guess this is the time to be grateful that we have a president who far prefers to be at his golf course instead of doing any actual work…

nier: automata

December 25th, 2017

There was a part a few hours into Nier: Automata where I got actively angry at the game. I was going through the desert, the robots were telling us not to hurt them, and 9S was telling me to ignore what the robots were saying: they were just repeating words without any meaning behind them.

To which my reaction was: seriously? I get it, we’re going to come to the realization that we’re actually the bad guys here, but can’t you be a little less heavy-handed about it? And maybe, you know, don’t force us to engage in mass murder while we’re coming to that realization?

Fortunately, the next two segments of the game helped me get past that point: the amusement park showed robots that weren’t attacking me, and while 9S was dubious about that, he was willing to accept that maybe slaughter wasn’t necessarily required. And after that comes Pascal’s village: robots who are explicitly, thoughtfully, and actively seeking out peace. So yeah, 9S’s behavior in the desert was heavy-handed, but the game at least made it past that fairly quickly.

 

It’s more subtle than that, though: the behavior of the robots in the amusement park and in the village is really quite different. The robots in the amusement park are peaceful (or, when not, are explicitly coded as broken by the other robots), but they’re peaceful in a creepy way. Most of the amusement park robots are repeating the same moves over and over, they’re acting, well, robotic. In contrast, the robots in Pascal’s village feel like people; or at least Pascal does, and the others at least feel like they’re moving in that direction.

So if the robots in the desert are as far away from the amusement park robots as the amusement park robots are from the village robots, then maybe 9S is actually correct: maybe they really are just saying words without having any idea what those words mean? Also, 9S has seen a lot of robots, so maybe he does have reason to believe that meaningless recitation is the norm for them.

 

Looping back one more time, though: in the real world, wars do indeed happen. And the people on the other side are people like you; people who believe that they’re fighting for the right thing, people who are scared. Or people who aren’t even fighting, but the war has come to where they live, and there they are.

Yet fight them our soldiers do. I can say that 9S’s behavior is ridiculous, but I live in a world (in a year!) where social structures (and not just in the military, not just in the police) evolve to emphasize seeing people not just as others, as different, but as Other.

So 9S’s behavior isn’t the unrealistic behavior: what’s unrealistic is me thinking that I can avoid that, that I can just choose not to take part. (That has certainly been a lesson of 2017: for example, how much the United States has been built on white supremacy, how present that legacy is, and how you can’t simply choose to step away from it even if you benefit from it and would rather not.)

Which, in turn, raises the question of how to model this concept, this mindset? Except that, of course, the answer is: video games model a mindset of othering, a mindset of the soldier following orders, all the time; they do so as their unquestioned default behavior. They tell you you’re the good guy, they tell you who the bad guys are, they present information to justify that labeling while not actually letting you interrogate it, and then they put a weapon in your hand and tell you to use it. If you do, you’re a hero; if you don’t, you’re worse than not being a hero: you simply can’t play at all. So what made me uncomfortable about Nier wasn’t that it wasn’t letting me behave morally; it’s that it was making that question explicit.

 

I appreciate how Nier’s looping encourages me to rethink as well.

 

The robots show gradations of thought and consciousness, reflections of what it means to be human; the androids do as well. The extent to which they’re following orders versus responding to the details of the situation, for example: we see the Resistance members on Earth versus the soldiers based in the station, and even the various androids who have struck out on their own.

But also their range of responses: 2B in particular is not great at responding to social cues at the start of the game. Which, at first, I interpreted as her not having the same range of emotional responses (which then translates into a narrative of her learning to feel); but I’m now much less sure that that’s the best way to interpret her behavior. I think it’s a mixture of her being a soldier and her not reacting so fluently to certain interaction modes and cues.

 

You can imagine a lot of games made out of the aforementioned elements: in particular, you can imagine a game that would use this world to focus on different aspects of what it means to be human, and that either takes that in an anthropological direction or in an optimistic direction.

Nier didn’t make either of those choices, and I kind of wish it had. Instead, it gets more and more nihilistic as you progress from endings B through D; ending E holds out more hope, but ultimately it’s a very grim game. Admittedly, it’s been a very grim year, so maybe it’s the game that we deserve in 2017, but I don’t feel like the thoroughgoing bleakness of Nier’s outlook was teaching me much.

Maybe that’s me, though. There are, after all, bleak structural forces in our world: so perhaps Nier is illuminating them in a useful way? I could see that in the individual tragedies in the game, but I had a harder time seeing that in its structural bleakness.

best practices

December 14th, 2017

A quote from Anil Dash’s article about Fog Creek’s new project management tool, Manuscript:

Be opinionated: Manuscript has a strong point of view about what makes for good software, building in best practices like assigning each case to a single person. That means you spend less time in meetings where people are pointing fingers at each other.

Here is my opinion: if you want to talk about opinionated software (which, by the way, is a concept that I do agree with, even when I disagree with the specific opinions), then own it. Don’t start covering your ass in group legitimacy (before that sentence has even ended!) by saying that your opinions are actually a “best practice”.

Dash does, at least, try to explain why people would feel that individual assignment is a best practice, an opinion worth having. But geeze, that explanation: we’ll avoid finger pointing by making sure that each case has the name of the person you should point a finger at? How does that work exactly?

Don’t get me wrong: he’s got a coherent point of view. As far as I can tell, he believes in primarily optimizing for individual developer productivity. And yeah, if I preferred to work that way, I’d want to assign tasks to individuals, too. But say that, don’t talk in the abstract about best practices.

 

Though, looking at Manuscript’s feature list, I see no evidence at all that it’s actually opinionated software, so probably the “best practice” empty phrasing is closer to the truth. Take their section on Scrum:

Construct and plan iterations from existing cases. Create milestones for your backlog, sprints, doing and done — or any other structure your team or project needs. Customize each project’s structure to match your ideal Agile or Scrum workflow.

Followed soon by this wishy-washiness:

Estimates can be added to cases from the planning screen, and if you prefer story points, then you can switch between hours and story points too.

And there’s a Kanban section too. So: use Scrum, use Kanban, use your own homegrown process, use hour estimates, use point estimates. Do anything you want, we’ll be happy to support it! (At least as long as what you want to do doesn’t include pair programming, I guess.)

Dash’s article quotes a tweet with the following lament about Jira:

Why are you customizable to a fault, except in the ways I want you to be?

But isn’t that exactly what the above bits from the Manuscript feature list are promising as well?

 

Ah well; not a big surprise to be disappointed by marketing for enterprise task management software…

layton’s mystery journey and whackamon

December 11th, 2017

Layton’s Mystery Journey was actively disappointing. Following Layton’s daughter was a nice enough change of pace, I suppose, and the series is a good fit for the iPad; but the game didn’t have soul, and the puzzles weren’t enough for me.

For example, you start off by meeting a dog who tries to hire you to figure out why he can talk: but then another, more urgent case comes along, you do that instead, and the question of how the dog can speak never comes up again. And you have some boy who follows you around making puppy-dog eyes: I guess it’s still an improvement on the gender politics of the earlier games in the series, but only barely. The main other person whom you regularly interact with has their personality filled out in very broad, stereotypical strokes; all the other characters have one distinguishing feature and zero depth.

The puzzles are fine, but nothing at all new compared to other games in the series. The visual art isn’t awful, but it isn’t good: the dog always seems like he’s floating off the ground, and characters wave their arms or recoil in shock in ham-fisted ways. And breaking the game up into lots of different cases with only a vague hint at an overall story isn’t particularly effective plot-wise, and makes it harder for you to get to really like the city. So I think I’m done with the series unless something changes.

 

The other game I played recently on my iPad is WhackaMon. Which I started on the laptop, but it involves fast clicking on different areas of the screen, and doing that with touch is a lot easier. This game is the only reason why I’ve ever logged into Facebook Messenger, and I’m certainly not going to continue to do so now that I’m done with the game, but if it’s the only way to play an Eyezmaze game, then I’ll put up with that.

Unfortunately, WhackaMon isn’t one of my favorite Eyezmaze games: too much clicking, not enough thinking, and not quite charming enough. Though there is some thinking involved in the clicking, and there is some charm in the standard Eyezmaze building up of a more and more settled area; it’s too bad that there’s not more thought involved in the building, though. And Facebook Messenger actively gets in your way: I accept some amount of being asked to spam my friends, but being asked to do so immediately after building a new structure is not only probably too often, it actively gets in the way of your enjoying that new structure.

Having said that, I’m glad I played it: I spent a pleasant enough three or so hours tapping on stuff and figuring out systems. And I’m certainly glad that Eyezmaze is continuing to make new games.

post-systematic flexibility

December 10th, 2017

David Chapman has, among other things, been writing about modes of approaching meaning, in a way that’s informed by Robert Kegan’s developmental psychology. He’s written a summary of this recently on one of his blogs, and he discusses it frequently on Meaningness (see e.g. this post and posts it links to), but I thought he had a particularly good discussion of it recently on the Imperfect Buddha podcast. (You can skip to about 22 minutes in if you want to skip over the discussion of the state of Buddhism in the west.)

He focuses on stages 3, 4, and 5 of Kegan’s model. Stage 3 is characterized by a focus on communal values, individual relationships, emotions, and experiences. Stage 4 is systematic: it accommodates complexity in a rigid way, by mapping it to a model. Stage 5 is meta-systematic: if you’re in stage 5, you’re skilled at dealing with the interface between systems and reality, and you can use that skill to handle vagueness while embracing precision and complexity.

 

I’m trying to come to grips with whether or not I think this is a helpful model. (And, if so, in what contexts it’s helpful, or how that help manifests.) For now, I’m having a hard time thinking about it in terms of an individual’s development as a whole, but it seems to me like a plausible match to how somebody thinks about specific aspects of their life?

For example, I’m a software developer who has spent some amount of time thinking about and experimenting with agile software development. So it feels to me like I can tell the difference between stage 3 and stage 4 use of agile: stage 3 agile is saying / believing that you do agile because that’s what cultural forces present as normal behavior, while, if you’re asked what you do, you have some idea that agile = scrum and it means that you have standup meetings once a day, call each two-week block on the calendar a sprint, and store a backlog in Jira. (And a stage 3 agilist will do all of that while happily continuing to have separate requirements, design, implementation, test, and maintenance phases, and while constantly generating estimates and plans that are far more ambitious than what they actually get done in a sprint.)

Whereas a stage 4 practitioner will say that the phrase “we do agile” doesn’t make sense, because agile isn’t a methodology, it’s too vague for that. But they’ll have a precise idea of what it means to follow, say, Scrum or XP, and they’ll be skilled in following that precise model and helping teams follow that model.

Which, in that light, means that I’m probably not a fully stage 4 practitioner, because I’ve never been on a team that followed Scrum or XP as a whole, or that had a well-considered homegrown system that it actually stuck to. (Which doesn’t mean that I’m in stage 3, either, because I’m generally quite aware when teams aren’t following methodologies, either external ones or ones that they’ve written down for themselves.) But, if you go down from full methodologies to smaller practices, like test-driven development or refactoring, I can make a better case that I’m a pretty solid stage 4 practitioner.

And if we move outside of software development, I can tell a similar story: e.g. I’m quite sure that my Tai Chi teacher has an excellent systematic understanding of Tai Chi (and hence I also believe that it makes sense to talk about a systematic understanding of Tai Chi), I’m equally sure that I don’t, but I also feel like I’m learning relatively concrete facts and improving in ways that I can point to? So I’m consciously trying to start the journey towards a stage 4 understanding of Tai Chi, I just haven’t gotten very far.

 

Stage 5 is more of a mystery to me. One of the points of stage 5 is that systems are only models, and hence are always flawed. But the issue there is that there are multiple ways that you can get to a rejection of systems: you can take a stage 3 approach of not really thinking about them seriously; you can take a nihilistic approach (Chapman calls this “stage 4.5” and is pretty worried about it) of correctly understanding that systems are always imperfect models and using that as a reason to reject them; or you can take a stage 5 approach of appreciating the nuances of the boundaries between systems and reality. Which should mean that you can use the power of systems in contexts where they apply well, you can avoid them in contexts where they don’t apply well (or, potentially, switch to a different system that applies better there), and you can tell when you’re near the boundary, using the system to inform your actions but not to rule them, and potentially using your observations to update the system as well.

At least I think that’s what stage 5 means: but it also feels to me like my understanding of all this stuff is probably basically at a stage 3 level? Chapman sounds sensible when he talks about this, it feels to me like he’s getting significant value out of it and believes that it’s tied pretty well to other forms of thought that he finds valuable, but I can’t say that I’ve seriously tried to put the framework to use. So, ultimately, I’m mostly just parroting / cargo culting what he says, which (I think) is stage 3 behavior?

 

One feeling that I’ve had over the last few years: more and more, when making programming decisions (broad design decisions, narrow decisions about what to type now, decisions about how to segment my work while trying to go from my current state towards a desired future state), my mind is starting to attach weight to those decisions. And here, by “weight”, I mean that my mind literally associates certain decisions with something that feels heavier or more solid, whereas other decisions feel like more of a haze. Hmm, I guess weight alone isn’t actually all that’s going on in my internal perceptual apparatus: e.g. there are some that feel like pebbles, solid and reliable but also like small steps, some that feel like mist, where I don’t perceive any weight but I also don’t understand what’s going on, and some that feel like they’re crumbly terrain, actively and concretely dangerous to proceed along. So maybe it’s more of a combination of weight and texture?

If I wanted to try to tie that into this Chapman / Kegan model, maybe that’s saying something about the boundary between stage 4 and 5? The areas where I have these feelings are situations where I don’t just know how to follow a given system; I have a pretty good idea of what the specific consequences are of doing so or not doing so (or doing so in different ways). So that means that I’m getting a better appreciation of reality pushing back (the “interface between systems and reality” that I mentioned above): when a certain question is answered well within a given system, when I’m pretty sure a given system is accurately warning about something, when I’m on the edge of a system, when I’m pretty sure I should work within a different system, and when I just don’t know?

 

Hard for me to say: like I said, I don’t understand the theory very well. And, for all I know, I’d get as much from linking my understanding to any other random list, e.g. The Five Levels of Taijiquan. (Different numbers in that book’s levels, though!) And, don’t get me wrong, there are certainly areas where I’m firmly in stage 3: e.g. when reading Twitter I’m just as likely to react to events in a way that ultimately comes down to group membership as anybody else is. But it is nice to start to have a deeper sense of what substantial expertise might feel like…

paperclips

December 3rd, 2017

I guess I played Paperclips enough that I should write about it here? Or rather I spent enough time watching it in my browser, or I spent enough time being distracted by it, or something.

Paperclips isn’t the first cookie clicker I’ve played, but it’s the one I’ve played the most; I think it’s the only one I’ve made it to the end state of, and certainly the only one I’ve replayed. And the narrative, as slight as it was, was actually a rather good fit to the mechanics.

Mechanics-wise: it’s all about bare numbers, and the game helps you think about them by exposing derived information (rates, in particular). And there’s enough complexity that it’s not obvious what the optimal strategy is at any given point: you basically know what to do, but you have a couple of directions you can go when optimizing, and also you don’t know when the next deflation event (or, more rarely, cataclysmic change, e.g. a new currency introduction) is that will invalidate all of your current calculations. And, if I’d want to think about it more, there would have been more that I could have dug into: e.g. part way through the game you start picking a competitor in a robot prisoner’s dilemma tournament, and I haven’t figured out (either theoretically or empirically) which strategy is the best.
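If I ever do want to dig into that tournament empirically, the skeleton of the experiment is simple enough. Here’s a minimal sketch of a round-robin iterated prisoner’s dilemma, using a few textbook strategies rather than the game’s actual roster (which I don’t remember the details of):

```python
import random

# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my_score.
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# A strategy maps the opponent's move history to a move.
def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else "C"

def always_defect(opp_history):
    return "D"

def random_move(opp_history):
    return random.choice("CD")

def play_match(strat_a, strat_b, rounds=100):
    """Play an iterated match; return the total score for each side."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)  # each strategy sees the opponent's history
        move_b = strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat,
              "always_defect": always_defect,
              "random": random_move}

# Round-robin: every strategy plays every other strategy once.
totals = {name: 0 for name in strategies}
for name_a in strategies:
    for name_b in strategies:
        if name_a < name_b:
            a, b = play_match(strategies[name_a], strategies[name_b])
            totals[name_a] += a
            totals[name_b] += b

print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

(Whether any of that matches how Paperclips actually scores its tournament is exactly the part I haven’t figured out.)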

 

That’s the game play; but, ultimately, much of the time you’re just sitting and waiting for stuff to happen. (Maybe buying more production capacity every once in a while, but not in a way that makes a real difference.) And, most of the time, the game is even happy to play itself on autopilot: continuing to make more of the relevant currencies without needing explicit action.

So you could imagine having it run in a background browser window while you, say, write a blog post or something, checking in once every half an hour. I found that very difficult to do, however: there’s always something just around the corner, some slight reward for spending three minutes watching numbers go up and then clicking as soon as possible.

 

There is, fortunately, an end state to the game. Which gives you two options; one is to start over, with slight tweaks to the numbers; the other is to contemplate the void. I picked the first option the first three times I finished the game, and I have no complaints about having done so; after that, though, I picked the other option, and I’m glad it existed.

genre insecurities

November 28th, 2017

If you were to ask me for, say, a list of my top five favorite movies, I don’t know exactly what the full list would look like, but most of the time both Spirited Away and Pom Poko would be on there. Which, it turns out, I have somewhat mixed feelings about: even admitting that I don’t have a particularly thorough movie background, is a pair of fantasy anime movies that could reasonably also be labeled as children’s movies a place where I (a 46-year-old man) want to put my stake in the ground? Shouldn’t I prefer movies that are more thoroughly grounded in a range of life experiences?

The above, of course, isn’t any sort of case against holding those movies in very high esteem: as phrased there, it’s completely unsupported genre snobbishness. And I wouldn’t put up with that sort of snobbishness in any other art form: I grew up in a context that, say, valued literary fiction over science fiction or romance, that valued classical music over pop music, that valued a whole load of things over video games (to the extent that video games even existed while I was growing up), and I’m pretty confident in saying that those blanket valuations are ridiculous, that literary fiction and classical music are just different genres. I can still see the effects of that context in my psyche, but I can also consciously set it aside. (And, don’t get me wrong, it’s not like anybody told me not to read science fiction while I was growing up or to not listen to pop music when I went through that phase in high school. And, also, don’t get me wrong: if I were to make a similar list for music, classical music probably would be extremely well represented.)

 

Setting both anti- and pro-genre snobbishness aside, though: you can learn from any genre, so I’m sure I’ve got gaps in my taste that arise from my genre choices. I did actually read a fair amount of literary fiction in grad school, and it was productively different from what I’d been in the habit of reading. And there are also stereotypes that I see in some of my habitual genres that I’m actively unimpressed with: e.g. the “anointed savior of the world” trope I see in so many games and also in comics (both American and Japanese, both in print and animated forms).

Worry about that latter stereotype is probably what’s really going on in my psyche here: I do enjoy wish fulfillment, but I think it’s healthier for me personally if I don’t spend too much time diving into it. Instead, I’d prefer to have a healthy balance of art that focuses on the small scale, on the details of what exists, and on actual people.

Having said that, too much of a focus on small scale personal concerns can be associated with its own negative stereotypes that I’m equally dubious of: e.g. literary fiction about middle-aged men unhappy with their marriages and instead finding a match with women in their twenties. I don’t have any more respect for that sort of wish fulfillment than I do for RPG “savior of the world” wish fulfillment; but if we can step away from that to something that feels more like real interactions between real people (and, yes, with real problems), then that’s important.

But at any rate you can of course focus on details and on people in any genre. Returning to science fiction, Trouble on Triton puts you in the head of somebody so you can see how he interacts with other people, what he wants from those interactions, the pain that he gets from that, the pain that others get from that, and the self- and outwardly-inflicted nature of the problems surrounding him; and the novel’s nature as science fiction lets it generalize those experiences in a way that gains clarity from the distance of the setting.

 

I said above that I’d prefer to have a healthy balance of art that focuses on the small scale, on details, and on actual people; that’s true, but only half true. My relationships with my wife and daughter are both extremely important to me; and if art can shed light on that, that’s great. And work involves people too, of course; and I do care about my friends.

But, granting all of that: I’m not a people person. Also, a lot of the classic literary themes actually aren’t particularly reflective of my life: happy, stable marriages and careers aren’t in general the subject matter of great novels. (Not that our family doesn’t go through rough patches – this last year in particular has been quite a bit rougher for us than I’d like – but still.)

Instead, a lot of what interests me is trying to figure out systems: figuring out what code and computers are telling me, solving puzzles of one form or another in my spare time. Which doesn’t mean that I don’t like small scales and details, because as I get older I find more and more that listening to details is an excellent path into broader concepts. But still: figuring stuff out gets me going, and that’s going to inform my artistic choices. Not necessarily in a direct way (I don’t particularly want to read books featuring programmers), but in a metaphorical way: I want to read books where reading them feels like uncovering and making sense of a conceptual space that’s new to me.

 

I led off by bringing up Spirited Away and Pom Poko; this focus on systems and details is easier to see in Pom Poko, because it’s a message movie, in multiple ways. It’s about growth and the negative effects growth has on the environment, on animal life in the environment in particular. It’s about the process of change, focusing more on the loss that change entails but still allowing you to see the benefits. So there are conceptual spaces to explore here, and to test your understanding of via exploration of tradeoffs.

And Pom Poko certainly focuses on the details, and on people. (I mean, mostly on tanukis, but still.) How individuals react to change in different ways; how life continues in its patterns despite change. It does this without grandiosity and without catastrophizing at a broad level: ultimately, the tanukis lose their battle, but most of them survive and adapt nonetheless. Though many of them don’t survive: the movie doesn’t catastrophize, but it doesn’t pull its punches.

Spirited Away isn’t the same sort of message movie: it’s about a very capable girl who turns out to be friends with a river god. So, to some extent, it’s a bit by the numbers; but I do appreciate how its plot asks fundamental questions about what the concept of family means. Family as people you’re related to by birth, but also family as people who choose to care about each other.

 

Looking at the two together, though, clearly movies that draw on Japanese mythology press my buttons, at least if they do so with a focus on spirits and nature. Which I think is another example of what I was talking about above: enjoying the process of exploring a conceptual space that’s relatively new to me, just in a less abstract way than the intellectual themes I talked about earlier.

Of course, movies aren’t just vehicles for plot and themes: they’re something you see and hear. And both of these movies have bits that are visual spectacles: the entire bath house in Spirited Away has, as its job, to put on a show, and the parade in Pom Poko is really something. And, aurally: Joe Hisaishi is one of my favorite film composers, and Itsumo Nando Demo from Spirited Away is one of my favorite pieces of his.

 

So yeah, they’re good movies. I probably should branch out more (though, don’t get me wrong, I don’t spend anything like a majority of my movie time watching anime), but there’s something there. And there’s certainly nothing wrong with enjoying exploring lovingly crafted spaces…

her story

November 6th, 2017

(Spoilers for Her Story follow; if for some reason you just want to know my opinion and are thinking of buying it, I’m very glad I played it, so if you’re on the fence, give it a try.)

I am very glad to have played Her Story shortly after playing Tacoma: both games tell stories that feel a lot more familiar outside of games than inside of games, both use interactive techniques to good effect when telling their respective stories, but the interactive techniques and the subsequent effect on how I experience the stories are significantly different.

Tacoma feels like a copiously annotated story. That story unfolds over the course of three days, which you learn about by seeing six key points during those three days; and, during those points, you can look at the story from a few different perspectives, and are presented with some specific pieces of textual information informing each of those points and perspectives. And there’s subsidiary back story available: extra scenes you can watch about each character, and physical spaces for the ship and the characters that you can inspect, some with further textual information.

Her Story also makes it clear that there’s a linear story going on, but instead of progressing through that story linearly, the game almost immediately allows you to navigate on your own. I’m not even sure what a good metaphor is for the experience: a crystal, with views from different facets? A palimpsest, reconstructing a text? Or maybe the best metaphor isn’t actually a metaphor at all, just a description of what’s going on: you’re investigating a murder, trying to piece together what happened from the clues that you come across (that you notice!) and from the unreliable subjects you’re interviewing.

 

In Tacoma, you could say that the game mechanics focused on perspective, reifying that concept in a changeable viewpoint on a three-dimensional (or, really, four-dimensional) space. In Her Story, in contrast, the travel occurs along a one-dimensional space; and that, in turn, means that the navigation alone is less interesting from a game point of view. So the game has you navigate via conceptual controls instead of thumbsticks, reifying those concepts in the form of search terms that allow you to dip into portions of the timeline in an unpredictable fashion.

Or at least it seems unpredictable from the outside; one of Her Story’s most impressive accomplishments is how it uses what seems like an unpredictable method for controlling how you navigate the timeline and nonetheless ends up with a story development that’s satisfying in a surprisingly traditional way. Because, when reading a novel (a mystery novel, perhaps), I start out getting a picture of the basics of the setting and the problem that it’s presenting; then I start understanding the possible solution space, and thinking about how it might unfold, and what surprises might be in store; then I come across some twists that lead to new levels of depth and predictions; and eventually it all comes together. And, somehow, I went through that same experience while playing Her Story, despite the player’s behavior being aleatory from the designer’s point of view.

 

(Here’s where the spoilers begin in earnest, for people who want to stay away.)

 

Concretely: I started out just trying to get a feel for the situation, assuming that I was trying to piece together the events that led to the murder. I searched words that seemed important in the initial interview segments, leaning a bit towards proper nouns.

I’m not sure exactly when I realized that there were two different women appearing in the interviews: I must have heard Eve speak a few times before I realized that she existed. I think it might have been when I heard the name of the midwife, searched on that name, and then heard the whole story about their birth? But at any rate I transitioned quite gracefully into a second act of the game, which mostly centered around learning how the two sisters grew up, but also (from a gameplay point of view) had me asking questions like which sister was speaking during which days.

At some point I happened across a clip where there was a guitar sitting on the table, with no explanation whatsoever. So then I had to search for the term “guitar”, which led me to the first part of the song, and then I quickly found the second part of the song. If I’m remembering correctly, this was the transition into the third act of the game for me, trying to understand the sisters’ points of tension with each other better, and also trying to figure out what happened with Hannah’s parents.

And then I learned about what had happened between Eve and Simon; and eventually about Simon’s death; by then I’d seen the vast majority of the clips, so after a bit more searching of random words I’d jotted down, I declared victory.

 

In other words: I experienced a very satisfying unfolding of the story, broken down into four coherent acts, with significant parts of the story remaining hidden for quite some time, only appearing once I had the context to appreciate them. And yet all of this came out of a game with a random access interface, driven by search terms!

I still don’t know how the game did that, and how much I got lucky. I imagine quite a lot of it isn’t luck: presumably there are key words that don’t occur in the initial clips? I’d certainly be interested in seeing a graph whose vertices are the clips and whose edges are words shared between clips; does that turn up clusters that are dramatically meaningful?
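
(A minimal sketch of how I’d build that graph, assuming you had the clip transcripts to work from; the clips below are invented stand-ins rather than actual game data, and networkx’s modularity-based clustering is just one plausible way to look for dramatically meaningful groupings.)

    import itertools
    from networkx import Graph
    from networkx.algorithms.community import greedy_modularity_communities

    # Invented stand-ins for the real transcripts: (clip id, text).
    clips = [
        (1, "i met him at the fair he played guitar"),
        (2, "the guitar was up in the attic by the mirror"),
        (3, "the midwife kept our birth a secret"),
        (4, "he was at the fair with his friends that day"),
    ]

    stopwords = {"i", "he", "his", "our", "a", "the", "at", "was", "in", "by", "up", "with", "that"}

    g = Graph()
    for (id_a, text_a), (id_b, text_b) in itertools.combinations(clips, 2):
        shared = (set(text_a.split()) & set(text_b.split())) - stopwords
        if shared:
            # One edge per pair of clips, labeled with the words they share.
            g.add_edge(id_a, id_b, words=shared)

    # Do the clips fall into clusters that track the story's acts or speakers?
    print(greedy_modularity_communities(g))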

But of course it’s not just a graph theory puzzle, for a few reasons. If you search a popular term, you don’t see all the clips; so we’d have to reflect that in the graph. (And of course restricting the clips you see in that situation by time order means that, all things being equal, you get more Hannah and less Eve.) And people don’t search words at random: I’m sure I’m not the only person who gravitated towards names and other proper nouns at the start, and in general people are going to search for words that seem meaningful. Finally, people aren’t restricted to searching for terms they’d heard: e.g. I searched for “guitar” not because I’d heard the word spoken but because I saw one.

So, somehow the game manages to balance all those considerations and still help the plot unfold. And I think it does that without cheating; it does say something about one volume being corrupted, but it said that at the start of the game and still says that at the end of the game, so I don’t think the game has been hiding anything from me, or at any rate anything that it hid has stayed hidden throughout?

 

I say “I” above when talking about my experience with the game, but I wasn’t playing it alone: I was at the keyboard but I was displaying it on the TV. Liesl watched a fair amount, and Miranda seemed basically just as involved as I was: at a lot of key moments I was following Miranda’s suggestions for what to type.

The game worked very well in that mode: we could talk about what we thought was going on, Liesl and Miranda both noticed things that I didn’t (e.g. I think Liesl was the first person to notice the tattoo), and the words I searched were mostly words that had been spoken whereas the words Miranda suggested were mostly thematically appropriate ones that may or may not have been spoken recently. So, between the three of us, we jumped around more and saw more stuff; yay for games that support that sort of shared experience.

 

Her Story is one of the most interesting games I’ve played this year. I won’t say that I want to play a whole bunch of games using this mechanic, but maybe actually I do? Certainly it’s a reminder to not stay stuck in a rut; and it feels like there’s some sort of deep lesson in the game about how to guide players’ experiences without prescribing them.

twitter 2017

October 26th, 2017

A couple of weeks ago, a #WomenBoycottTwitter hashtag showed up on my timeline. It appeared on a Thursday, encouraging people to stay off of Twitter the next day; I haven’t been feeling great about my Twitter usage all year, so I figured I’d use that as an excuse to take a day off and see what it felt like. And I did indeed succeed in staying off of Twitter that day: my reflexes had me still launching Tweetbot every once in a while, but I always exited immediately. So y’all didn’t get to hear the play-by-play of me somehow managing to lose my right AirPod in the grass in a tiny nearby park; and when I checked on the past day of Twitter the next morning, my feed was significantly blacker than normal, with some pretty reasonable critiques of #WomenBoycottTwitter. (Though those critiques left me in the clear: everybody agreed that it would be fine to have fewer white guys on Twitter.)

The boycott was really just an excuse for me rather than a well-thought-out moral conviction: like I said, I haven’t been feeling great about my Twitter usage all year, because it’s been eating into my life more than I’m comfortable with. Mostly, of course, it’s because of the shit show that our current president is (and that our current congress is): it is not unusual to have a week go by where, every single day, even multiple times a day, there’s a breaking news story that would be the biggest political news story for a month in normal times. So I have this horrible combination of needing to feel caught up with the extraordinarily fast pace of news while knowing that whatever I learn about will make me feel worse.

The news cycle is the main reason why Twitter feels different in 2017 than in previous years, but it also feels like there’s been a volume increase. Part of that is related to the news cycle: there are various issues that are important this year in ways that they weren’t important to me in years past, so I’m following people who are experts in, say, health care or international relations. But also the Twitter essay has exploded this year, which means that there are interesting people who are posting a lot. Right now I’m only following 244 people on Twitter, which is the lowest my following count has been in years, but it sure doesn’t feel like my timeline is bare.

 

Also (and this one isn’t new to 2017): Twitter is a pretty nasty place. I mean, it’s not nasty for me personally, but it’s a vehicle for serious harassment, in ways that very much directly affect people’s lives, and a lot of that happens in directions that reinforce existing inequalities instead of being random. So: is Twitter a space where I want to spend my time?

Partly, Twitter is just reinforcing existing dominance patterns: I don’t have an option to spend my time in a world where, say, white supremacy or patriarchy isn’t a dominant force. But social media platforms make their own choices about how they want to react to this, what actions they want to take in response; Twitter’s choices have (tautologically) led to it being the sort of space it is. I’m sure this is a hard problem to solve without throwing away the (real and significant!) benefits that Twitter brings, but still, I’m not sure that it’s a place where I want to spend my time.

 

So: how to respond to all of this? I can wish that we had a different president, but wishing won’t get me very far. I can wish that people wrote blog posts instead of Twitter essays; again, wishing won’t get me very far. And I can wish that Twitter were less of a harassment shit show; not much I can do about that, either.

Ultimately, I have to figure out how and where I want to spend my time; and, once I’ve made that decision, figure out what changes in habits I need to establish to lead to my desired outcomes.

 

The easy answer is to say that I should give up on Twitter entirely. And that’s definitely in the potential solution space, but it’s not obvious to me that it’s the right choice. I really do have friends that I interact with via Twitter; I’m not entirely sure what I would lose by stopping those interactions, but I’m pretty sure I would lose something. (And, incidentally: there is very little chance that I will switch over to Facebook as a primary posting vehicle, that one isn’t in the solution space.)

And I really do learn things from people I follow on Twitter, too: over and over, Twitter has helped me learn about programming, about politics, about ways of thinking that are important to me. Having said that, if I switched time from, say, reading Twitter to reading books, I would also be learning something, so Twitter is potentially a loss from a learning point of view as well as a gain, but you can certainly make a case that some amount of Twitter usage is a net positive for my learning.

I don’t, however, see any real benefit in the need to keep up to date with politics on an hour-by-hour level. I don’t want to be disconnected from the horrors that are going on in our government, but learning about those horrors at 3pm versus 6pm versus a day later probably brings no concrete benefit. (And I can imagine stretching out the time scale further: unless I’m going to join a protest tomorrow or call my Senator or something, being a week behind seems okay? Though there is some benefit in being aware of the magnitude of those horrors, and catching up with them daily helps with that.)

 

When I analyze the situation that way, it seems pretty clear that, at the very least, I’m checking in on Twitter too frequently. And it’s still possible that leaving Twitter entirely would be best for me; that, however, is less clear. So I should start an experiment with significantly less frequent Twitter usage: see if I can validate the hypothesis that that will improve my life, and see if I can get more information about whether quitting Twitter entirely would be a net positive or a net negative.

Of course, it’s easy to say that I’ll check in on Twitter less, but it’s harder to actually do it. (That’s actually one advantage to the idea that I should leave Twitter: deactivating my account would be an easy way to enforce that.) I think probably the best step for me is to have a goal to not check Twitter on my phone: that way, I won’t check on it while at work, while commuting, while walking Widget, which carves out large amounts of Twitter-free spaces. (I already don’t check it on my computer: so, the goal would be to only check it on my iPad.)

The downside of that, of course, is that, when I’m home, I have much better things to do than to check Twitter! So it would probably be better for me to, say, only check Twitter while commuting instead of only checking it at home. But that would require willpower to enforce, which is hard; whereas not checking it on my phone just requires deleting Tweetbot. (I can even leave the main Twitter client installed so that I can still post: I won’t be tempted to use it to actually read Twitter, because I’m a “complete timeline” sort of person.) And, hopefully, if I’m only checking Twitter a few times a day, it still won’t use up too much of my time, because I can read through hundreds of tweets fairly quickly (and I can throw stuff off to Instapaper if I see potential rabbit holes to go down): I think the issue is more the interruptions rather than the total quantity of time?

I guess the other option would be to leave Tweetbot installed and just move it off of my dock, down to some hidden folder. That would probably work too, because it would be enough to break the habit of checking it frequently? But I think I’ll start by deleting and seeing what the effects are.

 

So: Tweetbot is deleted from my phone, and Music has taken its place in my dock. (Messages, Safari, and Castro v. 1 are the other apps there, if you’re curious.) Which, symbolically, feels right: really, wouldn’t my life be better if I were spending more time listening to music and less time reading Twitter?

tacoma

October 15th, 2017

What impressed me most about Tacoma was how normal it felt, and how surprising in turn that normalcy was to me. The game is full of AR recordings showing you silhouettes of the crew members of the station that you’re investigating; and a couple of those silhouettes were noticeably pear-shaped. Which, when I first saw them, surprised me; but then that immediately raised the question: why am I surprised? None of the silhouettes were particularly abnormal compared to people that I’d encounter in day-to-day life; and actually those silhouettes are probably more representative of my day-to-day life than the body types that I normally see in video games!

(Of course, the answer is obvious: video games generally aren’t interested in presenting day-to-day life. They instead want to present a stereotypically idealized life, and for female characters in particular, that puts pretty drastic limitations on what body types are acceptable.)

And once I got past the surface: Tacoma paints a picture of a life that’s surprisingly normal on a day-to-day level, too. The crew isn’t a band of intrepid heroes on a mission to save the galaxy: they’re a bunch of workers (contractors, even!) who are trying to get by. Making a living doing a job that they seem to basically enjoy, but where they’re also clearly not the ones in power; people who have a lot of other stuff going on in their lives beyond their jobs, some good, some bad, all mundanely personal.

Again, totally normal in day-to-day life; and actually also normal in other artistic media. If this plot were in a book, I wouldn’t blink an eye; games, though, generally stay far away from that sort of mundane slice-of-life approach.

 

Tacoma does have a bit of a bite in how it depicts that slice of life, though. The game takes place in the future, which means that it needs to extrapolate; and the extrapolation is clearly interested in the struggle between corporations and workers. The workers are contractors, but with long-term, repeatedly renewed contracts (which is already depressingly familiar in tech circles, though the IRS did start cracking down on that a few years back). And the mention of, for example, “Amazon University” suggests that the spread of corporate control across society has increased, and payment in terms of “loyalty points” shows that scrip has returned. (But hey, those loyalty points are probably more valuable than most stock options! :rimshot:)

Fortunately, one trend that apparently has reversed compared to the present-day United States is unionization: the union is there to at least try to fight for the workers. Again, something unusual: this is admittedly largely a sign of the sector that I work in (union jobs definitely still exist in the country), but I don’t hear unions talked about much at all day-to-day, and the percentage of workers covered by a union has declined dramatically.

 

So: Tacoma is telling a story that’s unusual for the medium in terms of how normal it is, and that focuses on labor issues in a way that’s unusual both for the medium and for the trend of the times. (Or at least for the trend of the last two or three decades; in the last couple of years, discussion of labor issues has actually gotten quite a bit more frequent.)

And it’s doing this as a video game, within the walking simulator genre. I’m not an expert in that genre by any means, but I like what Tacoma is doing with it. The replayable AR scenarios give you something to focus on, and to observe from multiple angles, as you follow different characters through the same scene.

These AR recordings provide a better solution to the NPC problem than I’ve seen in other walking simulators. You don’t have to pick up context exclusively from the environment; they don’t feel like movies, because you can control the position that you’re observing them from, and can fast-forward and rewind as you please; and the screens that are available at various points give them an extra texture. Also, having six characters to follow is a nice balance between letting you feel like you’re understanding a community rather than seeing one person’s story while avoiding spreading your attention too thinly: it’s definitely the case that each person’s story matters, but they also matter as a group. Not that I think this is necessarily a better or worse approach than, say, Edith Finch, but the AR recordings are a solution that works well and that is new to me.

The core plot is linear: sections of the station unlock in phases, and in each phase, the AR recordings are, conveniently, closer to real time. I can imagine a different game using the same approach of AR recordings but presenting them as a crystal, where they were all giving different lenses on the same point in time, with later recordings giving new insights that encouraged you to re-watch earlier ones. That’s not the choice Tacoma made; I’m a little curious to see if Her Story (which, conveniently, is going to be next month’s VGHVI game) will feel different in that regard.

 

The AR recordings aren’t the only plot/information/setting delivery device, though. In each recording, you get access to personal communications; each crew member has their own workspace with a desk that gives you more information about what they’re experiencing; and each crew member also has their own personal space. And then there’s the station as a whole, with the common rooms in particular.

Which is a nice balance of information delivery devices: significantly richer (and, to me, more pleasant) than the combo of audio logs plus textual infodumps that I’m used to. Also, there are a lot of objects to pick up; that turned out to be interesting because of how mundane the vast majority of them were.

Mundanity might sound bad, but it turns out that the quantity of mundane objects meant that the game got me thoroughly out of the adventure game mindset of “you must pick up every single object”. And the objects certainly weren’t all mundane: instead, they fit into a spectrum, with juice boxes and what not at one end, progressing to objects that mattered to somebody (jewelry, art works) but didn’t necessarily have a clear, explicit link to other parts of the exposition, then to letters and such giving a more direct bit of insight into what a person was thinking, and a few objects (keys, keypad codes) that are there strictly for gameplay purposes. So the result was that you could walk through the environment feeling like an (extremely nosy!) observer instead of like somebody playing a game looking for the next trigger.

 

I’m quite glad to have played Tacoma; and I’m glad that Fullbright continues to push the genre forward, both mechanically and thematically.

(Side note: if it’s a tossup between you playing on Xbox and PC, you might want to choose the latter. I played it on the former, and while it was definitely playable, it may also have been the single laggiest/jerkiest console game that I have ever had the pleasure of experiencing.)

refining visionaries

October 5th, 2017

At Agile Open California this year, Volker Frank led a session about developing leaders within an agile organization. And it got me thinking: one way to lead is to see a possibility more clearly than anybody else, to describe that vision in a way that helps others see its beauty, and to help guide people towards a realization of that vision.

You hear about this in the context of the trailblazers leading teams in developing a revolutionary new product. But that’s not the only type of visionary worth celebrating: there’s also the power and beauty seen by those who have a vision of what’s present but latent in a situation. (Refining visionaries? Distilling visionaries?) Looking at a collection of code that’s effective in its own way but is harder to work with than you’d like, seeing an underlying structure that contributes to that code’s power, and then helping others see and bring out that structure. Working with a team that sometimes surprises everybody with what it gets done but that, more frequently, is stuttering and stumbling; helping the team figure out what’s going on during the good times that’s absent in the bad times; and helping them set up a context that reinforces the good times.

I don’t want to minimize the power of visionaries who open up new possibilities; but if you’re always looking for something new, you won’t be living with your visions long enough to do any of them well. And I suspect there are psychological consequences, too: if you’re always looking for the next thing, then that reinforces a “grass is greener” outlook. So, while being static has the risk of settling for something that’s bad for you, this latter, “refining” sort of visionary can help turn that relative lack of motion into a positive characteristic, actively finding and nourishing the good in wherever you are.

And these refining visions are ones that agile practices reinforce. Most notably in the practice of refactoring, of course: you’re explicitly not changing the behavior of your code, you’re just making it better. Testing, too: tests are a way of reifying one aspect of your vision, helping specify the behavioral aspects of where you are right now.
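
(A toy illustration of that test/refactoring relationship, with invented names throughout, not code from any real project: the test reifies one aspect of the current behavior, and the refactoring then improves the shape of the code without changing what the test specifies.)

    # A test that pins down one aspect of the existing behavior:
    def test_total_price_applies_discount():
        assert total_price([10.0, 20.0], discount=0.1) == 27.0

    # Before: effective in its own way, but harder to work with than you'd like.
    def total_price(prices, discount):
        t = 0
        for p in prices:
            t = t + p
        t = t - t * discount
        return t

    # After refactoring: same behavior, clearer structure. The test passes
    # against either version, which is exactly the point.
    def total_price(prices, discount):
        subtotal = sum(prices)
        return subtotal * (1 - discount)

    test_total_price_applies_discount()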

 

Of course, the distinction between these two kinds of visionaries is hardly cut-and-dried. At first I was going to say that mathematicians and scientists are refining visionaries, for example, because they’re finding regularities and rules in examples present in the world, but that’s far too simplistic: I can’t characterize Grothendieck’s vision of a new approach towards the foundations of geometry as just a distilling of prior examples.

And the use of techniques in service of these visions isn’t cut-and-dried, either. I mentioned testing above in support of refining vision; but agile practitioners also use tests to help move the behavior of code forward. One thing that does characterize agile methods, though, is their preference for small movements: incremental design, and delivering value continuously rather than discretely.

So, if an agile team is going to be looking for a single type of visionary, the sort of visionary that would help the most is something in between, but one that (compared to non-agile contexts) is relatively weighted towards the local, refining side. By all means, have a vision of a promised land off in the distance. But don’t spend your time living over there: spend your time figuring out what the next step is that you hope will lead in that direction. And, while making that next step, pay close attention to your center of gravity, and don’t let it shift too much on any single movement.

 

Probably better still, though, is for an agile team to have many visionaries on it, instead of a single visionary leader. Some have clearer visions of a new world, some look particularly closely at the local terrain, but all can work together to take that next step.

the last guardian

October 2nd, 2017

If you’d asked me a couple of years ago, I would have guessed that either The Last Guardian would never be finished or else it would come out as somewhere between a disappointment and a disaster. And I would have been wrong: The Last Guardian is a Team Ico game through and through, not least in what it shows me that I’ve never seen in a game before.

It certainly looks like a Team Ico game: like Ico, with the buildings that you wander through; like Shadow of the Colossus, with a lovingly rendered large creature that you clamber over. (Only one this time!) I suppose, if I had to compare The Last Guardian to one of those two, it’s more like Ico: you platform your way through buildings, you have a companion, and it doesn’t have the formal austerity of Shadow of the Colossus. But it’s really not particularly like either of them, because of the aforementioned creature, Trico.

 

And I’ve never seen anything like Trico before. Or rather, I’ve never seen anything like Trico before in a game: part of the miracle of The Last Guardian is how much interacting with Trico feels like interacting with a dog. Trico has its own motivations, its own interests: it wanders around playing, exploring. But, balancing that, the game also quickly sets up a pack dynamic, with the two of you very much focused on each other: you have to provide food for Trico right at the beginning and care for its wounds, and Trico quickly decides that you are its person.

So, despite the aforementioned exploration, Trico gets nervous when you’re out of its sight for any period of time (and I felt bad when I was away from it!), and if you’re in danger, Trico immediately and unquestioningly flies into action to protect you. This sort of dual, asymmetric responsibility is something I’m very used to with dogs: as the human, it’s your job to make decisions and do certain kinds of providing, but both of you look after each other on an emotional level, and you know that caring for you and protecting you is one of your dog’s (or your Trico’s) foremost cares.

 

The Last Guardian isn’t just the best pet togetherness simulator that I’ve ever seen in a video game, though: through those interactions, it gives a new lens on and a new solution to some areas where video games have traditionally stumbled. One of those is the puzzle box nature of interacting with NPCs: games are full of NPCs where, if you press the correct buttons, they’ll give you something, in ways which actually lead in pretty creepy directions when translated into real-life terms. (Romance options in games are particularly prone to this.) I end up being more impressed sometimes by NPCs in games that don’t give you something no matter what buttons you press, but even that frequently feels more like an acknowledgement of the problem than an honest solution.

But Trico didn’t feel that way for me. You can ask it to do things (just to come over at the beginning, more complex things later); Trico will usually do what you want, but not always. Frequently it’s wandering around, looking at stuff, doing its own thing, acting like a creature with its own internal motivations. And, when you’re in danger, Trico responds immediately (modulo one psychological barrier the game presents), without being asked, because that’s what you do when somebody you care about is being hurt: you go help them. (Similarly, after the battle was over, I’d immediately cuddle with Trico, check for wounds, and cuddle with it some more: it’s not that the game is forcing me to do that, it’s just that that’s what you do.)

So: how did the game succeed so well in avoiding the puzzle box trap? Partly, of course, because of the care that they put into Trico: your interaction is the main focus of the entire game, and when such a talented team focuses on something like that, good things will result. But I also think that replicating the pet dynamic turns out to be a surprisingly good target: pets have enough of an internal life to be able to behave like their own creatures instead of state machines responding to inputs, but they’re simpler than humans, so the seams don’t show nearly as much. Also, it helps that the game establishes a core assumption that both of you care about each other very much, so certain behavior doesn’t have to be justified.

 

The other game concern that The Last Guardian sheds light on is violence. In most games, your character is a psychopath and a mass murderer; game context justifies that behavior, but almost never seriously interrogates it. In The Last Guardian, though, the violence is largely delegated to Trico: you sometimes knock down enemies, but ultimately Trico is a much more capable combatant than you are. And the game does interrogate that violence: partly (as the game goes on) in a way that I have seen in some games, by revealing the external forces that have made Trico what it is, but, much more importantly and rarely, by showing Trico’s reaction.

When I said above that I always cuddle with Trico, I said that the game isn’t forcing me to do that: that’s true (I think, I never tested it!), but it isn’t the whole truth. Because Trico seems genuinely shaken up after each battle: its reaction doesn’t (just) seem like an adrenaline high, it seems like a genuine discomfort with what’s happened, and a discomfort not just with what has happened to it but with what it has done.

You even see this in the special action that Trico has in the beginning, where you can use its tail to shoot lightning. Even when you use this ability to destroy environmental obstacles instead of to attack enemies, Trico doesn’t feel comfortable with what has just happened: it’s a lens on the violent-behavior-as-shaped-by-external-forces scenario that I’m not at all used to seeing. (Imagine an RPG where, every single time you used a spell, you were shaken up by what you saw revealed in yourself!)

But Trico seems more bothered by its fights with the magical armored warriors than by its use of lightning. And this is a very real reaction, that you can interpret in many ways: maybe battles traumatize Trico because of the dangers to Trico, maybe battles traumatize Trico because of the dangers to you, maybe battles traumatize Trico because of what Trico sees in itself. Whatever the case, Trico needs comfort after every battle.

And, initially, the battles are mercifully rare. In the latter half of the game, though, they become more frequent; they’re never normalized, either to Trico or yourself, but you can see steps in that direction. Which, in turn, is its own lesson on the horrors of violence: you can see an important part of both of your cores getting buried, it feels like a loss, it feels like a scar, it feels like you’ll probably need therapy later.

 

The game does more: in particular, it weaves in context about what led to the current situation, how you and Trico got here and where you both came from. And then there’s the ending, which is the one aspect of the game that I question: it feels gratuitously dark to me, and I also neither like nor agree with what the ending says about your and Trico’s relationship. But, the (relatively minor) blemish of the ending aside, the game is a masterpiece: each Team Ico game shows me things I’ve never seen before, things that in retrospect were important at a fundamental level, and I’m not convinced that The Last Guardian won’t end up being the game of theirs that matters the most.

the legend of zelda: breath of the wild

September 17th, 2017

Breath of the Wild is, of course, a stunning game. And a surprising one, both in how it departs from Zelda tradition and in how I reacted to those departures. No more progressive unlocking of weapons/tools/areas, no more restricting those areas to your specific skill set / power level (at least after the first two hours of the game), no more mindlessly whacking away at mindless enemies.

Which could have been a problem for me: I like the well-crafted Zelda unlocking experience, and I don’t like scarcity mechanics in games. Also, while there are games where I like to focus on skill, most of the time I play games for other reasons, and skill development has certainly never been my focus when playing Zelda games. So even in the opening plateau, I was a little nonplussed by the cold mechanic and its associated scarcity: I didn’t have a lot of hot peppers, the mountain wasn’t small (at least when starting the game; in retrospect, it was tiny!), and the bridge that I assembled to get across a frozen river was a little fiddly, especially given the clock ticking down from my cold resistance: do I really want to feel on edge like that?

 

Obviously scarcity didn’t turn out to be a problem in practice on the plateau, and I didn’t seriously expect it to be. But the scarcity mechanics continued over the course of the first quarter of the game: you don’t have a lot of weapon slots and weapons are constantly breaking, and you don’t have a lot of hearts either.

I turned out to get along with that surprisingly well, though. Partly because Breath of the Wild is a Zelda game: I had faith in the game’s designers to give me a fair amount of room to play with, instead of creating a game that only the hardcore would love. Partly because, for the two most clearly present scarcity mechanics, it was reasonably clear that scarcity wasn’t going to lead me into a pit that I couldn’t dig out of: I wish I had more weapon slots, but enemies drop weapons as well, so I didn’t see any reason to worry that I’d actually run out of weapons: it was more an issue of not having my favorites at any given time. (Also, I’d started the plateau without any weapons at all, so I had some confidence that I could recover!) And, as to hearts: sure, you might die, but that doesn’t set you back very far, so it didn’t take me too long to accept death as just part of the game.

Digging into dying a bit more: if you’re seriously worried about dying (and there are certainly monsters that you’ll run into that you’re not equipped to handle in the early game), then going around enemies is almost always a viable strategy: the open world means that paths are available, the lack of an experience mechanic means that you don’t get punished for not fighting. Alternatively, if you lean into fighting while low on hearts, then that gives you excuses to work on combat strategies, which one of the plateau shrines teaches you. So if you want a skill-based game, it’s there.

 

The upshot was that I rather enjoyed that first quarter of the game: I had to sneak around a little more than I would have liked (e.g. during the approach to the Zora domain), but I got into a decent amount of fights, and in general I didn’t feel that I was being prevented from exploring the world. And there were periodic pauses for me to learn more about the world (with two towns in particular as punctuation), and the non-combat shrines are almost entirely level-agnostic.

It took me longer than I expected to solve the puzzles in the Zora divine beast dungeon, but I managed that without walkthroughs, and I learned something concrete in doing so: that I had to be a little more systematic in my thinking about the tools that the plateau had taught me. And, in general, I was learning about the mechanics that the game provides, and the ways that those mechanics combine: one of the really remarkable aspects of Breath of the Wild is the way the game takes a relatively small number of systems, gives a relatively small number of variables within those systems, and combines them in ways that are as orthogonal as possible. (Leading to the chemistry system that cooking uses, or the way that you can survive cold either by wearing warm clothing, eating something cooked from a warm ingredient, or having a flame sword as your currently equipped weapon.)
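
(Here’s a guess at the shape of that kind of design, with invented names and numbers; this is a sketch of the general technique, not Breath of the Wild’s actual implementation. The orthogonality comes from every system feeding effects into one shared pipeline, so the combinations fall out for free.)

    from dataclasses import dataclass, field

    @dataclass
    class Effect:
        kind: str          # e.g. "cold_resist", "heat_resist", "stealth"
        strength: int = 1

    @dataclass
    class Player:
        armor: list = field(default_factory=list)       # effects from worn gear
        food_buffs: list = field(default_factory=list)  # effects from cooked meals
        weapon: list = field(default_factory=list)      # effects from the equipped weapon

        def resistance(self, kind: str) -> int:
            # Clothing, cooking, and weapons are separate systems, but they all
            # contribute to the same effect pool, so warm clothes, spicy food,
            # and a flame sword stack without any special-case code.
            sources = self.armor + self.food_buffs + self.weapon
            return sum(e.strength for e in sources if e.kind == kind)

    link = Player(
        armor=[Effect("cold_resist")],       # a warm doublet
        food_buffs=[Effect("cold_resist")],  # spicy sauteed peppers
        weapon=[Effect("cold_resist")],      # a flame sword
    )
    print(link.resistance("cold_resist"))  # 3: three systems, one mechanism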

I did, of course, feel underpowered when fighting the final boss in that first divine beast, and that’s one fight that you can’t avoid. But I used that as an excuse to work on my combat: in particular, he had some powerful moves that were fairly well telegraphed, so they were a good excuse to work on my dodge jump plus counterattack. It took me quite a few tries to beat him, but I succeeded, and felt proud in doing so.

 

The game shifted significantly for me after completing that divine beast: getting a heart container for completing the dungeon helps a small but (at that stage in the game) noticeable amount, but much more importantly, you get an ability that causes you to resurrect when you die with slightly more than full health. There’s a cooldown on that resurrection ability of course, but the combination of those effects meant that my health bar effectively almost tripled in length. I certainly won’t say that I stopped dying, but it was much rarer; also, by this point, I had a decent understanding of the basic systems of the game.

The upshot was that the game had changed from one of scarcity to one where I could relatively confidently wander around. I certainly still wished that I had more weapon slots (heck, even at the end of the game I wished I had more weapon slots!), but I had enough that I didn’t have to worry about weapons breaking; it was just more of a nagging feeling that I wished I had slightly more weapon options, or that I could keep a torch on hand instead of hoping I’d find one when I needed one.

There were still environmental issues that I was more affected by than I’d like (e.g. I didn’t have good armor to protect against cold), but by that point I felt like I understood the systems: I could use food to deal with issues like that in the short term (and I’d cooked lots of food!), and I had faith that, if the game was going to make me spend lots of time in a specific sort of hostile environment, then it would give me better tools for dealing with that environment. (Presumably by letting me purchase armor; on which note, by this time, I was starting to accumulate a decent amount of money, instead of feeling like my purse was always running empty.)

 

At this point, the game became magical, or at any rate changed the tone of its magic. I understood the range of basic experiences the game had to offer me, and I could now make an informed choice between them: combat isn’t my thing, so I didn’t have to focus on it (though admittedly letting my combat skills rust hurt me when it came to the end of the game), but I really liked exploration, so I could spend resources improving my stamina meter, wear my climbing gear, and climb all over the place.

And: what a place to climb, what a world to explore. The world of Zelda feels organically alive in a way that, in my experience, has almost no parallel; Shadow of the Colossus, perhaps, but I’m not sure what else I’ve played that gave me this feel. Every hill feels like it’s in the right place, every tree feels like of course there should be a tree there, every river, every mountain.

And, like Shadow of the Colossus, every ruin; but, unlike Shadow of the Colossus, of course, there’s quite a lot of life. In the wrong hands, peppering the landscape with activities would feel forced: villages placed because we need a plot hub or a side quest, resources popping up every few steps because we need crafting, and so forth. And, the thing is, Breath of the Wild has all of that, but somehow it works! I have no idea why the resource gathering didn’t anger me the way it did in Dragon Age Inquisition, but it didn’t; I have no idea why the Koroks felt like an exciting magical part of the natural world instead of artificial stimulus designed to mask the designers’ lack of confidence in the inherent interest of the world, but they did.

Hmm, actually, I probably answered that last question as I was in the process of writing it: the fact that the basic geography of the world is so well done means that embellishments don’t come off as covering up flaws, because they aren’t. I’m not going to go all Christopher Alexander here, but I suspect that thinking about the world as a natural geography that gives rise to centers that plants and animals (including intelligent beings) successively embellish makes those embellishments a source of joy, despite their instrumental nature.

The contours of hills, mountains, and water in turn lead to trees (sometimes working together as peers, sometimes standing out on their own as punctuation), grasslands, and yes, even mushrooms that you can use for your cooking. And that not only makes for a natural home for animals, it also means that Koroks fit in not as rewards for the player but as creatures who have a special appreciation for particularly wonderful parts of the geography, or who simply like to play around with the world around them. And, of course, humans and the other species fit in as well: their roads fitting in among the contours of the land, their bridges, their stables, their towns, and the ruins where they’d once had a flourishing society. With Shadow of the Colossus, we saw what would happen if you punctuate a living topographical landscape with a few high-impact centers; with Breath of the Wild, we see what happens if instead we have the centers be much more pervasive, at many more levels of scale.

 

I’ve never played a game like this; and I’ve certainly never played a Zelda game like this. Though, having said that: Zelda at its best has brought life to its worlds in ways that few other series can match. Ocarina of Time treated its landscape and its locations with love and care as well; Majora’s Mask brought out the living rhythms of a city. Breath of the Wild is remarkable in the scale of the living world that it presents, and in the way it proceeds by combining systems; but of course there’s a lot of authoring in Breath of the Wild, too, we’re not talking Minecraft here.

And, for that matter: there is one aspect of the authoring of Ocarina that I actively miss, namely its music. Every time I passed by a stable in Breath of the Wild, I felt at home, and that’s entirely due to the power of Ocarina’s music still going strong two decades later. I’m not saying that Breath of the Wild made the wrong choice to not emphasize music as much: that’s a natural fit for the less-authored experience that it presents, and its sound design is very good in its own way. It’s just a reminder that, while Breath of the Wild feels to me a lot like a local maximum in the design space, it’s not the only possible local maximum: there are other ways in which games can nourish my soul.

I’m very happy that Nintendo is showing this year that they remain experts at navigating design spaces, in ways that bring delight and sustenance. I’d been worried that the company was in decline, but no longer: now I’m just glad to have the privilege of experiencing their works.

free speech and responsibility

September 3rd, 2017

In Germany, it’s illegal to display Nazi symbols and symbols of similar nationalist parties, and it’s illegal to be a member of such organizations. Which, as an American growing up under the influence of current U.S. free speech law and under the ACLU’s defense of Nazis in the Skokie case, mostly seemed wrong to me.

This year, though, even before Charlottesville but especially after that, I’d been less sure that Germany’s approach was wrong. I like general rules like free speech absolutism; but we’re talking about banning Nazis here: do I seriously think that banning Nazis leads to worse outcomes than letting them march around?

 

Thinking about it a little more, I can come up with two basic arguments in favor of free speech absolutism. One is a belief in the power of the marketplace of ideas, combined with the existence of examples of ideas that I now support that were once considered morally and politically abhorrent by many. I very much think the Catholic Church was wrong to sentence Galileo for heresy; I’m not confident that I don’t have similar blinders myself, and I’m certainly not confident that our lawmakers don’t have similar blinders. And I do have some faith in the ability of humanity to move in a more moral direction; if you combine those two, then an absolutist approach to free speech looks pretty attractive.

The other argument is based on a combination of slippery slopes and power dynamics. If you ban X, then it’s tempting to ban things that are similar in some ways to X; and then my concern is what actually gets banned in practice starts to get strongly shaped by power dynamics. The concern here is that what starts as, say, a law against hate crimes against LGBT people turns into a law against negative speech based on sexual orientation turns into straight people using the law against gay people who say things that straight people don’t like. I can say that that’s ridiculous, that considerations about oppression have to take into account structural power dynamics; but those of us with structural power have a strong vested interest in not having such considerations at the fore.

 

Both arguments come with responsibilities. Yes, in general I think that good ideas drive out bad, but it’s not a passive process: people have to fight for the good ideas, fight against the bad ideas. So, if Germany were instead to have adopted a pure free speech approach, a moral imperative would come along with that: keep the horrors of the Nazi regime in the front of people’s minds, to make it harder for people to pretend that it’s a less objectionable form of nationalism. (And, as it turns out: Germany has done this as well, they’re covering their bases.)

Whereas, for the slippery slope argument, the onus is on the other side: can you draw a bright line to set off the ideas that are so bad that they’re considered beyond the pale, to prevent more and more ideas from getting banned? Here’s my best candidate for a bright line: an idea that led your country into a war that it lost, and that in retrospect you feel was horrific from a moral point of view, is worthy of consideration to be banned. Because there aren’t going to be many ideas like that, and any idea that satisfies that criterion has been seductive enough to be actively extremely dangerous, and hence a candidate for extra measures against it.

 

So: even though I’m still pretty sympathetic to free speech absolutism, I can’t convince myself that Germany has made a bad choice here: what’s the concrete harm that comes from their banning Nazi symbols? But, of course, I’m not German, I’m American. Should we make the same choice?

If we were to ban symbols, the argument above would mean that those symbols should represent something horrific in our past that led to a war that we lost. (To be clear, I’m not making an argument in this post that the US should ban Nazi symbols: I think that’s worth considering, too, but that’s a war that we won, and I’m rather more nervous about winners treating their victories as inherently moral than I am worried about losers using their losses as an opportunity for moral reflection.)

And there’s one obvious candidate, though of course it’s not a perfect fit for the above criterion, because, depending on who you think of as “we”, it’s a war that we both won and lost. (I grew up in the North, not the South.) Namely, the Confederacy, and symbols of similar white supremacist groups, e.g. the KKK. Slavery is the United States’ great moral stain, and its aftereffects are not just still being felt but are actively being propped up a century and a half later.

 

On the one side we have the First Amendment; I can imagine a version of the Fourteenth Amendment that would have taken a stronger stand against membership in white supremacist groups, but we didn’t make that choice. That means that this is a hypothetical argument, since our Constitution is on the side of free speech absolutism; and, as said above, that in turn imposes responsibilities.

Which, as a nation, we have failed abysmally in. The fact that I didn’t have to learn much about Reconstruction in school is, itself, a sign of that failing; my impression, though, is that Reconstruction had a lot going for it, but we gave up just over a decade into the process, and the South fell back very quickly into an extreme white supremacist society. Jim Crow continued until a full century after the Civil War ended; and, even now, we have a New Jim Crow with a shocking proportion of the African American population of the country under direct police control.

And, at the same time, explicitly pro-Confederate symbols and historiography are lamentably common. With that comes a recasting of the Civil War as being about “states’ rights”, without placing front and center the fact that the primary right that the Southern states were fighting for was the right to have slaves, a “right” which is in fact horrifically wrong.

Also, Trump voters are apparently significantly more likely to think that white Americans are more discriminated against than black Americans. Which is the slippery slope / power dynamic problem that I was worried about above; its presence here, though, suggests that I shouldn’t think about it primarily in the context of speech bans, because it’s happening anyways? Though I’m sure it would happen in the context of speech bans, too: so I guess it’s another argument for keeping the horrors of white supremacy present enough that we can’t sweep them under the rug.

 

We’ve fucked up as a nation, and are continuing to actively do so. And, at this point, I have no patience for arguments about freedom of speech and listening to all sides that treat this as an abstract question, divorced from our history and the ongoing effects of that history.

ipad orientations

August 13th, 2017

The iPad can be used in either portrait or landscape orientations. Different iPad interactions have different natural orientations: if the interaction involves video or (usually) images, then the natural orientation is landscape, because you want to fill up most of your field of vision. (So TVs are wider than they are tall.) But if it involves text, then the natural orientation is portrait, because that lets you focus on as much text as possible without requiring your eyes to scroll horizontally too much. (So books are taller than they are wide; and particularly wide text formats, like magazines and (especially) newspapers, frequently use multiple columns.)

That means that you might want to switch orientations depending on what you’re doing; Apple had the device switch orientations if you turned it on the side, but the initial iPad models also included a rotation lock switch for people who wanted a fixed orientation. As somebody who is interacting with text on my iPad the vast majority of times that I use it, I leave the rotation lock switch on (unless I’m watching a video): having the device switch to the wrong orientation when you hold it close to horizontal is REALLY FREAKING ANNOYING. Every once in a while, I try it with rotation unlocked; I usually last for about two days before giving up and going back.

Apple, however, decided that the switch wasn’t pulling its weight, so they got rid of it in recent iPad models. (There was also one period when they decided the rotation switch should act like a mute switch; that was just weird.) I assume this was at about the same time they added a control center with relatively easy access; and I agree, using the control center to turn off rotation lock isn’t horrible. But it’s more work than flipping a switch; also, I’m usually doing this when I start watching a movie, and that’s exactly when I don’t want something extraneous appearing on the screen. (Which Apple apparently doesn’t care about too much, as evidenced by the positioning and opacity of the iOS 7+ volume indicator.)

Nothing I can’t live with, but honestly: I think that, if the new iPad Pro models had added a rotation lock switch, that would have pushed me over the fence to buy one; I care about the rotation lock switch at least as much as most of the new features that they did in fact introduce.

 

More recently, Apple’s been improving its multitasking support for the iPad; and many multitasking features only work in landscape mode. And, with the iPad Pro models, they added a new keyboard connector; it’s on the long side of the device, which means that it only works with keyboards in landscape mode.

I can see why Apple made these choices: if you want to run two apps side-by-side, then you need horizontal room, and I can imagine people using the iPad for more serious work do need to do that. When I look at iPad-in-a-horizontal-case configurations, though, it just looks to me like a laptop; I’ve got a laptop, though, and that similarity just pulls me towards using a laptop. Whereas the iPad when held in my hand still feels different and magical to me: it’s a piece of paper that can turn into anything.

Which is fine, I guess? I still get lots of use out of my iPad as-is; and I imagine that, if I took up drawing, it would feel pretty magical doing that, too. So why worry about the fact that, when I’m typing, I’m drawn to a more traditional computer? And maybe that’s the answer.

But I’ve switched to a simpler text editor when writing blog posts; and that is a situation where the “magic sheet of paper” analogy feels to me like it would work well. And it’s a situation where I want to work in portrait mode: I want to see more rows of text in a narrower column rather than few rows of text in a wider column. (I don’t need side-by-side multitasking there, either; I occasionally switch to Safari to find a link, but I wouldn’t need Safari visible at the same time as a text editor.) I can even imagine that it would be useful to take the iPad off of the keyboard and hold it in my hand when editing, to have a physical shift that models the desired conceptual shift.

When writing blog posts, I am usually sitting in a chair, with the laptop on my lap; that could be an issue: in the past, the iPad keyboards that I’ve used haven’t really felt stable in that configuration. Maybe keyboard technology has improved since the last time I looked; but maybe that’s another sign that I should just stick with a laptop.

 

Or maybe I’m looking for a solution to something that’s not a problem: laptops work great for me for writing, iPads work great for me for reading. I just hope that Apple doesn’t keep on going farther in a direction that emphasizes landscape over portrait: Apple Maps has one design decision in particular that makes very little sense in portrait mode, which makes me worry that they just don’t care about portrait mode iPads these days, especially iPads that are locked in portrait mode instead of flipping orientation as you rotate them.

Then again, people like to worry about Apple not caring about this or that any more; most of those worries end up not happening, and most of the time, when they do come to pass, the outcome turns out to be better anyways. So I shouldn’t spend too much time worrying about it…

what remains of edith finch

August 6th, 2017

I wish I had something coherent to say about What Remains of Edith Finch: it’s a rather striking game, I just can’t put my finger on why?

Which, maybe, is a reflection of the game itself: it’s more a collection of little games than a single game itself, so why should I expect myself to be able to write about it coherently? We were talking about it last week in the VGHVI Symposium; coming in, if I’d thought about it much I would have labeled Edith Finch as a walking simulator, but once you get past the introduction, that label really doesn’t fit: the walking simulator part of it is a frame story, the internal games built on ancestors’ stories are foregrounded much more.

I actually wonder if the initial story is intended to explicitly play with that concept: Edith Finch isn’t a walking simulator, it’s a scampering-along-branches simulator, a flying simulator, a slithering simulator! (There are a lot of control schemes in the game.)

 

Another question which the first story explicitly asks is: how much of what you experience is real, how much is a hallucination or otherwise imagined? To be honest, that question is not entirely to my taste: I like works of art that don’t put boundaries between the realistic and the fantastic, and when confronted with such a work (Totoro, say), I take it as it is: it generally doesn’t cross my mind to even wonder how I should be interpreting the fantastic segments in light of the non-fantastical aspects of the world. Though that initial story is somewhat of an outlier in that regard in Edith Finch; I’m happy to see that story as a source of questions for people who want to approach the game in a mood of figuring out what really happened in the situations represented by the stories we see (and, for that matter, what really happened in the family outside of the stories), without emphasizing the question so much to people like me who aren’t in the mood to grapple with such questions.

Which reinforces my hypothesis from before: the game encourages an impressionistic approach, throwing off handholds that you can choose to grasp or to leave behind, that you can choose to link or to let stand alone.

 

To be clear, that doesn’t mean that there’s not real substance in Edith Finch. It touches on some pretty serious subjects; and some of those subjects are ones that, frankly, I’m not entirely sure I want to spend too much time confronting directly in art this summer. Sometimes, that means that I’m seeking out art works that avoid those topics; sometimes it means that I’m engaging with art works that confront them more directly and wishing that I hadn’t.

But Edith Finch’s more oblique approach has a real virtue for me: it approaches subjects lightly, making those subjects available should I choose to engage with them, but also letting me gracefully skirt around them as I choose, acknowledging their presence but letting me keep as much detachment as I wish.

 

It’s a very impressive second game. The Unfinished Swan had a neat mechanical idea at its core, but while I was glad that it was trying to approach a serious theme, I wasn’t so sure about the way it approached that theme or even the choice of them itself. Edith Finch shows that neither the mechanical inventiveness nor the desire to confront real issues was a fluke; with it, I think the studio is really starting to put something together.

open offices

July 31st, 2017

Over the last week, I saw several attacks on Apple’s new offices, responding to information from this Wall Street Journal article by Christina Passariello: a Six Colors article by Jason Snell; a Daring Fireball (John Gruber) link to Snell’s article plus a, uh, smug follow-up; and a take from Anil Dash.

What surprised me was the definitiveness with which these takes asserted that open offices are bad: for example, Dash says right up front in his headline that open offices are “something their programmers definitely don’t want”. And the reason why this surprised me is that the intellectual tradition about software development that I’ve found most informative comes to the polar opposite conclusion, that shared working space is good and individual offices are bad; and my personal experience also hasn’t backed up the idea that individual offices are clearly superior for programming. So, while I don’t expect everybody or even most people to agree with me either intellectually or in their lived experience, seeing multiple takes claiming that it’s obvious that the opposite view is correct was a reminder of how different the worlds are that different people live in.

But hey, maybe things have changed over the last fifteen years, or maybe I hadn't thought through the beliefs that underlie my assumptions. So I figured that this was a good excuse to write up where I'm coming from. Note, though, that I am (mostly) not saying that a) people are wrong to not prefer open offices, b) open offices are a good fit for Apple, or c) Apple is doing a good job with open offices. I'm mostly just interested in sketching out the assumptions behind the two points of view, to understand what is underpinning each of them.

 

With that preamble out of the way, I think this sentence from Snell’s piece is a good place to start:

Sometimes I think people who work in fields where an open collaborative environment makes sense don’t understand that people in other fields (writers, editors, programmers) might not share the same priorities when it comes to workspaces.

I’m not a professional writer or editor, but his statement there feels true to me for those fields; as a programmer, however, that statement felt bizarre. When programming, I’m working with a group of other people to produce a piece of software that I couldn’t come close to producing by myself and where I don’t want outsiders to be able to tell which parts were done by which people; to me, programming is a quintessentially collaborative field. (Yes, I realize that solo software projects exist, I’m not talking about those.) So why wouldn’t we want our environment to reflect that collaborative nature?

 

The software development methodology that I feel has worked this line of thought out the best is eXtreme Programming (XP). XP is very focused on breaking down boundaries within a team: for example, code is owned by all of the developers on the team instead of having individual developers own different parts of the code. XP also promotes fast feedback: short cycles even within your daily and weekly development rhythms, frequent releases, and frequent back-and-forth between the development side and the product side of the team.

There are a few reasons for the focus on shared ownership. One is that nobody has a monopoly on the best ideas, even in an area of the code that they know very well; so let everybody contribute. Another is that it allows ideas to pollinate, with an idea over here bearing fruit over there. A third is reducing risk: you can't reliably figure out in advance which ideas are going to really catch on, and if you want to be able to follow up on the successful ones, you want as many people as possible to be able to help; also, team composition changes, and you don't want to be screwed over if somebody leaves the team. (This is gruesomely known as maximizing your “Bus Number”: the largest number of people who could be hit by a bus and have your product survive.)

As to fast feedback: you don’t really know how a decision will turn out (whether a micro one, like a function name, or a macro one, like a new product feature) until the decision has borne fruit: so get to that state as quickly as possible! A key point here is that product development speed isn’t necessarily the best metric: going very quickly in the wrong direction, without being able to course correct for weeks, is going to turn out less well than going at a more measured pace but being able to course correct multiple times a day.

 

As a result of this, XP explicitly recommends that the entire team (not just programmers, product people as well!) sit in a common space. From a fast feedback point of view: you can get design feedback (whether from another programmer or from a product designer) most quickly if they are literally right there next to you. And yes, that level of proximity really does make a difference: any physical distance or lag in response time noticeably increases the chance that a programmer will go ahead with what makes the most sense to them instead of involving somebody else; I've seen this repeatedly.

And, from a shared ownership point of view: sitting together obviously has symbolic value. But it also means that there's no barrier to people working together impromptu as they discover that that's appropriate; and it means that the natural location for design artifacts (whiteboard scribbles and the like) is in a shared space. Also, overhearing conversations means that you'll learn something about code that you might be working on next week or even later in the afternoon; or you might overhear a conversation where you realize that you have something of value to contribute, and you can jump in.

 

The flip side of that ambient conversation is that it's noisy, and noise can make it hard to concentrate. One way that XP attacks this issue is through pair programming: it turns out that two people working together can tune out outside noise (while not completely disconnecting from their environment) better than one person working solo. Also, it turns out that two people, when interrupted, can get back up to full speed on their task more quickly than a single person can, because they can leverage both of their partial mental states.

And pair programming helps with the other goals that I mentioned above. It obviously helps with shared ownership, not only by making a symbolic statement but by giving a high-bandwidth route for knowledge sharing. It even helps in a more subtle way: one surprise that I had when I first started pair programming was that, when working with somebody else, when we got to a thorny bit, it would take us about 10 minutes to say “we should ask X for advice on this” in a situation where, when working alone, I'd probably have banged my head against that same issue for an hour. And, as to fast feedback: the fastest feedback is from somebody who is in the thick of the problem with you, and pairing largely eliminates the need for a separate code review step, because code reviews are instantaneous.

There are other XP techniques that help with working in shared spaces, too: I'll call out test-driven development in particular as helping minimize the negative impact of interruptions, because it encourages you to work in a way where, at any given point, you have one very clearly stated next micro-problem that you're trying to solve (see the sketch below).
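To make that concrete, here's a minimal sketch of the TDD rhythm, in Python; the slugify function and its tests are a hypothetical example of mine, not anything from the XP literature. The idea is that the code only ever grows one small test at a time:

    import unittest

    # Hypothetical example: a tiny function grown via test-driven
    # development. Each test below began life as the single failing
    # test of the moment, and the function only ever grew just enough
    # to make the newest test pass.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    class SlugifyTest(unittest.TestCase):
        def test_lowercases(self):
            self.assertEqual(slugify("Hello"), "hello")

        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual(slugify("open offices"), "open-offices")

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slugify("  edith finch "), "edith-finch")

    if __name__ == "__main__":
        unittest.main()

If you're interrupted mid-cycle, the suite's one failing test is a complete statement of where you left off; rerunning the tests tells you exactly which micro-problem you were in the middle of.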

 

XP is a couple of decades old at this point, but I don’t think anything I’ve written above is less applicable now than it was when XP was being created. And, in terms of newer software development trends, I want to call out DevOps: more and more of us are working in a world of cloud software operated by the same teams that are developing it.

And the last thing that I want in a DevOps world is individual code ownership, with people working in isolated offices. In those (hopefully rare!) situations where something is going wrong, I want as many people as possible to swarm on the problem, attacking it in meaningful ways from different points of view, getting it fixed as quickly as possible. And it’s really hard to do that if those same people haven’t all worked together on the software in meaningful ways in non-crisis modes.

Also, from a personal point of view: if I’m on vacation, I want to be on vacation, which means that the last thing that I want to have happen is for me to be the only person who can fix a problem in a piece of code. (Or, if it’s somebody else on vacation, the last thing I want to do is to have to choose between a bad situation for our customers versus my coworker having their vacation interrupted!) I strongly advocate against individual ownership in a DevOps situation, and shared space is really helpful.

 

So, to my mind, that’s what open offices are optimizing for: collective ownership and fast feedback. Whereas individual offices are optimizing for concentration: the ability to get into flow, and the ability to hold complex problems in your head at once.

And those are obviously good things! But I don't see them as unalloyed goods. Flow is great, it helps you work at high speed; the main question I have is whether that high speed has you going in the best direction. (And this is an area where pair programming helps as well: pair flow is a thing.) And, if you're working on something that's inherently complex, then yeah, you want to be able to hold it in your head; but better still to get that task done while making it less complex, which is where incremental progress, test-driven development, and refactoring come in.

At any rate: I think that both points of view are coherent ones, and can be carried off well. As a development team, pick what you want to optimize for; as an individual, pick what matters most for you; and then make it work in the context you’re in. You don’t have to carry out either plan in all of its force for it to work, either: for example, while in general both theoretically and in my lived experience I prefer the XP ideas, the truth is that I’ve spent very little time pair programming over the course of my career, and it’s been okay, I’ve still gotten a lot out of shared ownership, incremental development, test-driven development, etc. (And I’m open to the possibility that I would be a more effective programmer if I spent more time pair programming.)

 

A postscript on the Apple-specific questions here. First, I have no idea if Apple is doing a good job with their open offices; looking at the pictures, I can see spaces that look like they’re plausibly a good size for a single development team, but who knows, and I also don’t know whether those glass walls would mean that you’re constantly being distracted by other teams or if they would end up a welcome source of light. And I have no idea how representative the few photos in that Wall Street Journal article are of the campus as a whole.

In terms of Apple’s culture: I’ve never worked there or spent a lot of time talking to people who do work there, so I have the farthest thing from an informed opinion; Snell and Gruber have a lot more info there. (Though at least I do work as a programmer, not as a writer!) But, honestly, I’m dubious of open offices succeeding as a general rule in Apple’s development culture: this is the company that publicized the notion of Directly Responsible Individual, which is pretty much the opposite of the collective ownership approach that leads to open offices. (And I’ve heard multiple anecdotes about specific pieces of software been written by individuals, too.)

So if I were in that sort of culture, and if I knew that my neck was on the line for some specific piece of code, then yeah, I might want to spend time in my office working on that code instead of talking to other people: it might not turn out as well, I might make mistakes without realizing it, but they'd be my mistakes. And I wouldn't be able to help other people as much; that would make me sad. All things being equal, then, I'd prefer not to work at a company that loves the idea of DRIs, so I might sort myself out of Apple.

I am curious how much of the above still holds in current Apple, though. For one thing, Tim Cook seems a lot more focused on collaboration than Steve Jobs seemed to have been; maybe that's filtered down through the company. (Though I haven't heard about the DRI concept going away.) For another thing, Apple's software has changed with the times: they run a lot more services than they used to (which, as per my DevOps comments above, says to me that shared ownership is the right approach), and clearly their OS development is much more incremental than it was a decade ago, with a regular yearly cadence and with significant changes appearing even in point releases. So it wouldn't shock me if there are increasing numbers of software development teams within the company that prefer open working spaces.

 

Some Twitter thoughts from others that struck me: