
ipad orientations

August 13th, 2017

The iPad can be used in either portrait or landscape orientations. Different iPad interactions have different natural orientations: if the interaction involves video or (usually) images, then the natural orientation is landscape, because you want to fill up most of your field of vision. (So TVs are wider than they are tall.) But if it involves text, then the natural orientation is portrait, because that lets you focus on as much text as possible without requiring your eyes to scroll horizontally too much. (So books are taller than they are wide; and particularly wide text formats, like magazines and (especially) newspapers, frequently use multiple columns.)

That means that you might want to switch orientations depending on what you’re doing; Apple had the device switch orientations if you turned it on the side, but the initial iPad models also included a rotation lock switch for people who wanted a fixed orientation. As somebody who is interacting with text on my iPad the vast majority of times that I use it, I leave the rotation lock switch on (unless I’m watching a video): having the device switch to the wrong orientation when you hold it close to horizontal is REALLY FREAKING ANNOYING. Every once in a while, I try it with rotation unlocked; I usually last for about two days before giving up and going back.

Apple, however, decided that the switch wasn’t pulling its weight, so they got rid of it in recent iPad models. (There was also one period when they decided the rotation switch should act like a mute switch; that was just weird.) I assume this was at about the same time they added a control center with relatively easy access; and I agree, using the control center to turn off rotation lock isn’t horrible. But it’s more work than flipping a switch; also, I’m usually doing this when I start watching a movie, and that’s exactly when I don’t want something extraneous appearing on the screen. (Which Apple apparently doesn’t care about too much, as evidenced by the positioning and opacity of the iOS 7+ volume indicator.)

Nothing I can’t live with, but honestly: I think that, if the new iPad Pro models had added a rotation lock switch, that would have pushed me over the fence to buy one: I care about the rotation lock switch at least as much as most of the new features that they did in fact introduce.

 

More recently, Apple’s been improving its multitasking support for the iPad; and many multitasking features only work in landscape mode. And, with the iPad Pro models, they added a new keyboard connector; it’s on the long side of the device, which means that it only works with keyboards in landscape mode.

I can see why Apple made these choices: if you want to run two apps side-by-side, then you need horizontal room, and I can imagine people using the iPad for more serious work do need to do that. When I look at iPad-in-a-horizontal-case configurations, though, it just looks to me like a laptop; I’ve got a laptop, though, and that similarity just pulls me towards using a laptop. Whereas the iPad when held in my hand still feels different and magical to me: it’s a piece of paper that can turn into anything.

Which is fine, I guess? I still get lots of use out of my iPad as-is; and I imagine that, if I took up drawing, it would feel pretty magical doing that, too. So why worry about the fact that, when I’m typing, I’m drawn to a more traditional computer? And maybe that’s the answer.

But I’ve switched to a simpler text editor when writing blog posts; and that is a situation where the “magic sheet of paper” analogy feels to me like it would work well. And it’s a situation where I want to work in portrait mode: I want to see more rows of text in a narrower column rather than fewer rows of text in a wider column. (I don’t need side-by-side multitasking there, either; I occasionally switch to Safari to find a link, but I wouldn’t need Safari visible at the same time as a text editor.) I can even imagine that it would be useful to take the iPad off of the keyboard and hold it in my hand when editing, to have a physical shift that models the desired conceptual shift.

When writing blog posts, I am usually sitting in a chair, with the laptop on my lap; that could be an issue: in the past, iPad keyboards that I’ve used haven’t really felt stable in that configuration. Maybe keyboard technology has improved since the last time I looked; but maybe that’s another sign that I should just stick with a laptop.

 

Or maybe I’m looking for a solution to something that’s not a problem: laptops work great for me for writing, iPads work great for me for reading. I just hope that Apple doesn’t keep on going farther in a direction that emphasizes landscape over portrait: Apple Maps has one design decision in particular that makes very little sense in portrait mode, which makes me worry that they just don’t care about portrait mode iPads these days, especially iPads that are locked in portrait mode instead of flipping orientation as you rotate them.

Then again, people like to worry about Apple not caring about this or that any more; most of those worries end up not happening, and most of the time, when they do come to pass, the outcome turns out to be better anyways. So I shouldn’t spend too much time worrying about it…

what remains of edith finch

August 6th, 2017

I wish I had something coherent to say about What Remains of Edith Finch: it’s a rather striking game; I just can’t put my finger on why.

Which, maybe, is a reflection of the game itself: it’s more a collection of little games than a single game, so why should I expect myself to be able to write about it coherently? We were talking about it last week in the VGHVI Symposium; coming in, if I’d thought about it much I would have labeled Edith Finch as a walking simulator, but once you get past the introduction, that label really doesn’t fit: the walking simulator part of it is a frame story; the internal games built on the ancestors’ stories are foregrounded much more.

I actually wonder if the initial story is intended to explicitly play with that concept: Edith Finch isn’t a walking simulator, it’s a scampering-along-branches simulator, a flying simulator, a slithering simulator! (There are a lot of control schemes in the game.)

 

Another question which the first story explicitly asks is: how much of what you experience is real, how much is a hallucination or otherwise imagined? To be honest, that question is not entirely to my taste: I like works of art that don’t put boundaries between the realistic and the fantastic, and when confronted with such a work (Totoro, say), I take it as it is: it generally doesn’t cross my mind to even wonder how I should be interpreting the fantastic segments in light of the non-fantastical aspects of the world. Though that initial story is somewhat of an outlier in that regard in Edith Finch; I’m happy to see that story as a source of questions for people who want to approach the game in a mood of figuring out what really happened in the situations represented by the stories we see (and, for that matter, what really happened in the family outside of the stories), without pressing the question too hard on people like me who aren’t in the mood to grapple with it.

Which reinforces my hypothesis from before: the game encourages an impressionistic approach, throwing off handholds that you can choose to grasp or to leave behind, that you can choose to link or to let stand alone.

 

To be clear, that doesn’t mean that there’s not real substance in Edith Finch. It touches on some pretty serious subjects; and some of those subjects, frankly, are ones that I’m not entirely sure I want to spend too much time confronting directly in art this summer. Sometimes, that means that I’m seeking out art works that avoid those topics; sometimes it means that I’m engaging with art works that confront them more directly and wishing that I hadn’t.

But Edith Finch’s more oblique approach has a real virtue for me: it approaches subjects lightly, making those subjects available should I choose to engage with them, but also letting me gracefully skirt around them as I choose, acknowledging their presence but letting me keep as much detachment as I wish.

 

It’s a very impressive second game. The Unfinished Swan had a neat mechanical idea at its core, but while I was glad that it was trying to approach a serious theme, I wasn’t so sure about the way it approached that theme or even the choice of theme itself. Edith Finch shows that neither the mechanical inventiveness nor the desire to confront real issues was a fluke; with it, I think the studio is really starting to put something together.

open offices

July 31st, 2017

Over the last week, I saw several attacks on Apple’s new offices, responding to information from this Wall Street Journal article by Christina Passariello: a Six Colors article by Jason Snell; a Daring Fireball (John Gruber) link to Snell’s article plus a, uh, smug follow-up; and a take from Anil Dash.

What surprised me was the definitiveness with which these takes asserted that open offices are bad: for example, Dash says right up front in his headline that open offices are “something their programmers definitely don’t want”. And the reason why this surprised me is that the intellectual tradition about software development that I’ve found most informative comes to the polar opposite conclusion, that shared working space is good and individual offices are bad; and my personal experience also hasn’t backed up the idea that individual offices are clearly superior for programming. So, while I don’t expect everybody or even most people to agree with me either intellectually or in their lived experience, seeing multiple takes claiming that it’s obvious that the opposite view is correct was a reminder of how different the worlds are that different people live in.

But hey, maybe things have changed over the last fifteen years, or maybe I hadn’t thought through the beliefs that underlie my assumptions. So I figured that it’s a good excuse to write up where I’m coming from. Note, though, I am (mostly) not saying that a) people are wrong to not prefer open offices, b) open offices are a good fit for Apple, or c) Apple is doing a good job with open offices. I’m mostly just interested in sketching out the underlying assumptions behind the two points of view, to understand what is underpinning each of them.

 

With that preamble out of the way, I think this sentence from Snell’s piece is a good place to start:

Sometimes I think people who work in fields where an open collaborative environment makes sense don’t understand that people in other fields (writers, editors, programmers) might not share the same priorities when it comes to workspaces.

I’m not a professional writer or editor, but his statement there feels true to me for those fields; as a programmer, however, that statement felt bizarre. When programming, I’m working with a group of other people to produce a piece of software that I couldn’t come close to producing by myself and where I don’t want outsiders to be able to tell which parts were done by which people; to me, programming is a quintessentially collaborative field. (Yes, I realize that solo software projects exist, I’m not talking about those.) So why wouldn’t we want our environment to reflect that collaborative nature?

 

The software development methodology that I feel has worked this line of thought out the best is eXtreme Programming (XP). XP is very focused on breaking down boundaries within a team: for example, code is owned by all of the developers on the team instead of having individual developers own different parts of the code. XP also promotes fast feedback: short cycles even within your daily and weekly development rhythms, frequent releases, and frequent back-and-forth between the development side and the product side of the team.

There are a few reasons for the focus on shared ownership. One is that nobody has a monopoly on the best ideas, even in an area of the code that they know very well; so let everybody contribute. Another is that it allows ideas to pollinate, with an idea over here bearing fruit over there. A third is reducing risk: you can’t reliably figure out in advance which ideas are going to really catch on, and if you want to be able to follow up on the successful ones, you want as many people as possible to be able to help; also, team composition changes, and you don’t want to be screwed over if somebody leaves the team. (This is gruesomely known as maximizing your “Bus Number”: the largest number of people who could be hit by a bus and have your product survive.)
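To make the bus-number idea concrete, here’s a toy sketch (entirely made-up names and data, just for illustration): for each area of the code, track who knows it well enough to maintain it, and the least-known area bounds how many departures the team can survive.

```python
# Toy bus-number sketch. The bus number is bounded by the area of the
# code that the fewest people know: lose everyone who knows that area
# and the product can no longer be maintained.

knowledge = {
    "billing":  {"alice", "bob"},
    "search":   {"alice", "bob", "carol"},
    "frontend": {"carol"},  # only one person knows this!
}

# Smallest number of departures that could leave some area unmaintained.
bus_number = min(len(owners) for owners in knowledge.values())
assert bus_number == 1  # losing carol alone orphans "frontend"
```

Shared ownership, in these terms, is the practice of pushing that minimum up: making sure every area has multiple names in its set.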

As to fast feedback: you don’t really know how a decision will turn out (whether a micro one, like a function name, or a macro one, like a new product feature) until the decision has borne fruit: so get to that state as quickly as possible! A key point here is that product development speed isn’t necessarily the best metric: going very quickly in the wrong direction, without being able to course correct for weeks, is going to turn out less well than going at a more measured pace but being able to course correct multiple times a day.

 

As a result of this, XP explicitly recommends that the entire team (not just programmers, product people as well!) sit in a common space. From a fast feedback point of view: you can get design feedback (whether from another programmer or from a product designer) most quickly if they are literally right there next to you. And yes, that level of proximity really does make a difference: any physical distance or lag in response time noticeably increases the chance that a programmer will go ahead with what makes the most sense to them, instead of involving somebody else; I’ve seen this repeatedly.

And, from a shared ownership point of view: sitting together obviously has symbolic value. But it also means that there’s no barrier to people working together impromptu as they discover that that’s appropriate; and it means that the natural location for design artifacts (whiteboard scribbles and the like) is in a shared space. Also, overhearing conversations means that you’ll learn something about code that you might be working on next week or even later in the afternoon; or you might overhear a conversation where you realize that you have something of value to contribute, and you can jump in.

 

The flip side of that ambient conversation is that it’s noisy, it can make it hard to concentrate. One way that XP attacks this issue is through pair programming: it turns out that two people working together can tune out outside noise (while not completely disconnecting from their environment) better than one person working solo. Also, it turns out that two people, when interrupted, can get back to full speed on their task more quickly than a single person can, because they can leverage both of their partial mental states.

And pair programming helps with the other goals that I mentioned above. It obviously helps with shared ownership, not only by making a symbolic statement but by giving a high-bandwidth route for knowledge sharing. It even helps in a more subtle way: one surprise that I had when I first started pair programming was that, when working with somebody else, when we got to a thorny bit, it would take us about 10 minutes to say “we should ask X for advice on this” in a situation where, when working alone, I’d probably bang my head against that same issue for an hour. And, as to fast feedback: the fastest feedback is from somebody who is in the thick of the problem with you, and pairing largely eliminates the need for a separate code review step because code reviews are instantaneous.

There are other XP techniques that help with working in shared spaces, too: I’ll call out test-driven development in particular as helping minimize the negative impacts of interrupts, because it encourages you to work in a way where, at any given point, you have one very clearly stated next micro-problem that you’re trying to solve.
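As a sketch of that rhythm (a made-up example, not anything from a real project): at any given moment the one failing assertion is the clearly stated micro-problem, so after an interruption you rerun the tests and pick up exactly where you left off.

```python
# Test-driven rhythm in miniature: each assertion below was, at some
# point, the single failing test driving the next small step of the
# implementation.

def slugify(title):
    # Just enough implementation to satisfy the tests written so far.
    return title.strip().lower().replace(" ", "-")

assert slugify("Open Offices") == "open-offices"
assert slugify("  iPad Orientations ") == "ipad-orientations"
```

The point isn’t the (trivial) function; it’s that a freshly written failing test is a bookmark, which is exactly what you want in a noisy shared space.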

 

XP is a couple of decades old at this point, but I don’t think anything I’ve written above is less applicable now than it was when XP was being created. And, in terms of newer software development trends, I want to call out DevOps: more and more of us are working in a world of cloud software operated by the same teams that are developing it.

And the last thing that I want in a DevOps world is individual code ownership, with people working in isolated offices. In those (hopefully rare!) situations where something is going wrong, I want as many people as possible to swarm on the problem, attacking it in meaningful ways from different points of view, getting it fixed as quickly as possible. And it’s really hard to do that if those same people haven’t all worked together on the software in meaningful ways in non-crisis modes.

Also, from a personal point of view: if I’m on vacation, I want to be on vacation, which means that the last thing that I want to have happen is for me to be the only person who can fix a problem in a piece of code. (Or, if it’s somebody else on vacation, the last thing I want to do is to have to choose between a bad situation for our customers versus my coworker having their vacation interrupted!) I strongly advocate against individual ownership in a DevOps situation, and shared space is really helpful.

 

So, to my mind, that’s what open offices are optimizing for: collective ownership and fast feedback. Whereas individual offices are optimizing for concentration: the ability to get into flow, and the ability to hold complex problems in your head at once.

And those are obviously good things! But I don’t see them as unalloyed good. Flow is great, it helps you work at high speed; the main question I have is whether that high speed has you going in the best direction. (And also, this is an area where pair programming helps as well: pair flow is a thing.) And, if you’re working on something that’s inherently complex, then yeah, you want to be able to hold it in your head; but better still to get that task done while making it less complex, which is where incremental progress, test-driven development, and refactoring come in.

At any rate: I think that both points of view are coherent ones, and can be carried off well. As a development team, pick what you want to optimize for; as an individual, pick what matters most for you; and then make it work in the context you’re in. You don’t have to carry out either plan in all of its force for it to work, either: for example, while in general both theoretically and in my lived experience I prefer the XP ideas, the truth is that I’ve spent very little time pair programming over the course of my career, and it’s been okay, I’ve still gotten a lot out of shared ownership, incremental development, test-driven development, etc. (And I’m open to the possibility that I would be a more effective programmer if I spent more time pair programming.)

 

A postscript on the Apple-specific questions here. First, I have no idea if Apple is doing a good job with their open offices; looking at the pictures, I can see spaces that look like they’re plausibly a good size for a single development team, but who knows, and I also don’t know whether those glass walls would mean that you’re constantly being distracted by other teams or if they would end up a welcome source of light. And I have no idea how representative the few photos in that Wall Street Journal article are of the campus as a whole.

In terms of Apple’s culture: I’ve never worked there or spent a lot of time talking to people who do work there, so I have the farthest thing from an informed opinion; Snell and Gruber have a lot more info there. (Though at least I do work as a programmer, not as a writer!) But, honestly, I’m dubious of open offices succeeding as a general rule in Apple’s development culture: this is the company that publicized the notion of Directly Responsible Individual, which is pretty much the opposite of the collective ownership approach that leads to open offices. (And I’ve heard multiple anecdotes about specific pieces of software being written by individuals, too.)

So if I were in that sort of culture, and if I knew that my neck was on the line for some specific piece of code, then yeah, I might want to spend time in my office working on that code instead of talking to other people: it might not turn out as well, I might make mistakes without realizing it, but they’d be my mistakes. And I wouldn’t be able to help other people as much; that would make me sad. So, all things being equal, I’d prefer not to work at a company like Apple that loves the idea of DRIs; I might sort myself out of Apple.

I am curious how much the above still holds in current Apple, though. For one thing, Tim Cook seems a lot more focused on collaboration than Steve Jobs seemed to have been; maybe that’s filtered down through the company. (Though I haven’t heard about the DRI concept going away.) For another thing, Apple’s software has changed with the times: they run a lot more services than they used to (which, as per my DevOps comments above, says to me that shared ownership is the right approach), and clearly their OS development is much more incremental than it was a decade ago, with a regular yearly cadence and with significant changes appearing even in point releases. So it wouldn’t shock me if there are increasing numbers of software development teams within the company that prefer open working spaces.

 


monument valley 2

July 19th, 2017

Monument Valley 2 is basically just what you would expect from a Monument Valley sequel, with the added twist that the second character that they’ve added to it has the most amazingly charming movement animation that I’ve ever seen. I could try to go on about it for hundreds of words, but ultimately: if you haven’t played the first one, probably play it first; if you have and liked it, play this one too!

acupuncture

July 13th, 2017

Miranda has had very bad migraines for much of this year. We’re not sure why they’ve gotten so much worse / more frequent this year, and the initial treatments her doctors prescribed were almost completely ineffective, and in some cases may have made some aspects of the situation worse. Eventually, we found a medication which led to a more noticeable improvement (though also with a more noticeable side effect), but even with that we’ve gotten pretty desperate for effective treatment options.

So Miranda wanted to try out acupuncture. I wasn’t against giving acupuncture a try, given how ineffective Western medicine had been so far, and when I mentioned the acupuncture in a couple of random conversations, I got surprisingly strong positive reactions: people saying “I had serious migraines, and acupuncture made a big difference.” So I asked my fellow Tai Chi students for recommendations (since I figured they’d be more likely to have taken acupuncture than other social groups I’m part of), and made an appointment with one of the recommended acupuncturists.

 

As somebody raised by scientists, I do not feel entirely comfortable with this. Though, as somebody with a fondness for mysticism (and as somebody who is the sort of person who would take Tai Chi classes), there’s a part of me that’s favorably inclined towards this sort of thing; but that, in turn, raises my scientist part’s alarm bells even more, because it suggests that it will be harder for me to evaluate acupuncture dispassionately.

Now, when I say that I don’t feel entirely comfortable, that doesn’t mean that I think that trying out acupuncture is an actively bad idea. As far as I can tell, acupuncture is unlikely to be harmful, and Western medicine has done a pretty bad job treating her migraines so far: so it’s not like I’m comparing acupuncture against a treatment with solid experimental evidence for its effectiveness. And I also don’t feel like I have a super strong reason to believe that acupuncture shouldn’t be effective: we’re not talking homeopathy here. But I still do want to try to figure out how I should evaluate the acupuncture treatments, what I should look for.

And, of course, what makes that evaluation particularly difficult/interesting is that, in a Kuhnian sense, Miranda’s acupuncturist is working in a different paradigm than Miranda’s other doctors use, or than I’m comfortable with. (I get some experience with a Traditional Chinese Medicine paradigm in my Tai Chi classes, but I don’t feel that I understand it at all well even as an outside observer, and I’m certainly not able to act natively within it.) On the one hand, that makes me inclined to treat Traditional Chinese Medicine with more respect, with an assumption that there’s something to the richness of the paradigm, even if it’s wrong in some aspects. But, on the other hand, I’m not sure that’s a justified assumption at all: I also have the belief that there’s nothing of significant value in pre-Copernican astronomy, even though it’s a paradigm that was developed over the course of centuries! At any rate, the difference in paradigms raises the question of how I’ll be able to tell things that seem wrong because they’re from a foreign paradigm apart from things that seem wrong because they don’t work; or, for that matter, things that seem right because they’re explained in terms of a rich conceptual framework apart from things that seem right because they work.

 

Question zero, then: can we see any concrete effect at all from her acupuncture treatment? There was actually a surprising effect during her first session, namely that Miranda’s hands were a lot warmer. Which is enough to disprove a null hypothesis, but not directly relevant to her therapeutic goals; so next we turn to question one, whether we see an effect on headaches. And there, too, we have an answer: she also had something of a migraine that day (not a horrible one, but definitely noticeable), and her headache decreased significantly over the course of that first treatment, beyond what Miranda was used to from chance variation.

So that was a real success. The other interesting aspect of the session was what all it entailed: I’m not sure exactly what I expected, but I’d assumed that it would basically just be needles. It wasn’t, though: the acupuncturist had some neck exercises that he wanted Miranda to do while she was getting the treatment (and he encouraged her to do those outside of treatment, too, saying they would lessen the pain even without acupuncture), and he also did some physical manipulations with her arms as well.

 

That physical therapy aspect made me actively happy to continue. Because one aspect of current Western medicine that I don’t feel entirely comfortable with is its focus on pills and similar techniques: it feels to me like the (laudable) focus on experimental evidence for techniques imposes a bias that makes doctors less likely to focus on other techniques, techniques where it is harder to gather crisp experimental evidence.

So, while I’m happy for Miranda to keep on taking pills, I also don’t have a particular reason to believe that a chemical approach is the sole potential route to success; and if her acupuncturist is not only taking a completely foreign route (acupuncture) but pairing that with a different, less foreign route (physical adjustments), then that feels like it should increase the odds that, somewhere across all of the approaches we’re taking, we’ll find a treatment that works.

 

In that first session, Miranda’s acupuncturist was focusing on her neck, specifically on one of the vertebrae there, and that focus has continued: there’s something around one of the vertebrae that he thinks is enlarged in a way that causes problems, and he’s adopting techniques to try to shrink it. Which sounds totally plausible to me: I can easily translate vertebra problems into ideas like a nerve being pinched or blood flow being constricted, and I can imagine that that could affect migraines. (Admittedly, maybe I’m overindexing on spinal issues because of my own back problems.)

But I also just like seeing a combination of repeatedly focusing on one metric and seeing short-term pain relief actually result from the techniques that he’s using and recommending there. (Miranda reports that doing the neck exercises helps moderate the strength of headaches outside of acupuncture, too.) I like this because it gives a testable hypothesis; and I like that it gives me hope that it could provide long-term relief instead of just short-term relief, because if this vertebra hypothesis is correct and if he can shrink whatever he’s looking for there and have it stay shrunk then that should help the pain long term. (Which would give a positive answer to question two: does the treatment reduce headaches over the long term, not just the short term?)

 

The neck treatments are easiest for me to accept within my conceptual framework. But a lot of the acupuncture needles aren’t actually in her neck (in fact, I don’t think that normally any are there, though I can’t remember for sure): they’re on the top of her head, on her feet, on her back, or on her hands, and they’re in different places from week to week. When I asked about this, her acupuncturist explained it in terms of creating a path for the qi to flow, if I’m remembering correctly; I’m honestly not entirely sure what he was looking for to decide which pathways to enable which times.

And this gets back to the concept of working within different paradigms: clearly he’s working within a different one than Miranda’s western doctors. There are a few possibilities here:

  • The differences in needle positioning from week to week are all for show.
  • The differences are for a reason, but not a well-thought-out one.
  • The differences are a manifestation of his expertise within his paradigm, but that paradigm isn’t an effective one.
  • The differences are a manifestation of his expertise within his paradigm, and that paradigm is an effective one.

I could be wrong, but I’m fairly sure that the first two of these are not what’s going on: her acupuncturist does seem to me like he is an expert, I think we were probably fairly lucky to have found him.

It’s harder for me to decide between the third and fourth explanation. I would like to believe in the idea of qi; but I also have a hard time figuring out how there could be a concept like that that doesn’t map directly to some sort of standard Western medical concept (e.g. blood flow) and that we haven’t figured out how to make machines that can detect it.

So I can’t really justify that fourth explanation; we’ll see if, once I’ve done more Tai Chi, I’ll have had more experiences that cause me to believe in qi as a useful analytical concept, though.

I guess there’s a bifurcation of the third explanation, though: it could be that the paradigm is incorrect but effective? (Which, I guess, would mean that the qi explanation is wrong but he’s still doing something useful in putting the needles in different places at different times, and for deep reasons rather than just because, say, variation is effective no matter the details of that variation.) I’ll have to think about that more, though: it may be that saying “the paradigm is incorrect but effective” actually just means “the paradigm is an accurate paradigm but people outside the paradigm don’t understand it”. And that, after all, is the point of a paradigm: you need to shift into the paradigm to be able to understand it!

 

Ultimately, I just hope that the acupuncture is effective, because I really want Miranda’s migraines to become and stay manageable. Or rather, I hope that one of the treatments is effective; even if that ends up being the case, we probably won’t be able to tell which one made the difference (or if none would have made a significant difference alone and a combination was necessary). It would be nice to have a better idea of whether acupuncture works, but we’re not in a situation where we have the luxury of taking an approach designed to maximize scientific learning.

mass effect: andromeda

July 5th, 2017

Mass Effect: Andromeda starts off by dropping you into the middle of the action. You’re part of a large-scale colonization mission in a new galaxy, things have gone wrong upon arrival, you’re part of a team sent out to investigate, and your ship has crashed. This sort of opening is one of the series’ strengths: each game alternates between active sections and quiet sections, and when it’s on, it’s on.

The game puts on the brakes right after that beginning, though. You immediately learn a scanning mechanism, and so, instead of dealing with the consequences of the crash, you’re stopping constantly to look around in your environment. Which slows things down, but at least it does so in a way that fits into the narrative context: it’s your first time on a planet in a new galaxy, so of course you’re going to look around! And it’s not so far out of character for the series: Mass Effect games have always been dumping facts into your codex, so while it’s more extreme here than in prior games, it’s a difference in degree instead of in kind.

One of the codex entries that you have the option of reading discusses first contact protocols: you’re supposed to do everything you can to avoid shooting at new species that you meet. And, sure enough, a few minutes later, you meet an alien in a tension-ridden setting.

At which point the first contact moralizing goes out of the window: your options turn into shoot first or else let them shoot first and return fire immediately. Which wasn’t surprising: you’re playing an action series, it’s pulpy, it’s a lot more in character for the series to introduce an alien species that’s mysteriously evil than to build a game where finding ways to avoid shooting was a real possibility. I’d like to see a game that seriously grapples with that sort of First Contact question, but I don’t expect Mass Effect to be that game.

 

Equally unsurprising but more disappointing was what that intro mission gave me next: the alternate objectives. If this were from the original Mass Effect trilogy, you would have had a straight shot through the level, and the game would have used that to excellent narrative effect. But this is the BioWare that made Dragon Age: Inquisition; that meant an open world map, multiple objectives to choose from, and some of those objectives in active conflict with the dramatic direction of the level.

I thought about skipping the objectives that didn’t fit into the flow: I didn’t trust the game to give me alternate objectives without losing the flow of the level, and I figured that I’d have another chance to explore the world later. Ultimately, though, most of the objectives were close enough to my path that I completed them. And, as it turns out, you actually can’t return to that initial world; I have no idea why.

 

I liked the game as a whole more than the initial segment; but this game really is not what I’m looking for in a Mass Effect game. Specifically:

  • They’ve added back in a manually managed inventory, and made it worse by sticking in a crafting mechanism.

The environments aren’t quite as littered with objects to collect for crafting as Dragon Age: Inquisition’s were, but it’s pretty bad: the scanning (which feeds into the research part of crafting) is a constant distraction, and there are ores to gather to feed into the construction part of crafting. And, of course, there’s an augmentation slot aspect of crafting, so you can’t even do a straightforward survey of weapon types and focus on the ones that fit your playstyle; there’s a significantly more constrained resource on top of that.

  • The ability usage in combat was surprisingly restrictive.

They give you full access to the ability tree; there are some nudges towards limited specialization, but that’s fine, it still sounds like an improvement. Except that then there’s the way you use the abilities in combat: unless I’m missing something, you don’t have easy access to all your abilities during a fight; you have to put your abilities into loadouts, so for any given fight you can only easily get to three of them. So, in practice, what that means for people who don’t really want to dive into combat is that we pick our three favorite abilities and never use any others. That’s a disappointment: I don’t want to become an expert in the combat system, but I’d like to play around with it more than that; instead, a system that could have been freeing compared to earlier games turned out to feel more limiting.

  • The world building was way too pulpy.

I don’t expect a Mass Effect game to be the most subtle in terms of the questions it asks, but Andromeda is a step down, and the First Contact question that the opening sequence fails to grapple with is at the core of that. You’re exploring a new galaxy, trying to build a home there with no backup; so if you run into trouble, you’re screwed. And it turns out that things go wrong right from the start: you don’t have to make trouble, it’s finding you already.

I didn’t like the way you had to fight with the Kett right at the start, but I can accept one unexplained bad guy. But then there were these machines you have to fight, and I started to have more questions: we’re trying to learn about a new galaxy, focus on learning! And there’s a friendlier species that you meet in the galaxy; mostly you’re on good terms with the Angara, but there’s a group of them that you need to fight as well. And then there are groups of criminals and other breakoff factions who came with you from the Milky Way: you get to kill your fellow humans too.

I could justify any of these individually, or maybe any of them other than the last one; but, except for the Kett, they all work squarely against the dramatic setup of the game. You’re a small, isolated group of settlers from the Milky Way, and are in an environment that’s already tearing you apart; given that, you need to understand your environment, you need to make allies, and you need to just stay alive! I’m certainly not going to claim that humans have historically always behaved peacefully when exploring new territories (quite the contrary), but this game isn’t setting up thoughtful historical analogies, either: to me, the thought process felt like “this is an action game, so we need to be able to shoot people, so shoot away”.

  • Too many fetch quests.

You show up on a new world, start at a big settlement, and everybody has something for you to do for them. And then you explore the world more, run into smaller settlements, and are given more tasks that actively work against the grand scale of the plot. (And then there are the cross-world tasks: find these minerals hidden in out-of-the-way places.)

Honestly, I’m actually surprised I didn’t mind this more: it turns out that, as long as I was given a reasonable emotional reason to go along with a quest, I was willing to do it. The only ones that turned me off were the ones that were telling you to do a certain number of a thing (find five crashed drones, or whatever); so I skipped those.

Which, you could say, is a strength of the game: in this as in many other areas, the game gives you a range of possibilities to explore, and it’s up to you as the player to decide what part of that range of possibilities you actually do want to explore. I think that’s mostly a cop-out, though: games as a whole give me a fine range of possibilities, so once I’ve picked a game to play, I want it to be the best of its kind of game that it can be, rather than being a half-assed mashup of lots of different options that it expects me to choose from.

  • The plot is either pretty good or pretty bad, depending.

It’s pretty good compared to the overall range of games I play; but this is a BioWare game, so a good plot is what I expect. And, compared to other BioWare games, this was definitely on the weak end: the overall story didn’t raise any interesting big questions (or, to the extent that it did raise such questions, it actively shied away from taking them seriously), the companions and loyalty quests on average don’t reach the quality that I expect, and the major plot missions felt by-the-numbers.

 

Having said all of that: I’m still happy to have played the game. But I’m happy in the sense that the worst BioWare game is still a pretty good game; and this is the worst BioWare game I’ve played. This plus Dragon Age: Inquisition make it clear that the studio is going in directions that I’m not interested in, that play against their prior strengths, and that the new directions aren’t executed well enough to draw me in; and Mass Effect: Andromeda is a worse Mass Effect game than Dragon Age: Inquisition is as a Dragon Age game.

So BioWare is squarely off of my “will buy without asking questions” list. (Even for their RPGs; and I almost certainly have no interest in whatever Anthem turns out to be.) And I’m starting to seriously wonder to what extent BioWare still exists: it’s been long enough since they were acquired by EA for their previous culture and knowledge to have been significantly diluted.

They had a glorious run, though…

text editors and markdown

June 17th, 2017

When I got my new Mac, Sublime Text started occasionally crashing on me. And, while I do like Sublime Text more than Emacs for non-programming typing, I wasn’t in love with Sublime Text, either: it still feels like a cross-platform editor that wasn’t focused on presenting a clean interface. Also, at about the same time, I was thinking about moving my blog post writing outside of the WordPress web interface: to something outside of a web browser with a nice clean interface. (Not that WordPress’s focus mode isn’t good for that latter criterion!)

So I poked around a bit, looking at macOS-native editors that were focused more on writing than on programming. I wasn’t sure I was going to use it exclusively on prose (I was thinking I might use it to maintain the various lists and what not that I have as part of my GTD setup), but blog post writing was the main initial use case.

 

The editors that I came across supported Markdown. Which makes sense: I was looking at plain text editors, but that doesn’t completely remove the desire for styling and links and what not, and Markdown is the consensus choice there. But it was a change of pace, since, for this blog, I’ve actually been writing the little bits of HTML in the posts (links, etc.) by hand instead of using WordPress’s rich editor; which is fine, it’s not particularly tedious.

But Markdown is fine too; and, actually, if my goal is to have a clean interface, then Markdown is better than HTML. The one thing that gave me pause there is that I use <cite> tags for book names and the like; the truth is, though, that while I was convinced by the arguments for semantic tags when I first saw them decades ago, in practice I’ve never wanted to style <cite> differently from <em> and I’ve never written code that goes through a document and pulls out the <cite> tags for bibliographic purposes or anything. So, ultimately, I don’t see HTML as being about semantics any more; I’ll live with putting underscores around book names. (Using asterisks for italics still feels weird to me, though: asterisks should be for bold!)
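To make the tradeoff concrete, here’s a hypothetical before/after (the book title and URL are made up for illustration, not taken from an actual post): the sort of hand-written HTML I’d been putting into posts, and the Markdown equivalent with underscores standing in for <cite>.

```markdown
<!-- Hand-written HTML, as in my old posts: -->
I've been rereading <cite>The Structure of Scientific Revolutions</cite>;
<a href="https://example.com/paradigms">this post</a> has more.

<!-- Markdown equivalent: underscores for italics, brackets for links: -->
I've been rereading _The Structure of Scientific Revolutions_;
[this post](https://example.com/paradigms) has more.
```

The two versions render identically; the Markdown one is just less cluttered to type and to read in the editor.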

 

The first editor I tried was Ulysses. It actually looked a little more ambitious than I necessarily wanted: it looks like it’s designed to let you write an entire book with it if you want. And I wasn’t sure if I wanted something with a multi-pane model, though given that you can easily hide the panes other than the one you’re writing in, that wasn’t really an active strike against it.

When I gave Ulysses a try, I enjoyed it: composing was pleasant, and exporting to HTML so I could post to my blog wasn’t too bad. The main downside was that I couldn’t type raw HTML; that was while I was still unsure what to do about <cite> tags, and I got over that. A little more problematic was that I’m used to having a paragraph with only &nbsp; in it to create an extra blank line, and I couldn’t figure out how to do that in Ulysses.

Still, seemed good enough; I figured I’d try other options, but if Ulysses was where I ended up, then great. Except that then I used it to edit my GTD reference file; and, when I looked at the file through another method, I found that it had moved all of my links that had just been pasted in raw to the end of the file (using one form of the Markdown link syntax), even in parts of the file that I hadn’t touched!
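For anyone unfamiliar with the two link syntaxes: Markdown supports both inline and reference-style links, and what Ulysses did was apparently normalize raw URLs to the latter. A hypothetical sketch of the rewrite (not my actual file) would look something like this:

```markdown
Before editing in Ulysses:
Project notes are at https://example.com/notes.

After Ulysses saves the file:
Project notes are at [https://example.com/notes][1].

[1]: https://example.com/notes
```

Reference-style links are a perfectly legitimate Markdown form; the problem was the editor silently applying that rewrite to parts of the file I hadn’t touched.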

And that really wasn’t cool – partly because I don’t like that sort of messing around behind my back, but also partly because, ultimately, I want a plain text editor rather than a rich text editor. And, philosophically, it seems like Ulysses is a rich text editor: it uses Markdown as a representation format, but it doesn’t want you to care about that representation. Which could actually be fine for blog posts, but does limit the contexts in which I’d be willing to use the editor.

 

Next on my list to try was Byword. It’s simpler than Ulysses: no three panes, no focus on projects, it has you editing one file at a time.

And, it turns out, it’s much happier than Ulysses to accept whatever you type. If you type HTML tags or entities, it’ll pass them through unscathed during HTML conversion; and if I open a file with a naked link in it, it leaves the link there unscathed instead of moving it around.

Byword claims to be able to export to WordPress installations; when I was first looking at that, that was a paid extension to the app, but they made it free a couple of days later. Which is good, because it doesn’t work when publishing to my blog; I’d actually e-mailed Byword support when it was a paid extension to confirm that it should work for self-hosted blogs, but it doesn’t work for me. No idea what’s going on there; but the amount of work that would be saved by that is trivial: it’s very easy to copy and paste the HTML.

 

So I stopped my search there, and I’ve written my last ten or so blog posts in Byword. And it’s been nice! Honestly, not that much nicer than just writing in the WordPress editor directly, but still: I do somewhat prefer typing in a separate app, in a window with basically my words and nothing else.

Arguably more importantly: from a philosophical point of view, I’ve now switched to Markdown as the way to go, instead of HTML or ad-hoc plain text. Slack and GitHub had been moving me that way anyway; it’s good to have that formalized.

fire emblem heroes

June 13th, 2017

The question that free-to-play games always raise is: why am I playing this game? And I don’t mean that in a dismissive way, as an implication that playing them is a waste of time: if you can come up with a good answer to that question, then great! But free-to-play games do try to nudge you to keep on playing for their own reasons, so you always have to do a sanity check as to your motives.

When I started playing Fire Emblem Heroes, I did have good reasons to play it. I like the core Fire Emblem gameplay; and, actually, when I stop playing games in the series, it’s usually because the levels are getting too intricate. So, with that lens, Fire Emblem Heroes’ four-on-four levels are an active virtue: they shrink down the scale, so levels never get out of hand. Instead, the game focuses on tactical details; I appreciated that focus, and I learned more the more I played.

The other aspect of the game is the collection aspect. Which wouldn’t have done anything for me six months earlier, but after playing Tokyo Mirage Sessions, I was more than happy to see some of my favorite characters. (Though also a little disconcerted to see the differences in their presentation between the two games!) And sure, it’s fun to hope for five-star characters, to level party members up, to try out temporary challenges.

But, at some point, I’d gotten enough five-star characters to fill out my team, I decided that it wasn’t going to be worth it to me to try to get a team that was better on whatever metric I was looking at, and I felt that I wasn’t learning enough from the levels. So I stopped playing; and that was the right choice. But playing the game daily for a month and a half was a fine choice, too.

the nintendo switch

June 10th, 2017

My normal policy for buying consoles is that I buy a new console when the game that I clearly want to play next is only on that console. Which means that I buy all of them eventually (well, all of them except for the portable Sony consoles) but that it sometimes takes a few years: I bought an Xbox 360 when Mass Effect came out, I bought a PS3 when Journey came out, and I still haven’t bought a PS4, though that will change very soon! (Right now, the three games that I’m most interested in playing next are all for the PS4.)

So if you’d asked me at the start of the year when I was going to buy a Switch, I wouldn’t have known: maybe the end of 2017, maybe in 2018, maybe even after that? As the release date approached, though, it became clear that the newest Zelda was well worth playing, so I bumped up that estimate; but Mass Effect Andromeda was coming out first and the Switch was supply constrained, so I was happy enough to wait on buying a Switch.

Once the Switch was released, though, it sounded like Zelda was better than just well worth playing, that it really might be something special. And talking about it with coworkers got me more excited; so, a week or two after launch, we found a deal where we could order a not-unreasonable bundle through GameStop, and three of us ended up with Switches (plus Zelda and Mario Kart).

I’ve only had the console for a couple of months, I’ve only played (parts of) two games on it, but: it’s been almost two decades since I’ve been as impressed by a console as I am by the Switch.

 

Most non-Nintendo consoles these days aren’t even trying to impress you in the same way the Switch does: they’re doing the exact same thing that the Nintendo 64 or the original PlayStation did, just with better graphics. (And with online capabilities, online multiplayer in particular, but that’s not something that any one console can point at, and it’s coincided with local multiplayer being taken less seriously.) There have been occasional attempts to provide differentiators, e.g. the Kinect or an attempt to be a home media hub, but they never amounted to anything, with the arguable exception of the DVD player / Blu-Ray player functionality in the PS2 / PS3.

Nintendo is different: the DS added a second screen, touch controls, and a microphone to a portable console; the Wii added motion controls, a direct pointing device, and a speaker in the controller; the 3DS added 3D visuals; the Wii U created a TV/tablet hybrid. And many of the ideas from those consoles were successful, but only partially: the DS’s second screen is probably most useful as a way to increase the screen real estate in a small form factor that you can put in your backpack without worrying about it getting hurt, its touch controls were good for the time but not something most games ended up using to best effect in practice, and its microphone never amounted to anything. Wii Sports was a revelation, but it turned out that there just weren’t that many games that used motion controls well, direct pointing also didn’t take off, and the controller speaker was a gimmick. The 3DS’s new addition had the least impact (even though the console was successful): people promptly turned it off, and Nintendo eventually released a version without the 3D support. And the Wii U was a flop from the beginning: Super Mario Maker is the only game I’ve heard of that really made convincing use of the tablet, and while I haven’t played it myself, I’m not even sure that that game requires the tablet / TV hybrid, as opposed to just a tablet. (I have heard of some people getting good use out of playing games in tablet mode rather than on the TV, though.)

 

Not the Switch, though: everything works. It’s a full-power (at least by Nintendo standards) console in a portable form-factor: so I can play one of the best and most beautiful games I’ve ever seen in bed or lying on the sofa away from the TV; if nobody is using the TV, I can play it there instead; I can play it sitting in a chair next to Liesl while she’s watching TV; and if the battery starts running low while I’m playing on the couch, I can switch to the TV and keep on playing. These are not theoretical examples: I have done all of these, and I split my playing time pretty evenly between handheld and TV modes.

Nintendo’s TV consoles have always been good at local multiplayer, and Miranda and I have had plenty of fun playing Mario Kart together at the TV. But I also throw it into my backpack every Friday and play Mario Kart at work; and the local multiplayer just works, with no configuration necessary. We have two or three consoles available; and if other people want to play, we just pull the controllers off the side so two people can use a single console, and the whole table ends up playing. It’s the most flexible way of assembling groups to play in the same room that I’ve ever seen; and, again, this isn’t a theoretical example, this is something I do every week.

 

When I watched the preview videos, I assumed that those scenarios were largely unrealistic; but, in fact, everything there not only works, it’s genuinely useful. Also, in the past I’ve been a little frustrated where I want access to both of Nintendo’s current consoles, but there weren’t quite enough games for one or the other for me to feel great about buying it; if it turns out that the Switch allows Nintendo to proceed with a single console line, that concern will vanish.

Just having all of Nintendo’s first party games would be a solid supply of games, and if it can get Nintendo’s first party games plus the third-party games that had been appearing on the DS series then it will be an excellent console purely from the game supply point of view. But that, combined with the console’s remarkable flexibility, makes the Switch really something special, at least if my first two months with it are any indication.

experimenting with glasses

May 29th, 2017

For a few years now, my optometrist has been nudging me to consider either reading glasses or progressive lenses. For my last prescription, he split the difference between reading and distance; that mostly worked fine, but Hamilton convinced us to get season tickets for an SF musical company, I was a cheapskate, our tickets were in the back of the theater, and: the stage was blurrier than I would have liked.

So, between that and Miranda’s good experiences with a local Warby Parker store, I decided to get glasses that matched my distance prescription this time: if that worked, great, if not, Warby Parker’s cheaper prices would allow me to experiment with other types of glasses as a supplement.

And it was refreshing wearing my new glasses and being able to see farther as I walked around town. It was, however, not so refreshing to have a hard time reading things that were close up: books, my phone, and even my computer screen at work. I managed, and actually I learned something about practical optics: if I move my glasses further down my nose, then I can focus significantly closer than I can otherwise. So those glasses are workable no matter the situation; but they have significant flaws.

 

I decided to try out progressive lenses next. I’d assumed they’d be cheaper at Warby Parker than at my optometrist’s (especially without the insurance discount); I still assume that this is true, but they’re plenty expensive at Warby Parker, about twice the price of regular prescription glasses. But, at any rate, I needed them (or at least I needed something different), so I bought a pair.

And they were better than the distance glasses! Or at least, better on average, but there were a few problems. One is that they don’t go at all well with a 27" monitor: it was impossible for me to have the whole monitor in focus. (It seems like the cutoff for my prescription was around 21 inches; I ended up moving my windows to a smaller portion of my monitor, and that worked okay.) And the other is that my right eye never felt completely right: no matter where I looked I didn’t see an area that I was completely comfortable with for close reading. (Not sure if that was a manufacturing defect or a prescription defect.) Don’t get me wrong, it was usable for reading, but still: a little off.

 

So I decided to get reading glasses as well. Warby Parker again; I figured that, this time, I’d use their website rather than their store: surely their website would be good, given how they position themselves in their advertising?

Not so much, it turns out: in fact, I’ve never seen a purchase flow that was off-putting in quite this way. I selected the frames that I wanted (and that part of the flow was fine); the next step seemed to be the checkout flow. So I went to check out, and saw an option for credit card or Apple Pay; that was nice, so I selected Apple Pay.

At which point I hit the first problem: they wanted me to put in my fingerprint right then. I was not about to do that given that I hadn’t entered enough information for them to be able to even show me an accurate price! So I switched from Apple Pay to credit card; from the look of the screen, I should have been able to input my prescription info before actually entering the credit card info, but they insisted on having me enter my credit card info before they’d let me enter my prescription.

I almost stopped right there: I have no idea why they’re requiring complete payment information before showing me a price and complete order details, but it’s ridiculous. I reluctantly continued, though, at which point I ran into the next road block: the prescription information.

They had my progressive lens prescription; I wanted to select reading glasses based off of that, but I didn’t see an option to do so. Which is fine, it’s a pretty niche case, but then they told me to put in a scan of my prescription. And that doesn’t work with my situation any better than using my prescription on file: again, no obvious way to specify the reading version of the prescription.

There were a couple of other options, e.g. have them call my optometrist, but nothing that actually helped: in particular no option to HAVE ME ENTER THE DAMN NUMBERS MYSELF. Like the payment situation, I have no idea why they’ve designed their checkout flow that way, but that’s where I bailed: I am not about to give them money if I don’t even know if I’m going to get glasses with the correct prescription. (Are glasses prescriptions legally restricted the same way drug prescriptions are? I would hope not, but if that’s the reason, then tell me!) So: two design choices that seem bizarre for a business that markets itself in internet-focused ways.

 

As anti-Warby-Parker as I was at that point, I still figured: they’re going to be significantly cheaper than getting them through my optometrist, and Miranda has had good experiences with them. So I, somewhat reluctantly, ordered my reading glasses through their physical store. And the glasses were totally fine! (In particular, no problems with the right eye, which, on the one hand, lends credence to the “manufacturing problem” theory for the progressive lens pair, but, on the other hand, gives evidence that they can manufacture standard prescriptions well?) I continued to wear my progressive lenses most of the time while keeping the reading glasses at work, and now I have no problem using the 27" monitor there.

Except that I didn’t always remember to switch back to the progressive lenses when I came home. And, the second time I did that, I realized: not only are the reading glasses fine for normal use (not wonderful for distance viewing, but acceptable even then), but they really are more relaxing on my eyes than the progressive lenses.

So, for the past couple of weeks, I’ve been using the reading glasses most of the time. I’m keeping the distance glasses in my backpack (while the progressive lenses are stashed in a desk at home, unused), and I’m trying to find excuses to wear the distance glasses every once in a while so my brain gets practice switching between the two prescriptions: I try to wear them when I’m driving a reasonable distance, when I’m doing Tai Chi, or when I’m going to a musical or a movie or something. And, so far so good.

 

I’m not entirely sure what I’ll do when 2018 rolls around and my insurance discount is available again. If you’d asked me a month ago, I’d say: I’ll get a set of progressive lenses from my optometrist, and then both eyes will work well. But, right now, I’m kind of against progressive lenses; maybe I’ll just leave things as is? Or maybe I’ll ask my optometrist to write me an intermediate prescription again, that’s usable for reading but doesn’t get blurry quite as quickly? (My vision doesn’t change much year-to-year these days, so I’ll still have my distance glasses available for situations where I want them.) Heck, maybe I’ll see if I can get old-school bifocals; I’m really not sure…

(And, no matter what: in the future I’ll get closer up theater tickets: a fairly crisp distant stage is better than a blurry distant stage, but it’s pretty small either way!)

portal and portal 2

May 25th, 2017

Earlier this year, I had a bit of time before the big Spring game releases came out, so I decided to replay Portal and to play Portal 2 for the first time.

I’d liked Portal the first time I played it; I really liked it this time, to the extent that I think of it as a sort of local maximum in the design space. It’s a puzzle game, with a puzzle idea that was new at the time and that is still interesting on replay. It spends a few short levels gently introducing you to the concept; and then has quite a few, still short, levels unpacking consequences of that concept. And then, once it’s given you that foundation, it switches over to a more narrative mode where the puzzles are presented in a less isolated environment; it ends in a short boss battle. And it does all of this in three hours; there have been times in my life when I would have wanted more, but now I very much appreciate the respect the game is showing for my time.

That’s the game from a mechanical lens, but there’s also the game’s narrative aspects to consider. It starts out with you waking up in a facility, not knowing who you are. And, as you play through the levels, you get more familiar with GLaDOS: she’s quite funny (as is the game as a whole!), but there are horror undertones that turn into an equal match for the humor; and then, as you go further, you realize that she’s really a psychologically damaged individual. Like the mechanics, the narrative comes together in the boss battle; and it gets capped off by one of my favorite songs in all of video games.

 

After finishing Portal I moved on to Portal 2. I was a little worried that, after just having played through a bunch of portal puzzles, I’d see repeats or excessive complication, but I enjoyed the new puzzles. And I wasn’t sure how they’d follow up the story, but there was a new character who was entertaining, GLaDOS was back, and I got to learn about the facility. So: all good.

Until it wasn’t. It was inevitable, I think, that the sequel would introduce new mechanics; the new mechanics were fine, but the game lost its simplicity with their addition. Or at least they were fine until I hit the white paint, which accepts portals: if you see a surface that you want to put a portal on, then you paint it and become able to create a portal there. But not all surfaces accept paint; and if a surface does accept paint, then you’ll be able to get paint there fairly easily, so painting the paintable surfaces is rarely a challenge. The upshot is that white paint adds no interesting complexity to the levels: instead, you analyze them just like a standard portal level, and then have this extra tedious step of splashing paint everywhere in order to figure out how good your analysis is.

The narrative side was a letdown as well, albeit for opposite reasons. Wheatley showed warning signs early on: he was mostly a buffoon, unlike anybody in the original. GLaDOS started to show a bite, but then rather than leaning in to the consequences of incinerating her in the first game, the game stepped back and decided not to treat her seriously, instead placing her consciousness into a potato. (Because science fair potato batteries are hilarious! Har har!) And, the more you learned about the facility’s history, the less you could treat it seriously either: the game just repeated the joke of the facility’s test subjects being cannon fodder over and over again.

To be sure, the original Portal also repeated that joke; but they did it without turning the game into a farce, because there were other strains giving texture to the narrative. And, while I don’t have anything against farces, and actually I can imagine enjoying Portal 2’s world quite a bit in other contexts, it wasn’t a well enough done farce to stand next to its predecessor.

So, soon after encountering the white paint, I gave up: I didn’t want to explore the new mechanics and actively disliked one of them, even the traditional portal puzzles were getting complex in ways that I didn’t particularly enjoy, and I didn’t trust the game’s narrative enough to make me want to push through to see the endgame’s narrative payoff. Not that I think Portal 2 was a bad game; I’m happy to have played through the first half of it. But I also haven’t had second thoughts about stopping when I did.

remastering rocksmith

May 22nd, 2017

Rocksmith put out two quite substantial free patches at the end of last year. I’ve been playing Rocksmith significantly more since the patches, and in different (and better!) ways.

As I said in that post, I’ve switched my practice from being primarily based on browsing (leafing through my favorites and playing whatever catches my eye, basically) to one that spends more time working on actually getting better. Not that I wasn’t getting better with my prior approach — I learned a huge amount from that playing — but my focus was more on enjoying the music and the experience than on constantly ratcheting up.

It’s been quite a while, but I actually do have quite a bit of prior experience on the receiving end of music lessons. And my basic approach towards practice back then was pretty different from the browsing approach I had been taking with Rocksmith: I’d always have a handful of songs I was working on, I’d play the hard bits over and over again until I got the notes into my fingers (trying out different fingerings, phrasing, etc.), and as I got past mechanical issues I’d think about my performance from the point of view of its musical qualities.

Which seemed a lot more productive than my browsing approach! Or, to put it in more theoretical terms: I should engage in Deliberate Practice. Don’t spend as much time in my comfort zone: spend more time beyond it. Not floundering, though: find some specific tasks that are slightly beyond what I can do right now but are in reach, work on those tasks until I can accomplish them, and then ratchet up by switching to new tasks that are at the edge of my new capabilities.

 

So I’ve switched the format of the bulk of my Rocksmith practice. Concretely, at any given time, I have four songs that I’m working on. (It used to be five songs, but that was a bit too much: I would end up coasting on a couple of them.) And, for each song in each practice session, I ask a question: how, specifically, am I trying to push myself today while practicing this song?

If I’m still learning a song, or if it’s a song that is out of my fingers’ comfort zone, that almost always translates into finding a section of the song that I can’t yet play comfortably, and spending time on that section in Riff Repeater.

If I’m past that and if the song isn’t near the limits of my abilities, I’ll work on memorizing the song. Score Attack turns out to be key here: I’ll alternate between Master difficulty, where I’m confronted with playing the entire song from memory, and Hard difficulty, where I can see all of the notes. (And, I confess, I do look at the leaderboards at times; though the leaderboards on Master difficulty are frequently completely empty!) And, of course, sometimes I’ll need to repeat a specific section to memorize it (e.g. if I’m trying to learn a solo); Riff Repeater is my friend there.

No matter what, as I get more comfortable with the songs, I try to improve the musicality of my playing. Are my attacks crisp? Am I letting strings ring after playing them when I shouldn’t? Am I getting the sound I want out of palm mutes, out of fret mutes? How do I want to articulate notes? How tight are my bends?

Eventually, I’ll feel like I’ve hit my ceiling with a given song; I’ll pick one of the hundreds of other songs available to me to replace it in the practice list.

 

And, it turns out: this works really well. I don’t know that I’m the best judge of the quality of my playing, and I’m certainly not going to fool anybody into thinking that I’m a professional guitarist. But I am putting in the time; what I want the game to do is support me in how I’m trying to improve, and it does an excellent job of that. There are specific modes that help me for specific purposes (I’d largely been ignoring Score Attack mode until now, but, once I turned down the sound effects, it actually does a great job in helping me memorize songs), and the game does a solid job of giving me feedback on how I’m performing given the limitations of being software.

To be sure, I don’t spend all of my time in focused practice: one of the reasons why I’m only focusing on four songs instead of five is that it gives me half an hour or so of unstructured time in my play sessions. Sometimes I’ll go through DLC I’ve purchased recently, sometimes I’ll go through favorites in Nonstop Play mode, sometimes I’ll go through my list of previous focus songs to make sure that I still remember them.

 

I could have done this before the Remastered patch, but the patch really does help. Lists are a simple thing, but it turns out to make a difference to be able to distinguish between songs that I’m focusing on (list 1), new DLC (list 2), songs that I’d previously focused on and want to return to occasionally to keep up my memorization (list 3), and songs that I’m playing with coworkers (list 4). (I forgot to mention that last category above: I’ve finally started playing guitar outside of the game on a semi-regular basis, since a few of us at work get together once a month to play.) And, of course, songs that I like (favorites): I’m taking a KonMari approach there, defining a favorite as a song that will bring me joy if it comes up randomly in Nonstop Play.

Riff Repeater has always been there, but now it reliably lets you focus on individual sections, and being able to set the acceleration parameters really helps: a 5% increment is much more useful to me than a 10% increment, and sometimes I even go to smaller increments. Also, I start at a number like 78% instead of 80%, because the jump to full speed always feels larger than other jumps of the same percentage.

And there was a second big patch as well: I don’t use its major feature (playing with an acoustic guitar), but, as part of its improvements to calibration, the sound balance on acoustic parts of songs is a lot better when plugged in, too. More than that, it just makes me happy that the developers are continuing to support the game (and supporting it through updates, not just DLC): I want to keep on playing it indefinitely, and to keep on getting new music indefinitely, so if I can keep on giving them money via DLC purchases and they can keep on supporting the game, that’s an exchange I’m very happy to make.

 

So, with the current iteration, I can structure my guitar playing in a way that is extremely rewarding for me, with the game actively helping in multiple ways. And, in fact, I could do a lot more: in the past, I’ve used Session Mode, Multiplayer, and Tone Designer, they’re all great, and I fully expect to spend more time with them in the future as my focus changes.

This doesn’t mean it’s perfect, just that it’s impressively close. So here’s my current wishlist. First, some straightforward potential improvements:

  • In Score Attack mode, don’t do a hard fail.

Stop counting my score after three failed sections, but let me keep on playing. Score Attack isn’t just useful for competition, it’s the best way to reliably see either all the notes in a song or none of them, both of which are useful for learning songs; and, when I’m using it for that purpose, being forced to stop halfway through is actively counterproductive.

  • In Score Attack Mode, default to the same parameters.

When I finish a song, have my prior difficulty selected, instead of falling back to Easy; and if I’ve selected a different part (Rhythm instead of Lead), leave me in that part instead of resetting the part.

  • Riff Repeater is a little buggy when practicing multiple sections.

Sometimes, when going through multiple sections at the same time in Riff Repeater, it doesn’t show me failed notes from my last play: I’m not entirely sure, but I think it’s showing the failed notes from two plays before? (It’s possible this bug isn’t specific to playing through multiple sections, but I’ve never seen it when only playing a single section.) Also, when I’m playing through the entire solo in Two Princes as a block in Riff Repeater at 100% difficulty, it occasionally decides to lower the difficulty of one of the sections (always the same one, and it’s not even the section that I’m worst at); no idea what’s going on there, but Riff Repeater should never lower the difficulty on me.

  • Riff Repeater and Master Mode.

If I’m in Riff Repeater, I’m by definition trying to learn a section, which almost always means that I don’t feel like I have it confidently memorized. So show me the notes! There is an option for that, of course, and I’d be fine having Master Mode turned off in Riff Repeater: the problem there is that the option affects normal Learn A Song gameplay, so I can’t just leave it turned off, I have to always remember to turn it back on.

  • Difficulty bugs when playing a new song.

Every once in a while, when playing a song for the first time, the difficulty of a section will crash down to zero: it’ll look like it’s set at the normal difficulty, but then when I hit the first note of the section, the difficulty bar will empty out and notes will stop showing up. Now that I realize this is a thing, I pause it right then, enter Riff Repeater, and reset the difficulty to where it should be, but it’s annoying. No idea what triggers this.

  • Load times are a little long.

They’re a lot better than before the Remastered patch, but it still takes a couple of minutes for all of my DLC to be available: surely it’s possible to cache this information? Or does Microsoft require hundreds of network calls to reauthorize all the DLC every time I play the game? (I would hope that’s not the case; and I’m fairly sure I can use DLC without a network connection…)

  • Master mode and varying parts.

This one is more subtle, and I’m not sure what the correct solution is, but: if a song has multiple sections that are largely similar but not identical, you qualify for Master Mode on them in lockstep. Which mostly makes sense: it is in fact the case that, if you’re capable of playing / memorizing one of those sections, then you’re capable of playing / memorizing all of them. But the subtle variations between sections mean that you probably haven’t actually memorized all of them: you might have memorized the first version or the most common version but not slight variants.

That would be okay, except that Master Mode then actively gets in the way of memorizing the variants. Say, for example, that there are three linked sections like this. I get the first one wrong, so I start seeing the notes. Then I get the second one right (because I can see the notes), and the third one right (because I can still barely see the notes). And then I go play the song again, and I’m back to the first section: Rocksmith says “you got it perfect the last two times, you clearly don’t need to see the notes”, so it again doesn’t show me the notes for the first section, with the result that I never actually see those notes! (At least without going into Riff Repeater or Score Attack Hard or something.)

This isn’t a theoretical example, exactly that happened to me with the harmonic sections in More Than a Feeling; and I’ve had related problems trying to memorize variants in Sweet Home Alabama or Smooth. And it hurts my playing of variants even if I’m not trying to memorize them: what frequently ends up happening is that I learn one of the variants and play it that way in all of the related sections, Rocksmith says “good enough” and continues not showing me the notes, I don’t get the benefit of seeing the variants, and may not in fact even realize that the variants exist! (At least in Learn a Song; this problem is one of the main reasons I’m using Score Attack more and more, because it’s not vulnerable to this problem.)

I’m honestly not sure what to do here. The best idea that I have so far is, in this situation, enter Master Mode in lockstep in all the sections, but once you’re in Master Mode, decouple the fade level of the sections for non-identical variants. That feels to me like it would be an improvement, but I’m not 100% sure what problems it would lead to.

 

There are some issues with Rocksmith that, I suspect, could best be helped by new hardware, though. I can’t say I understand how much of the note detection is done in hardware and how much is done in software, but the note detection does occasionally have problems, and I would be more than happy to buy a new cable if it would solve that.

Tuning in particular is an issue: it consistently wants me to tune my G string noticeably flat, and when I’m trying to tune the top E string, the displayed tuning is constantly oscillating by about 10 cents, enough so that it can take a little work for the game to register a tuning in the expected range long enough for it to accept my tuning as correct. Fortunately, once I’ve gotten it to accept a tuning, I can then tune my guitar correctly with a separate tuner, and the game doesn’t ding me for my notes, so apparently it’s significantly more generous when playing compared to when tuning, but still: it’s a pain to have to tune twice, and, when the game says I got a note wrong, I don’t like having a nagging wonder in the back of my head asking whether that was really my fault or just bad note detection.

Also, even when the note detection is correct, it’s slow to respond. On both bends and slides, in particular, you have to leave extra time at both the initial and final notes: otherwise the game will frequently claim that you missed the note. Which is bad for musical reasons, because I want how I play to be governed by what sounds good, not by note detection; and it’s bad for learning reasons because, honestly, I’m not as good as I would like to be at bending precisely, and it makes it that much harder for me to learn if the game’s detection also has problems, because it muddles the feedback loop.

Maybe I’m wrong about this being best helped via new hardware, though: like I said, I don’t understand exactly what the cable does. And actually, given the new mode added where you can play through a mic, clearly the game is capable of doing note detection in software. So maybe it’s not so much that the existing hardware isn’t good enough but rather that the existing software isn’t good enough, and that, potentially, a solution involving hardware assistance would be better?

 

The other hardware issue is audio latency: latency really is a problem with a standard setup, and it’s worse on the Xbox One than it was on the 360. I’ve got it solved in my local setup, but I’m in a situation where, whenever I recommend the game to friends (especially to experienced guitarists), I have to say “buy this game, but you probably also want this optical audio adapter from Monoprice plus a small amp, so you’ll be able to plug in headphones and get good audio latency”.

Obviously there’s only so much Rocksmith can do about this: if you go through the standard audio chain, there are multiple ways in which latency can get introduced that are completely beyond the game’s control. So, to get good audio, headphones are required. And it seems like the game could make that route much easier: I’m already plugging in a USB device, so can we use that USB connection to send audio out as well as in, adding a headphone jack to the cable? (Plus volume control, either on the cable or in game.) I don’t see why that wouldn’t work…

 

And then there’s one other big use case: getting from “I can basically play all the notes” to “I’m happy with the musicianship of this piece”. This is, ultimately, a human endeavour, so I’m not even entirely sure how much I want Rocksmith to tackle it directly; still, here are some of the concrete gaps that I see in that area.

One is simply being able to review your playing: not reviewing via metric-based measures like the number of wrong notes, but rather listening to your playing and thinking about how to do better. This is the one thing that the original Rocksmith did better than Rocksmith 2014: when you were done with a song, it would replay your performance for you, with the note track visible even if you were in Master Mode. And I remember frequently being surprised, when I did that, by how bad I sounded and how much room for improvement it revealed; also, in Master Mode, being able to see the correct notes after I’d messed up was very useful.

What’s going on there is that, when playing a piece, a part of your brain is always going to be focused on the mechanics of playing. When you’re still learning the piece, or if it’s a piece at the limits of your ability, that’s going to consume the vast majority of your concentration; as you master the mechanics of the piece, your ability to step away from the mechanics and try to process the performance as an outsider improves, but even so, it’s extremely useful to be able to mentally switch fully into a critique mode instead of a performance mode.

Another issue that I have is that, when the game reports that I’ve done something wrong, I don’t always know what I’ve done wrong, or indeed whether I have done something wrong at all. So, if it’s not obvious what I’ve done wrong, I go through a checklist: if it’s a bend, did I not end the bend in the right place? Did I not start the bend from the right place? Or did I actually bend pretty much correctly, just not waiting long enough at the start or end of the bend for the game to detect it? If it’s a slide, I have the same checklist as for bends. If it’s a chord, did I not strum all the way through? (In particular, for a three-string power chord, did I strum all three strings, or only two of them?) If it’s a barre chord, did I let all the strings ring, or did I accidentally mute one of them by not pressing down hard enough? Am I so tense that I’m pressing down hard enough on strings to make them go sharp? If I’m getting a bunch of unexplained misses, are my strings getting old and I should put on new ones?

The vast majority of the time, I can figure out the problem by going through this (heck, the vast majority of the time, it’s obvious what I’ve done wrong and I don’t need to go through this); but it’s taken me years to build up the checklist, and there are still situations where, ultimately, I decide that the game is just giving me a false miss. (I have no idea why it frequently thinks I’m playing the chord sections incorrectly on the lead for Planetary (Go!), but it does.) False misses aside, though, it might be nice if the game could tell me what I did wrong? Maybe not, though: in practice, it might be annoying / unnecessary so much of the time as to be a bad idea.

And then there’s the flip side: situations where the game accepts what I’ve done, but actually my performance isn’t great, for relatively concrete reasons. In general, I think the game is right to accept those situations: it would drive me crazy if the game tried to figure out if I’d muted sufficiently or if I’d pulled off a pinch harmonic or what. But there are some situations where I could use a bit more help.

The most concrete of those situations is around rhythm: the game’s notation is designed in a way where other considerations (getting the correct notes, in particular!) are primary but where the depiction of rhythm is relatively imprecise. (Especially compared to, say, piano or violin sheet music.) Also, the game is quite forgiving about rhythmic imperfections in your playing. Both of these are the right choice for the game, but the fact remains: every once in a while a song has a rhythmically intricate bit where I wish that I could just see it written out like in sheet music and stare at a couple of measures, tapping out the beat with my feet and slowly going through the measures with my fingers until I have the rhythm internalized.

 

Just to be clear: none of the flaws that I’ve listed here are in any way significant. Rocksmith 2014 is, by far, the best electronic tool for learning (not just learning music but learning period) that I have ever seen, so these suggestions are more along the lines of taking it from a 95% solution to a 98% solution. If you’re at all interested in learning guitar, or even if you’re just at all interested in thinking about how to use software to help learning, then go out and buy a copy. (And, uh, maybe get an optical audio converter while you’re at it if you’re playing on a console.) I’ve been playing it for years and I fully expect to be playing it for years more.

batch method objects and reducing duplication

May 14th, 2017

I’ve been falling behind in blogging here, but I did write up a note last week on the Sumo Logic blog about something I recently ran into while programming, and Sumo has kindly allowed me to publish a copy here as well.

 

When Sumo Logic receives metrics data, we put those metrics datapoints into a Kafka queue for processing. To help us distribute the load, that Kafka queue is broken up into multiple Kafka Topic Partitions; we therefore have to decide which partition is appropriate for a given metrics datapoint. Our logic for doing that has evolved over the last year in a way that spread the decision logic out over a few different classes; I thought it was time to put it all in one place.

My initial version had an interface like this:

def partitionFor(metricDefinition: MetricDefinition): TopicPartition

As I started filling out the implementation, though, I began to feel a little bit uncomfortable. The first twinge was when calculating which branch to go down in one of the methods: normally, when writing code, I try to focus on clarity, but when you’re working at the volumes of data that Sumo Logic has to process, you have to keep efficiency in mind when writing code that is evaluated on every single data point. And I couldn’t convince myself that one particular calculation was quite fast enough for me to want to perform it on every data point, given that the inputs for that calculation didn’t actually depend on the specific data point.
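The shape of that concern can be sketched abstractly. This is a hypothetical toy example, not Sumo Logic’s actual code: the point is just that a per-item function can silently recompute a value whose inputs don’t depend on the item, and hoisting it to the batch level pays that cost once. The counters exist only to make the number of evaluations visible:

```scala
// Hypothetical sketch: a per-item function that recomputes a
// batch-invariant value on every data point.
object PerItemVersion {
  var evaluations = 0

  // Stand-in for the expensive branch calculation.
  def expensiveInvariant(): Boolean = { evaluations += 1; true }

  def process(item: Int): Int =
    if (expensiveInvariant()) item * 2 else item // re-evaluated per item

  def processBatch(items: Seq[Int]): Seq[Int] = items.map(process)
}

object HoistedVersion {
  var evaluations = 0
  def expensiveInvariant(): Boolean = { evaluations += 1; true }

  // The invariant is computed once per batch and threaded through.
  def processBatch(items: Seq[Int]): Seq[Int] = {
    val invariant = expensiveInvariant()
    items.map(item => if (invariant) item * 2 else item)
  }
}
```

For a batch of three items, the first version evaluates the invariant three times and the second evaluates it once; at metrics-pipeline volumes, that factor is the whole ballgame.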

 

So I switched over to a batch interface, pulling that potentially expensive branch calculation out to the batch level:

class KafkaPartitionSelector {
  def partitionForBatch(metricDefinitions: Seq[MetricDefinition]):
      Seq[TopicPartition] = {
    val perMetric = calculateWhetherToPartitionPerMetric()
    metricDefinitions.map {
      metric => partitionFor(metric, perMetric)
    }
  }

  private def partitionFor(metricDefinition: MetricDefinition,
                           perMetric: Boolean): TopicPartition = {
    if (perMetric) {
      ...
    } else {
      ...
    }
  }
}

That reduced the calculation in question from once per data point to once per batch, getting me past that first problem. But then I ran into a second such calculation that I needed, and a little after that I saw a call that could potentially translate into a network call; I didn’t want to do either of those on every data point, either! (The results of the network call are cached most of the time, but still.) I thought about adding them as arguments to partitionFor() and to methods that partitionFor() calls, but passing around three separate arguments would make the code pretty messy.
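To see why that gets messy, here’s a hypothetical sketch of what threading all three batch-level values through the call chain looks like. The names (MetricDefinition, TopicPartition, the lookups, the hashing scheme) are stand-ins of my own, not the real implementation:

```scala
// Hypothetical sketch of the rejected approach: every batch-level
// value becomes an extra parameter on every helper method.
case class MetricDefinition(name: String)
case class TopicPartition(id: Int)

class KafkaPartitionSelector {
  def partitionForBatch(metricDefinitions: Seq[MetricDefinition]):
      Seq[TopicPartition] = {
    val perMetric = calculateWhetherToPartitionPerMetric()
    val customerInfo = lookUpCustomerInfo() // potential network call
    val partitionCount = calculatePartitionCount()
    metricDefinitions.map { metric =>
      // Every call site has to pass all three values along...
      partitionFor(metric, perMetric, customerInfo, partitionCount)
    }
  }

  // ...and every helper signature grows to match.
  private def partitionFor(metric: MetricDefinition,
                           perMetric: Boolean,
                           customerInfo: String,
                           partitionCount: Int): TopicPartition =
    if (perMetric) partitionPerMetric(metric, customerInfo, partitionCount)
    else TopicPartition(0)

  private def partitionPerMetric(metric: MetricDefinition,
                                 customerInfo: String,
                                 partitionCount: Int): TopicPartition =
    TopicPartition(math.abs((customerInfo + metric.name).hashCode) % partitionCount)

  // Placeholder implementations so the sketch is self-contained.
  private def calculateWhetherToPartitionPerMetric(): Boolean = true
  private def lookUpCustomerInfo(): String = "customer-42"
  private def calculatePartitionCount(): Int = 16
}
```

It works, but the same three parameters ride along on every signature; that repetition is the smell the next refactoring removes.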

 

To solve this, I reached a little further into my bag of tricks: this calls for a Method Object. Method Object is a design pattern that you can use when you have a method that calls a bunch of other methods and needs to pass the same values over and over down the method chain: instead of passing the values as arguments, you create a separate object whose member variables are the values that are needed in lots of places and whose methods are the original methods you want. That way, you can break your implementation up into methods with small, clean signatures, because the values that are needed everywhere are accessed transparently as member variables.

In this specific instance, the object I extracted had a slightly different flavor, so I’ll call it a “Batch Method Object”: if you’re performing a calculation over a batch, if every evaluation needs the same data, and if evaluating that data is expensive, then create an object whose member variables are the data that’s shared across the whole batch. With that, the implementation became:

class KafkaPartitionSelector {
  def partitionForBatch(metricDefinitions: Seq[MetricDefinition]):
      Seq[TopicPartition] = {
    val batchPartitionSelector = new BatchPartitionSelector
    metricDefinitions.map(batchPartitionSelector.partitionFor)
  }

  private class BatchPartitionSelector {
    private val perMetric = calculateWhetherToPartitionPerMetric()
    private val nextExpensiveCalculation = ...
    ...

    def partitionFor(metricDefinition: MetricDefinition):
        TopicPartition = {
      if (perMetric) {
        ...
      } else {
        ...
      }
    }

    ...
  }
}

One question that came up while doing this transformation was whether every single member variable in BatchPartitionSelector was going to be needed in every batch, no matter which path I went down. (Which was a potential concern, because they would all be initialized at BatchPartitionSelector creation time, every time this code processes a batch.) I looked at the paths and checked that most variables were used no matter the path, but there was one that only mattered in some of the paths. This gave me a tradeoff: should I wastefully evaluate all of them anyways, or should I mark that last one as lazy? I decided to go the route of evaluating all of them, because lazy variables are a little conceptually messy and they introduce locking behind the scenes which has its own efficiency cost: those downsides seemed to me to outweigh the costs of doing the evaluation in question once per batch. If the potentially-unneeded evaluation had been more expensive (e.g. if it had involved a network call), however, then I would have made it lazy instead.
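For readers less familiar with Scala’s lazy vals, here’s a minimal sketch of the tradeoff (the class and counters are hypothetical, purely to make the evaluation behavior visible): a lazy val is only evaluated if some code path actually touches it, and is cached after the first access, but the JVM guards that first evaluation with synchronization:

```scala
// Minimal sketch of eager vs lazy member evaluation in Scala.
// The counters just make the number of evaluations observable.
class BatchState(needsRarelyUsed: Boolean) {
  var eagerEvaluations = 0
  var lazyEvaluations = 0

  // Evaluated unconditionally at construction time.
  val alwaysNeeded: Int = { eagerEvaluations += 1; 42 }

  // Evaluated on first access, then cached; never evaluated if unused.
  lazy val rarelyNeeded: Int = { lazyEvaluations += 1; 99 }

  def result: Int =
    if (needsRarelyUsed) alwaysNeeded + rarelyNeeded else alwaysNeeded
}
```

A batch that never goes down the rare path never pays for rarelyNeeded at all; a batch that does pays exactly once. Whether that’s worth the conceptual and locking overhead depends on how expensive the evaluation is, which is exactly the judgment call described above.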

 

The moral is: keep Method Object (and this Batch Method Object variant) in mind: it’s pretty rare that you need it, but in the right circumstances, it really can make your code a lot cleaner.

Or, alternatively: don’t keep it in mind. Because you can actually deduce Method Object from more basic, more fundamental OO principles. Let’s do a thought experiment where I’ve gone down the route of performing shared calculations once at the batch level and then passing them down through various methods in the implementation: what would that look like? The code would have a bunch of methods that share the same three or four parameters (and there would, of course, be additional parameters specific to the individual methods). But whenever you see the same few pieces of data referenced or passed around together, that’s a smell that suggests that you want to introduce an object that has those pieces of data as member variables.

If we follow that route, we’d apply Introduce Parameter Object to create a new class that you pass around, called something like BatchParameters. That helps, because instead of passing the same three arguments everywhere, we’re only passing one argument everywhere. (Incidentally, if you’re looking for rules of thumb: in really well factored code, methods generally only take at most two arguments. It’s not a universal rule, but if you find yourself writing methods with lots of arguments, ask yourself what you could do to shrink the argument lists.) But then that raises another smell: we’re passing the same argument everywhere! And when you have a bunch of methods called in close proximity that all take exactly the same object as one of their parameters (not just an object of the same type, but literally the same object), frequently that’s a sign that the methods in question should actually be methods on the object that’s a parameter. (Another way to think of this: you should still be passing around that same object as a parameter, but the parameter should be called this and should be hidden from you by the compiler!)
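The two refactoring steps can be sketched side by side. Again, these are hypothetical stand-in names and a toy partitioning rule, not the production code; the point is only the structural transformation:

```scala
// Hypothetical sketch of the two-step refactoring.
case class MetricDefinition(name: String)
case class TopicPartition(id: Int)

// Step 1: Introduce Parameter Object — one argument instead of three.
case class BatchParameters(perMetric: Boolean,
                           customerInfo: String,
                           partitionCount: Int)

object Step1 {
  def partitionFor(metric: MetricDefinition,
                   params: BatchParameters): TopicPartition =
    if (params.perMetric)
      TopicPartition(
        math.abs((params.customerInfo + metric.name).hashCode) % params.partitionCount)
    else TopicPartition(0)
}

// Step 2: Move Method — the same logic becomes a method on the
// parameter object, which is now the Batch Method Object itself.
case class BatchPartitionSelector(perMetric: Boolean,
                                  customerInfo: String,
                                  partitionCount: Int) {
  def partitionFor(metric: MetricDefinition): TopicPartition =
    if (perMetric)
      TopicPartition(math.abs((customerInfo + metric.name).hashCode) % partitionCount)
    else TopicPartition(0)
}
```

The two versions compute identical results; the only difference is that in step 2 the shared values are reached as member variables, so the method signature shrinks to the one parameter that actually varies.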

And if you do that (I guess Move Method is the relevant term here?), moving the methods in question to BatchParameters, then BatchParameters becomes exactly the BatchPartitionSelector class from my example.

 

So yeah, Method Object is great. But more fundamental principles like “group data used together into an object” and “turn repeated function calls with a shared parameter into methods on that shared parameter” are even better.

And what’s even better than that is to remember Kent Beck’s four rules of simple design: those latter two principles are both themselves instances of Beck’s “No Duplication” rule. You just have to train your eyes to see duplication in its many forms.

where are the protests?

May 9th, 2017

The day after Trump’s inauguration, a half million women descended upon Washington, D.C. to protest, with millions more participating in satellite protests all over the country, even across the world. It was inspiring, and it got attention.

And then the Muslim Ban came, and that inspiration led me and many other people across the country to show up at airports to protest: we won’t sit idly for this. Again, it was inspiring, it got attention, and it was even supported by judicial victories.

At this point, I was assuming that I’d probably be going to a protest once every few weeks for the indefinite future. But, instead: a week or two later, ICE set up checkpoints, went to immigrants’ houses, went to immigrants’ schools, tore apart families, and mass protests were conspicuously absent. Were we only protesting ICE’s actions when they happened in the wrong place or against the wrong people? Are we fine with ICE if it’s rounding up Mexicans who have lived here for years without visas instead of Muslims flying in on visas?

The courts did a good job of standing up to the Muslim Ban, even its revised versions. And we celebrated on Twitter, but the celebration felt off to me. Yes, I’m glad the courts are doing the right thing in this instance; but counting on the courts to continue to do that seems not just complacent but foolhardy. And, more importantly: the problem isn’t just that the Muslim Ban is illegal, it’s that it’s immoral; counting on the courts to stand up for morality isn’t a great strategy, either. Instead, we need people standing up saying that 1) this isn’t who we want to be as a country, and 2) if you’re a politician, we are watching your actions very closely.

 

And that lack of protests has continued. There was the March for Science, which was pretty big: but months passed between when it was announced and when it occurred, and the march felt strangely abstract to me, nothing like the Women’s March. (Though I didn’t participate; quite possibly it was different for people who were there.)

Not that active opposition went away: when Trump was pushing his cabinet and Supreme Court picks through, I spent a lot of time trying to get through to one of my Senators’ offices, and I wasn’t the only one. That was something, but even that has gone away for me personally: my Senators and Congresswoman are all Democrats, and even Feinstein seems to have mostly realized that there’s no particular benefit for her to find common ground with Trump. I’d be on the phone or going to town halls if I had a Republican representative; but I don’t.

 

As I write this, the House passed a horrific health care bill last week, and Trump fired Comey today. I would have gone to a health care protest last weekend; if there’s a protest in support of a Russia investigation this weekend, I’ll be there.

And I hope there is! But I don’t expect there to be one. Or, at least, not the same sort of big, synchronized ones. After the earlier protests, I signed up for some mailing lists about local protest actions; I actually could go to multiple protests a week about all sorts of different things, but constant small uncoordinated protests feel like a way to burn out without any effect: nobody is going to care if 20 people are standing in front of one government office. (At least if you want to have a national impact: if you want to affect the behavior of your local representative, I imagine small, focused protests can be effective.)

 

Of course, it’s not like rabble rousing and coordination magically happens: somebody has to do it, and I’m not standing up and volunteering to do it myself. But I sure wish I saw more of it; I’m getting tired of this feeling that things are going horribly wrong, could get a lot worse, that there are a lot of people who agree with me, and that we have potential energy that we’re completely wasting.

apple music modification times

March 18th, 2017

A few months back, I noticed that my desktop machine was using its drive a lot. (The machine, sadly, still has a magnetic disk; I’m just waiting for new iMacs to be released to replace it.) Poking around, I found that the culprit was the backups (Time Machine and Backblaze, which I highly recommend). So something had caused a lot more files on my machine to get modified on a regular basis; I’d signed up for Apple Music recently, so I was afraid it was doing something, and, sure enough, I saw music file names show up in the Backblaze upload list.

I grabbed one of those files from an earlier backup, in order to compare it with the current version. And, of course, the obvious way to compare two binary files is: open them up in Emacs. Specifically, open them up in two buffers, then do compare-windows.

It turned out that the two files differed in three locations; all those locations were in the first few hundred bytes. So that’s good: at least Apple Music hadn’t replaced my file from some version in their library that they’d decided matched my file, they were just changing metadata. (At least I hope it hadn’t: I don’t have any reason to believe that the file I downloaded was from before I turned on Apple Music, and, in fact, as it turns out below: I have an active reason to believe that I didn’t grab the original.)

 

That raises the question, though: what’s changing in the metadata? I wanted to understand the bytes a little better; so I put both buffers into hexl-mode. Here’s what it looked like:

The first difference is highlighted; one copy has the bytes f2ae01 while the other copy has b212a2. And, actually, all three difference locations showed the same pair of byte values: the same bytes in the old version, and the same bytes in the new version.

The other thing that you can see (either in the hexl-mode version or the original version) is that there’s actually some ASCII around there; in particular, before the modified bits, you’ll see the strings mvhd, tkhd, and mdhd. So there are four-character tags in this metadata; if I can figure out what those tags are, maybe I can figure out the meaning of the bytes that changed.

 

After some poking around, I found a QuickTime File Format Specification. Here’s what it says about mdhd:

The bytes after mvhd are 0000 0000 bdfc 5ded d4f2 ae01, with the last three bytes being the ones that changed. Comparing that with the layout diagram, we see 00 is a version, 000000 is flags, bdfc5ded is a creation time, and d4f2ae01 is a modification time.
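That decoding is easy enough to sketch in code; here’s a minimal example (the header bytes are the ones quoted above, and the field layout is the version / flags / creation time / modification time sequence from the spec diagram):

```python
import struct

# The twelve bytes that follow the "mvhd" tag in my file, as quoted above.
header = bytes.fromhex("00000000bdfc5dedd4f2ae01")

version = header[0]   # one byte of version
flags = header[1:4]   # three bytes of flags
# Then two big-endian 32-bit integers: creation time, modification time.
creation_time, modification_time = struct.unpack(">II", header[4:])

print(f"{creation_time:08x}")      # bdfc5ded
print(f"{modification_time:08x}")  # d4f2ae01
```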

So the modification time seems like it’s changed. (And, looking up the other two tags, the bytes changed there were also modification times.) As a sanity check, let’s try to decode it. At first, I assumed it was a unix timestamp, but d4f2ae01 is 3572674049 in base 10, which doesn’t look like a unix timestamp to me. (It turns out that it would be a date in 2083.)

Looking further in the documentation, it says that the modification time is “A 32-bit integer that specifies the calendar date and time (in seconds since midnight, January 1, 1904) when the movie atom was changed. It is strongly recommended that this value should be specified using coordinated universal time (UTC).” Googling a bit, I found a Mac HFS+ Timestamp Converter which seemed to expect those; 3572674049 translates to Sat, 18 Mar 2017 09:27:29 GMT, while the time in the other file, 0xd4b212a2 = 3568439970 translates to Sat, 28 Jan 2017 09:19:30 GMT.
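You don’t actually need a web tool for that conversion: the gap between the 1904 epoch and the unix epoch is a fixed constant (66 years, 17 of them leap years, which works out to 2082844800 seconds), so a quick sketch is:

```python
from datetime import datetime, timezone

# Seconds between the QuickTime / HFS+ epoch (midnight, January 1, 1904)
# and the unix epoch (January 1, 1970): 66 years, 17 of them leap years.
HFS_TO_UNIX_OFFSET = 2082844800

def hfs_timestamp_to_utc(seconds):
    """Convert a 32-bit seconds-since-1904 timestamp to a UTC datetime."""
    return datetime.fromtimestamp(seconds - HFS_TO_UNIX_OFFSET, tz=timezone.utc)

print(hfs_timestamp_to_utc(0xd4f2ae01))  # 2017-03-18 09:27:29+00:00
print(hfs_timestamp_to_utc(0xd4b212a2))  # 2017-01-28 09:19:30+00:00
```

Which reproduces the two dates from the converter exactly.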

And that all makes sense: those timestamps must represent when iTunes was last scanning the file for some Apple Music-related reason.

 

Stepping back, though: iTunes / Apple Music is modifying the file to update a modification time; and that results in about a gig and a half of backups happening on my computer every day. And, when I write it that way, that’s ridiculous: maybe don’t modify the file, and then you won’t have to update the modification time?

Of course, Apple Music must be using the modification time for some other reason, some sort of scan time, it’s not a literal modification time. But it would be far better if that scan data were stored in a separate file, instead of modifying the music file itself: on a conceptual level, the music hasn’t changed, it’s just bookkeeping information that has changed, while on a pragmatic level, it causes a ton of extra backups to be generated. I’m mostly noticing it with Backblaze, but the consequences for Time Machine are equally bad: it means that my backup disk gets full with multiple versions of the same music file, so my backup history gets cut off more quickly than it should be.

airpods

March 11th, 2017

I’m not used to people asking me about stuff I’m carrying around, but several of my coworkers have asked me about my AirPods when they’ve seen me wearing them around the office; apparently people are curious about them. So, a report:

They replaced my EarPods, which have been totally fine for me: I’m not particularly an audiophile. (And I’d historically mostly used them for podcast listening anyways, though that’s changed somewhat over the last few months.) So read everything here with the lens of “replacement for Apple’s free pack-in earphones” in mind.

And they’re quite good in that role. They’re the first set of wireless earphones that I’ve owned; and, it turns out, I prefer wireless earphones to wired ones. No more wires to untangle when I’m pulling them out, no more carefully coiling the wires when I put them away in a (futile) attempt to avoid tangling / fraying, no more having the cord occasionally being just a bit too short when I open my jacket, no more occasionally sending my phone or headphones flying when something hits against the cord.

Also, in terms of more subtle benefits of wireless headphones: since I started listening to music at work more, I’d been a little bothered by the fact that it means that my phone is on my desk, or maybe in my pocket, which makes it a bit too easy to check Twitter when I should be working. Whereas now I can move my phone to my backpack while still being able to listen to music.

When reading about Bluetooth earphones in the past, I’d heard that latency is a problem; it is not a problem for me at all with AirPods. I haven’t tried playing video games with timing-specific audio cues, so I can’t swear that there aren’t situations where latency is an issue, but as far as I’m concerned, it’s not worth worrying about at all.

 

So yeah, wireless earphones are good; on to more AirPod-specific stuff. I really like the way playback pauses when you remove one of the AirPods; I use that all the time. E.g. when I stop by Pamplemousse every morning on my way into work, I pull out one of them while waiting to order, and then put it back in when leaving the store; when the train approaches and it’s loud and I’m going to be reading a book soon anyways, I’ll pull out both of them; if I’m walking towards somebody in the hallway and don’t want to appear actively antisocial, I’ll pull out the AirPod on whichever side they’re approaching me from. The unpause functionality when putting it back in rarely works for me, but I don’t think that’s an issue with the AirPods: unpausing from anywhere other than the app that’s playing sound (from a wired earphone button, from the lock screen, from the control center) hasn’t worked for me most of the time for the last couple of iOS versions. (Not sure if that’s a general iOS bug or a Castro 1 bug or what.)

It’s a small thing, but, when doing the above, the AirPods fit nicely into the tiny pocket in my jeans. (The watch pocket inside the right pocket.) But, of course, when putting them away for longer periods, I put them in the case, and the case is great. I had a pouch for wired earbuds before that I really liked, but I didn’t have any way to replace it, so I’d honestly been a little worried about not knowing what I’d do if I lost it or if it fell apart. But the AirPods case is a very good size to fit into my pocket, which solves that problem. And the case’s battery means that I never have to worry about running out of power: as long as I recharge the case once every three or four days and as long as I don’t listen to music for five hours at a stretch, then I always have power, and neither of those scenarios is a problem at all. (I already had a lightning cable plugged into my work computer anyways, for the times when I needed to do mid-day phone recharges.)

I gather that most wireless earphones actually have a wire between the two ears; not having a wire helps in a few different ways. E.g. it makes the pause gesture a little easier; it makes it easier to temporarily put them away (or, for that matter, put them away long-term, not sure what the case would be like if it had a wire); it makes it possible to listen with one ear while having the other ear open for external noises. And I’ve never had a sync problem between the two ears.

When reading reviews, I’d somehow gotten the impression that the AirPods magically got audio from whichever Apple device was nearby and playing audio, but that’s not the case: you have to manually select them when changing devices. (Unless you have a Watch, but I don’t.) You can skip the Bluetooth registration step on the different devices, but, while that’s welcome, it’s also only a one-time savings per device. The only weird behavior I’ve had is that, on my home laptop, they don’t reliably show up on the Bluetooth menu; that seems like potentially a pretty major problem, but I feel like I might be hitting a corner case that doesn’t affect most people? (Maybe it has something to do with multiple people being logged into the laptop at the same time? Though, right now they are showing up, even though Miranda is logged in as well.) I hope Apple fixes it soon, at any rate, but fortunately there’s still a wired headphone port on their latest laptops…

I was expecting to miss having builtin volume buttons a little bit, but, in practice, that’s totally fine: I don’t change volume all that often, and I can change it by feel with the buttons on my phone while leaving my phone in my pocket. I actually miss the hardware pause button more, because of the unpause bug mentioned above, but that’s not a big deal at all. (The tap-to-Siri functionality works, but I rarely use it.)

At first, the price seemed like it was more than I wanted to spend, though not out-of-line for even cheap wireless earphones. The thing is, though, the wires in my wired ones fray every four months or so anyways, so I was always replacing those. (Not an Apple thing, it happened no matter what brand of cheap earbuds I got.) So, as long as these last a year and a half, I’ll break even on price anyways. I was worried about losing them, but at this point I’m not particularly worried about that: they never fall out of my ear, and if I take them out temporarily, the small pocket is a natural place for them. I’m sure I’ll lose one eventually (though Apple is willing to sell individual replacements), or maybe I’ll accidentally put them through the washer or something, but right now, that scenario of them lasting over a year and a half seems entirely plausible.

 

I don’t want to oversell them: ultimately, they’re non-audiophile wireless earphones, so if you’re the sort of person who wasn’t using Apple’s default earphones before, you might not like these either. But, accepting that constraint (and accepting that you’re going to have to spend $160 for them and, as I write this, wait a month and a half for them to be shipped): they really are well done.

tokyo mirage sessions

March 7th, 2017

Tokyo Mirage Sessions ♯FE is by the Persona team; it’s effectively a lighter-weight Persona game (plus a very light dusting of Fire Emblem), with an idol plot. And it is amazing.

I’m trying to remember what in the game first made me sit up and take notice. I have to think that it involved Kiria: maybe when she shows up and saves you at the end of the introductory dungeon, maybe when you see her first concert performance? The game characterizes her (initially) as the embodiment of cool; I’m not going to argue with that, but the message that I took from that was that this game takes a lot of care about style and presentation, with excellent results.

People who know me in person might be a little surprised to hear me say that I care about style and presentation: I am unstylish in a completely stereotypical white male programmer way, never deviating from my uniform of slightly shabby blue jeans and a solid-color shirt, with the extent of my color coordination being whether I wear a pink hairband (if my shirt is black/grey) or a black hairband (otherwise). And I’m sure that there were times in my life when I wouldn’t particularly have cared about the style that games present, when I might have even been actively disdainful towards it.

I’ve changed, though. My current attitude: style is a form of expressiveness, a form of art, and, as such, is a wonderful thing. It’s not an art form that I actively explore in my own personal life, but that doesn’t mean that I don’t appreciate it: I don’t actively explore drawing or painting in my own personal life, but that doesn’t mean that I don’t enjoy going to art museums. It’s an art form / area of expression that I’m relatively ignorant about (which, actually, isn’t too different from painting, but at least with painting I have been to a decent number of art museums), but I’m at least aware enough to sit up and take notice when I see the way Tokyo Mirage Sessions uses music, dress, environmental design, Tokyo itself, and cel-shaded graphics. (And, incidentally: why isn’t cel-shading a lot more common than it is? I honestly don’t understand why some form of cel-shading isn’t the default for games that actively care about appearance.)

 

So yes: the most stylish game I’ve played in, uh, potentially ever? (Hmm, I guess Jet Set Radio and Katamari Damacy give it a run for its money, but still: there aren’t many games I’d compare to those two.) I would say that that level of style isn’t too surprising, given the game’s idol theme; but, comparing it to Love Live, another idol game, it’s like night and day: so much better done in Tokyo Mirage Sessions. But style isn’t the only thing going on here: the game is also grounded by an underpinning of joy.

The game thematizes that joy as cuteness (e.g. in Kiria’s evolution over the course of her side stories); certainly there’s cuteness present, and it’s well done. But there’s more to it than that. It’s the way that Mamori, as cute as she might be, is also fundamentally a good person who cares about others and who brings out a corresponding care for her in others; it’s the way that Tsubasa leans into her insecurities, works to master whatever she’s afraid of, and bursts out with a performance that is glorious partly as an expression of technique and style, partly as an expression of the joy of showing that you can do something, and partly just as the joy of being the good, shining person that Tsubasa is.

 

It’s the joy of the stickers; or, for that matter, the non-joyful range of emotions of the stickers. The game has a messaging platform that it uses to push the plot forward; like most modern messaging platforms, it has stickers. Which isn’t something that I’ve experienced personally: I’ve never used Line, and while I’m aware that iMessage has stickers these days, I’d never investigated them personally.

Tokyo Mirage Sessions has stickers; they are adorable, with the different characters having their own sets expressing their personalities. But they’re not just adorable, they really make a difference: seeing a conversation end with, say, the “Tiki is worried” sticker has an impact that words alone don’t.

This is, I’m sure, not a surprise to most people reading this: I realize that I’m behind the times in my use of messaging platforms, and that I also spend time in my own head in a way that makes me oriented towards words instead of pictures. (I appreciated the way Yashiro presents that sort of person in the game.) I’m a convert to stickers now though, or at least I’ve started using them some; sadly, there isn’t a Tokyo Mirage Sessions iMessage app, but it turns out that iMessage apps are really easy to write, so now I can send those stickers myself! (Sadly, they’re too small to work well as Slack custom emoji, even when jumbomoji…)

 

It’s even in the joy of the combat. The combat will be entirely familiar to any Persona player: they added in the Fire Emblem weapon triangle, and even the lead character only has access to one of the game’s equivalent of personas, but the individual combat is otherwise essentially the same, down to the names of spells. (And personally I think both of those changes are improvements over standard Persona combat.)

But the combat has a little more flair, a little more style: it’s thematized as taking place on a stage, with a cheering crowd and big pictures of your team on the side. And yes, a little more joy: there’s something cheerful about the way that the team members do their attacks, especially the way that they do follow-up attacks.

I haven’t always been a fan of the way that, when JRPGs moved to the third dimension, they added in animations for attacks, potentially even rather lengthy ones for special attacks. But somehow Tokyo Mirage Sessions pulls this off in a way that kept me watching the animations until the very end of the game, even as the attacks get longer.

Because they really do get long: you unlock an ability for your team members to do follow-on “session” attacks if you attack an enemy’s vulnerability (which each team member can almost always do): so each attack quickly turns into a trio of attacks, and then, halfway through the game, non-primary party members can join in, so you get up to seven attacks. And then, as you complete side quests, every once in a while, two team members will put on a special joint performance, which will allow the chain to restart.

The game is reasonably thoughtful about this from a “waste of time” point of view: for most enemies, there just aren’t that many interesting choices, so having you defeat weak groups of enemies with a single attack plus a chain of followups jumping from enemy to enemy makes them less tedious, and having you defeat enemies that are just under your level with one attack plus a chain per enemy also works well. And that sort of respect for the player’s time is important: but equally important is the way the chains come off as a bunch of skilled performers who enjoy showing off their craft and riffing off each other. The joint performance animations really do take a little while, but they don’t show up that often, and they include some of the single cutest animations in the entire game.

 

Those chain attacks and joint attacks, in turn, point out the final aspect of the game that makes it so special: its focus on teamwork and companionship. When writing about Persona 4, I mentioned that one of the things that I liked about that game is that it presents your superpower as being a good friend and collecting good friends. And Tokyo Mirage Sessions does something very similar: you join a production company, and apparently you do get jobs as a singer / dancer / actor, but you’re never a star in that regard: your party members are the stars, they’re the ones whom you see in videos, on posters, on magazine covers.

But what you do is help them grow, help them become better. At the start, your role is more one of encouragement, of literally providing courage to Tsubasa. But, as the game goes on, you get deeper: you turn into a set of eyes that can coach people, and you even help your team members learn from each other as well.

I remember wondering halfway through the game when the protagonist would get his own music video, and then (once I thought about the question) realizing that the answer was “never”, because you’re not a star. But then, when the credits rolled around, I heard a song over them with an unexpected voice, and realized: finally it’s your turn.

Which made sense in so many ways. On a basic level, it makes sense to let the protagonist sing the last song that you hear, the one that brings the game to a close. On a thematic level, though: the song is playing over the credits, which means that, while you’re listening to it, you’re seeing the names of all of the people who worked together to bring the game into existence, so you want the singer who is the ultimate manifestation of teamwork. And, on a musical level: the voice isn’t the voice of a star, there’s nothing ostentatious about the song and it’s not clear that the singer would be well suited to a more ostentatious song. But it’s performed well, you have no trouble imagining it as being done by somebody who is professionally successful in a more background/supportive role.

And, emotionally: there’s something about that last performance that makes me just feel like I’m at home. I’ve listened to the soundtrack a bunch of times, I like all the songs on the soundtrack and I like some of them quite a bit, but much of the time I think that that last song is my favorite song on the soundtrack. It’s the one that lets me relax, feel like I’m part of the family, and just be happy.

 

This game and Persona 4 are my favorites from the games that I’ve played for the first time this year; Tokyo Mirage Sessions seems like it should be a minor side project, but, as far as I’m concerned, it’s some of Persona Team’s best work, and I personally think it’s better than Persona 3. From a game mechanics point of view, they’ve made intelligent choices about what to keep and what to refine: I prefer this version of the dungeon exploration, the combat mechanics, and the leveling mechanics. The studio has always been stylish, and with Catherine we saw that perhaps starting to come to the fore a little more, but the evolution that Tokyo Mirage Sessions shows in that regard is significant. (And that combined with the footage I’ve seen of Persona 5 is making me very optimistic about that game!) Most importantly, the emotional grounding of this game is real and deep.

Having said that, Persona 4 (and Persona 3, for that matter) also has its own virtues that Tokyo Mirage Sessions doesn’t show as strongly. Those games are built around a calendar, with a corresponding focus on daily life and on small-scale, intimate situations. Not that the story missions in Tokyo Mirage Sessions don’t provide intimacy: on the contrary, they absolutely do present you with your team members in vulnerable, honest situations. But it’s different from seeing them in school day after day for month after month, from going to the same streets and stores over and over again. And, also: a game built around a team of idols is going to be different from a game whose heart is an elementary school student who has lost her mother.

So yeah, Tokyo Mirage Sessions doesn’t have the same type of emotional texture that Persona 4 does, and you could make a case that it loses something with its focus on a group of people who are larger than life. (Though that is only a difference compared to a Persona game: it’s entirely in character with the anointed-savior-of-the-world plot of most role-playing games out there!) But, if you accept that premise: it does what it does very well indeed, with flair and style, with joy, and, ultimately with love and caring.

mini metro mario run

February 26th, 2017

Mini Metro is a good game. Interesting mechanic, simple but with (I suspect) a decent amount of depth, and (at least on the iPad) an interface that works extremely well with the gameplay.

Super Mario Run is also a good game. Despite being a one-button game, it is unquestionably a Mario game; and the levels are interesting enough the first time, and have two separate mechanics (the colored coins, the “compete against ghosts” mode) that encourage you to really master those levels.

I’m not currently playing either game, however, and I’m not entirely sure why. Maybe it’s just that I don’t like either of them quite enough to compete with my gameplay time? (Possibly true for Super Mario Run; Mini Metro seems like it should make the bar, though.) Maybe I’m not spending quite as much time playing games as I used to? (I’m actually not sure if that latter statement is true or not; I certainly spent enough time on Tokyo Mirage Sessions, though…) Maybe I don’t like clean little games as much as I would like to pretend I do? (But I spent a lot of time playing Imbroglio last summer.) And: why am I still doing various Conceptis puzzle games (Fill-a-Pix in particular), when Mini Metro seems like it could plausibly be as evergreen? (I was also still playing Love Live for a long time, even though I think it’s a much worse game than Mini Metro, but that’s a very different game, and one which has very low-weight “return every day for 30 seconds” mechanisms.)

 

Comparing Mini Metro to Imbroglio, the latter game had the advantage that it gave me staged challenges to learn the game. I’d try out a new character, introducing not just a new ability but also unlocking new weapons; I’d work to get a score of 128 with that character, while getting a feel for the new weapons and designing a board of my own; I’d then try to push my board further, refining it in the process; and then I’d move on to another character. So I always had something relatively concrete to learn, and something that wasn’t a big step from what I’d been doing before: it’s designed to actively support deliberate practice. With Mini Metro, in contrast, there aren’t the same small steps: it’s easy to unlock all the cities, the differences between the cities don’t seem significant enough (at least at my skill level) to make me feel like I’m learning from the different cities in the same way I was learning even from the different character abilities in Imbroglio, let alone the weapons. So I instead have to set a challenge of getting a certain number of passengers; that is indeed a challenge, but it doesn’t have the same scaffolding.

Which isn’t a bad thing: it’s no more scaffolding than Flight Control HD had, and I loved that game. I think the other thing that’s going on with Mini Metro is the play sessions: they require you to concentrate and be ready to respond, not with the same level of constant attention as Flight Control but for a longer duration, maybe 10 minutes or even 15?

And that’s not a bad thing at all, either: I think the game strikes a real balance between giving you time to think and consider the bigger picture while forcing you to make decisions regularly. So I really appreciate that. But, at the same time: that length means that it can’t fit into quite as small chunks of time as some games do; and the active nature means that it’s probably not the best game for me to play right before going to sleep, because it will make it a little hard to fall asleep. Also, comparing it to the Conceptis games, it doesn’t feel right to pause Mini Metro (and I don’t even know if pausing is possible): a hard Fill-a-Pix board may take hours for me to solve, but I can spend those hours over multiple days.

 

So the upshot is: if I want to play Mini Metro for long enough for me to really get a feel for its depths, then I’ll have to commit to it: it’s not fighting with other iPad games for space, it’s fighting with narrative games for space. And it’s not quite managing to win that fight. Though, now that I think about that: Super Mario Run would fit into my iPad spaces. So that’s not the only thing going on there: I think the other issue is that, once a game qualifies as one that I want to spend time with in random iPad moments, then it can take up part of that space for quite a while, making it harder for competitors such as Super Mario Run to dislodge it.

At any rate: games I’m happy to have played. I may well even return to one or both; and, if not, they’ve at least taught me something about what I value spending time on in practice.

fingers on scales

February 16th, 2017

A couple of months ago, I ran across the paper “The Moral Character of Cryptographic Work”, by Phillip Rogaway. It’s a very good paper; I encourage you all to read it, instead of this blog post! But, for those of you who are still here: the question there of the political implications of nominally apolitical subjects (the pure math underlying cryptography, in that case) reminded me of problems that I’ve struggled with a few times over the years.

For example, I don’t want to have anything personally to do with the military. I’ll accept that a country the size of the US needs a military and I have respect for (most) people who serve in it; but my belief is that, over the last half-century or more, the US military has been the aggressor far more often than the defender, that it’s done much more harm than good.

I write that I don’t want to have anything personally to do with the military, but the truth is that I’ve accepted military money, and more than once. In the summer after my freshman year of college, I worked at a military contractor; I don’t remember enough details about the funding of the project that I worked on to be sure, but I assume that my paycheck came straight out of the DoD. I justified it in that the project was, on the surface, in no way militarily-focused, it was applicable much more broadly: we were working on a system for automatic verification of computer programs.

And, for grad school, I got a scholarship from the military; they gave me a slightly higher stipend than the NSF would have, and they didn’t impose any requirement for me to work with them, so I figured, it wouldn’t change my actions in any way, why not take their money?

Later on, I applied for a job with a company that worked on free software; in particular, they worked on GCC (the GNU C compiler), and one of their clients (if I’m remembering correctly) was the Lawrence Livermore lab, who wanted improvements to GCC to help numerical simulations used in their nuclear weapons work. The idea of helping nuclear weapons work squicked me out enough that I withdrew from consideration for the job (and I have no reason to believe they would have hired me). Of course, it helped that some of my other job leads seemed to be turning out well, I’m not at all confident if I’d have made the same decision if my job search had been running dry.

 

I’m not saying that, even for a pacifist (which I don’t necessarily consider myself to be), any of those three examples would be situations where military collaboration would be immoral. In all of the situations, the work was in no way of a strongly military nature: I’d have ended up doing the same sort of pure math Ph.D. no matter my funding source, and making an open-source compiler even better is a positive good in the world! So they’re different from the cryptographic examples in the article I mentioned: different from doing cryptographic research within the confines of the NSA that will never see the light of day, but also different because, even setting the NSA aside, effective cryptographic results are either going to help you keep secrets or help you uncover secrets, and in neither case is that morally neutral. If you’re working in a field like cryptography, I would certainly recommend thinking hard about what you want that work to be in service of.

But that lack of direct military impact is exactly what I want to talk about. We’ve evolved into a society that is very good at giving powerful institutions what they want: institutions find what level of collaboration a given person is comfortable with and convince that person to do exactly that much collaboration. Continuing with the military example: if you want to kill for your country, they’ll hire you to do that. If you would rather not kill but are okay directly supporting those who do, they’ll hire you to work behind the lines. If you don’t want to work in the military itself but you still support the institution or are neutral to it, then there’s a job for you as a military contractor. If that sort of active support makes you feel uncomfortable, the military will still pay for and benefit from work on technologies that are useful to but not specific to the military.

Ultimately, this collaboration diffuses down to the level of working for a company that manufactures nails that are sold on the open market with the military as one of its buyers, or to paying taxes with some of that tax money going to the military. Renouncing collaboration at that level means giving up both a huge number of personal benefits and societal benefits to other institutions; it’s a rare person indeed who would avoid that sort of collaboration, whether because of a desire to avoid the personally unpleasant consequences or because of an active desire to affirm the broader virtues of a society where nails are available on the open market and where taxes support programs that make our society as a whole stronger. But, even accepting (as I certainly do) those last points, it remains: powerful institutions are capable of putting their fingers on the scales to tilt society in their direction at many levels.

 

I work in Silicon Valley: sometimes for tech startups, sometimes for larger companies that have acquired those startups. And I do believe that Silicon Valley brings a lot of benefits to the world; but, following the above reasoning: there’s also a vast amount of money sloshing around, and it is inconceivable that that money isn’t being used to tilt the scales in directions that benefit those who control the money. There’s a level of indirection beyond the “direct military funding” example, though, so it’s a little less obvious what ends I’m working in service of.

I mean, with some companies in the valley (and none of these examples are ones I’ve worked at), the moral linkages are obvious: Palantir, for example, is creepy as fuck, they’re not even trying to hide that, just look at their name! Uber is a little more subtle: they’re consciously placing themselves in the middle of a bunch of really important societal shifts, shifts that are large enough that I don’t feel like I can clearly see which ones will end up with us in a better world and which in a worse world, especially in a medium-to-long timescale; if Uber weren’t so obvious about not caring about democracy or workers, I might wonder about whether I’d be interested in working for them, but, well, they are pretty obvious about both of those points. Facebook is a step further into uncertainty: connecting people is good, getting information that’s interesting to you is good, except that a filter bubble is bad, and monopoly power over certain classes of interaction and information access is bad.

And then, taking one step further away: setting aside the question of what companies do, there’s the question of whose pockets you’re putting money into by helping those companies become valuable. I certainly wouldn’t want to work at a company where somebody who thinks women’s suffrage is harmful is in a position of leadership (which rules out two of the companies in the previous paragraph); I don’t know exactly where I draw the line with him, but I’m very glad that the company I work for doesn’t have Founders Fund as an investor, and I hope it stays that way. He’s not the only prominent Silicon Valley VC whom I find abhorrent, though; it may be that finding a valley company that’s funded by ethical VC firms isn’t any more possible than buying your gas from an ethical oil company. Or maybe that sort of nihilism is exactly wrong, in that it encourages us to give up rather than trying to make ethical choices!

 

Looking a little more broadly than Silicon Valley: as I write this, we’re getting an object lesson in how many people in the country support white supremacy, support Christian supremacy, support patriarchal supremacy. Those are powerful institutions, putting their fingers on the scales of society in countless ways.

And I qualify on two of those three categories; I’m sure that has given me huge benefits over my lifetime, much more than I’m consciously aware of. I’m also sure that I’ve both implicitly and, at times, explicitly supported the wrong side of those positions; I’ve also explicitly worked against them at times, don’t get me wrong, but still.

With something as deeply woven into the fabric of our society as, say, patriarchy, it’s impossible to not be compromised in countless ways, even if you want to do the right thing. I think that men and women should be compensated equally, and I also recognize that they aren’t. That doesn’t mean that I volunteer to give up part of my paycheck, it doesn’t even necessarily mean that I should volunteer to give up part of my paycheck, but it does mean that I shouldn’t pretend to be confident that I’m “earning” everything I get: if I were a different but equally capable person, the chances are that I wouldn’t be getting the same compensation. The chances are, in fact, that I wouldn’t be doing what I do at all: looking around at my last several jobs, it’s abundantly clear that society is pushing men and women in different directions. I would like to pretend that I’ve gotten where I am out of merit, and I do actually believe that I’m a good programmer, but still: clearly I am and have been for decades competing on a playing field that benefits me. And it’s not clear to me either what the ethical responses are to this situation or whether I personally am willing to accept the consequences of those ethical responses.

 

So: powerful forces have their effects everywhere, at all levels. They figure out where each of us individually have our limits and then push us in their favor towards those limits. And just being aware of the extent of this is very difficult, let alone navigating it at all successfully.

Fortunately, as the protests over the last month have shown: there are powerful forces working in favor of the many, not just in favor of the few…

someone is wrong about apple on the internet

January 20th, 2017

Random thoughts kicked off by the new MacBook Pros (or, really, by people’s reaction to them):

  • It was really weird to see how strongly people reacted to the 16GB memory limit.

I totally get being disappointed that 16GB is the cap: that felt low to me, too. But (and I wish I’d saved links) the reaction seemed much stronger than that: that a 16GB cap means that this isn’t a pro machine, that it’s impossible for professionals to get work done in 16GB.

And that’s ridiculous. I’m a professional; I use Macs for work; all of my Macs have 16GB of memory. And, given that Apple hasn’t released a laptop with more than 16GB of memory, there are plenty of people serving as existence proofs that professionals can use a 16GB Mac.

  • The shape of Apple’s laptop line

Then, once it turned out that the reason for the limit was chipset limitations, the reaction changed: some people said “fine, I guess Apple made a good tradeoff”, and other people said “pros want a powerful machine, so Apple should have released something heavier and/or with less battery life so that we could have gotten more memory”.

Which is something nice to wish for in an alternate universe, but I can’t imagine it happening in this one. Apple’s laptop strategy is clear and consistent: they want to have two models, a cheap and light one and a more powerful but still pretty light one. Sometimes (as is happening now) they go through a transition period, where they introduce a new, lighter model that starts out in the middle and then, as the price drops, replaces the bottom one; we saw this with the Air replacing the plastic MacBook, and we’re seeing this now with the non-Pro MacBook being introduced in the middle but being named in a way that makes it clear that Apple intends it to replace the Air once its price drops enough. (The lack of retina Air models, or of Air updates at all, also makes the intended transition clear.)

Apple isn’t going to introduce a third, even-more-pro level; Apple isn’t going to make people who can’t fit in the skinny MacBook’s constraints use something fat; and it’s abundantly clear from their OS work over the last few years that battery life is a priority for Apple.

Also, they only update the body for their machines every few years. So, if they make the machines large this year, that will affect them for years to come, even after chipsets have been released that allow them to use more memory at low power draws. Again, it’s no surprise that Apple is going to choose a tradeoff that makes the machine a little underpowered now, growing into something entirely adequate in future years: we’ve seen that play out before.

  • The dangers of single suppliers

Even though it’s no surprise that Apple made the tradeoff they did, that doesn’t mean that other people should prefer that tradeoff. It happens to be one that I’m personally happy with — 16GB is fine with me, while more weight is not fine with my back — but there’s no reason why everybody should have the same priorities as I do.

Disruption theory warns about the dangers of overserving; part of me wants to say that some of what’s going on here is that Apple is being smart by not giving in to the temptation to be led into overserving by following their most profitable customers. Apple has always been a weird case for disruption theory, though: they stay in the high end but remain in touch with enough people that they can continue to grab large profits because of both their volumes and profit margins. I wish I understood what was going on there, but it feels to me like they probably understand something about product placement / stratification that basically nobody else does, and I suspect that staying a bit away from the high end is part of that.

But that choice leaves those best customers frustrated. In most circumstances, that would be fine: another company would spring up; that’s harder in this case. I do wonder if enough people will flee to either Windows or Linux to make a difference in the medium term, though.

The other situation in which a single supplier is causing problems here is Intel, with their chipsets. I’ve mostly ignored the ARM Mac rumors before as not relevant any time soon, but now I am wondering if they’ll come to pass sooner rather than later: the ARM chips are catching up very quickly in power, and if Intel is causing active problems for Apple, then maybe Apple really will jump ship in a few years?

I also wonder if Apple can stick with Intel CPUs while designing their own chipsets. I heard something about there being licensing issues that would prevent that; I’m not sure if that’s true, and, if it is, what might be the ways around that. (Can Apple twist Intel’s arm enough? Do a deal with AMD? Buy AMD?)

  • The role of the laptop

What do we want out of a laptop? The current consensus vision is: we want to have a single computer which we use wherever we are, doing whatever we want on it. If we’re at work, we’ll plug it into a monitor (and maybe connect a keyboard, a mouse, potentially other devices), or we’ll take it with us as we go from meeting to meeting. If we’re at home, maybe we’ll also sit at a desk and plug it into a monitor, maybe we’ll sit in a comfortable chair and have it in our lap. If we’re in a coffee shop, we’ll plunk it on the table next to us.

Depending on the kind of work we do, the kinds of environments we like spending time in, our modes of transportation, and just our personal preferences, we’ll value aspects of the laptop differently: maybe we’ll want more compute power, more storage, more battery life, less weight, a smaller size, a larger screen. But modern laptops are, year after year, reducing these tradeoffs: these machines are very powerful, very light, have a battery that can last all day, and a lovely screen. Not powerful enough for everybody, not a big enough battery for everybody, and so forth, but really: laptops these days are great for lots and lots of people in lots and lots of situations!

At the same time, though: networks get better and better as well, as do the services available over those networks. So why worry so much about having a laptop as a single machine that you can do everything on? You can reach the same files anywhere with Dropbox, your e-mail is stored in Gmail, you spend huge amounts of time browsing the web, and AWS is happy to provide vast computing and storage resources for you. With that lens, it’s less clear that it’s important to focus on the power of your laptop.

To return to the question of wanting more memory: for me, personally, lots of the situations where I would want more memory are situations where, honestly, I’d be just fine doing the task in question on a Linux server somewhere. It happens to be the case that, much of the time, my laptop is good enough; but if it’s not, it’s not particularly clear to me why I would prefer a more powerful laptop over spinning up an AWS instance. (Or, for that matter, over putting a generic desktop machine under my desk and installing Linux on it.) The laptop is a nice interface to that compute power, but that doesn’t mean the laptop has to host that compute power.

Maybe I’m eccentric in that regard; but I imagine that a lot of programmers feel the same way, as do a lot of people who are, say, doing scientific computing. I’m sure there are people out there who need to do tasks that require a rich graphical interface that’s colocated with significant compute power (people who spend lots of time doing video work, maybe?); I also bet that there are lots of people (including me, honestly) who work that way mostly out of habit, though, or because of tooling limitations.

I’m pretty sure that most of the people who are talking about how awful the new Macs are will stick with them; but it wouldn’t surprise me if a non-negligible number ended up in a split world between a Mac and a Linux server (or servers) somewhere. And I suppose it’s possible that enough will go to Windows to make a difference; I’m glad that Microsoft is trying out some interesting hardware ideas.

  • Apple’s commitment to the Mac

Another phase of the reaction: the new laptops are a sign that Apple doesn’t care about the Mac. I don’t see how to square that claim with the existence of the Touch Bar: nobody was asking Apple to create the Touch Bar, it required real engineering effort at a hardware level, they put in the work to add support across a wide range of their applications, and apparently the API is well-done as well.

That doesn’t mean that the Touch Bar is a good idea! It just means that Apple spent significant engineering effort on this Mac, effort from a wide range of teams, and effort that they could easily have avoided spending.

I’m not saying that Apple cares as much about the Mac as they do about the iPhone: it’s much more important to them to make a splash with a new iPhone every year than to constantly improve their Macs. But there’s a big difference between “iPhones are more important to Apple” and “Macs aren’t important to Apple”.

  • Mac desktops

The above arguments mostly settled down to a general feeling that the new laptops are okay, but that we still need thinkpieces about the Mac being doomed, because Apple clearly doesn’t care about professional needs for the desktop.

And it’s not so clear to me how that will play out in 2017. I’m almost positive that those worries are significantly overblown, if for no other reason than that articles about Apple being doomed are 1) frequent and 2) always wrong; but I can’t see the details. And it’s certainly the case that the Mac Pro raises eyebrows: I can’t imagine that Apple’s plan when they launched that machine was to leave it basically untouched after launch for three and a half years and counting. So something changed their plans, and I don’t know what caused that change or what their new plan is.

I am sure that Apple still cares about Mac desktops: the most recent iMac iteration is apparently a pretty glorious machine. And it’s a machine that’s good enough for the vast majority of users, like the new MacBook Pro; but desktop machines should have a higher ceiling than laptop machines, and the iMac has a lot of headroom above it.

Certainly something more powerful will come in 2017: if nothing else, an iMac that supports more than 16GB of memory! (And presumably with an external keyboard with Touch Bar, though I don’t know for sure if Bluetooth has bandwidth constraints that would make that unworkable.)

I kind of feel like something else is coming, though. Above, I claimed that Apple has historically had exactly two classes of laptops, one for most people and one higher-end one; they’ve also generally had that for desktops, an iMac and a Mac Pro. And I like symmetry arguments, which means that I expect that to continue to be the case!

Except that, for years now, they’ve actually had three desktop lines, with the Mac Mini in the mix as well; and, unlike the three laptop lines, it hasn’t been a temporary measure caused by phasing out old models and phasing in new models. So their product lineup hasn’t actually been that symmetric in the past; and symmetry, while nice for mathematicians, isn’t necessarily the best business strategy anyways.

I think that the Mac Mini is on its way out: its use cases have largely either gone away or been satisfied by the Apple TV or been satisfied by NAS devices. Really, the important gap is at the high end, not at the low end.

But it’s less clear to me that there’s a reason for physical distinction in desktop machines the same way there is for laptop machines. And the main reason for a physical difference at the high end would be to allow expandability / user replacement of parts, which is something that Apple has been steadily moving away from over the last decade.

So I can see Apple saying that the iMac form factor is good, and just putting in more options for high-end components there; that probably seems like the most likely option to me? (Possibly branding some configurations as an “iMac Pro”, but with the two versions as much more of a single continuum than their laptops are.) I guess the second most likely option is going back to an expandable Mac Pro: it’s going in an un-Apple direction, but it does provide a clear justification for a split between two lines of desktops, and clearly there’s something about their non-expandable Mac Pro that didn’t turn out the way they hoped. The third option would be a new model of non-expandable Mac Pro (with a promise that this time it will be different?); and the least-likely-sounding option to me is for them to say “the iMac at basically its current power level is fine, we’re not going to try to make desktop computers for the 2% or 5% or whatever of people who want something more”.

I’m really not sure, though: with the laptops, there have been lots of examples over the last few years about the direction Apple is going in and what’s important to them, while with desktops, there have been many fewer examples. And, well, it’s harder to make predictions about the future than about the past.

  • The retina transition

I expected the retina transition to be quite smooth, after observing the first couple of examples; but it sure hasn’t been for Macs, with Apple still not offering a cheap retina laptop, with the Mac Pro mess, and with them only now having a good solution to plug a laptop into a retina monitor.

So: I underestimated the ease of the retina transition and the importance of the bandwidth that ports provide. And that is one possible story behind the Mac Pro stagnation: Apple was unwilling to invest in improvements to machines that didn’t support retina displays, and it took longer than they expected for a suitable connector to appear? Actually, now that I type it, that seems pretty plausible: maybe my third scenario (new non-expandable Mac Pro) is more likely after all, with Apple claiming that, with USB C / Thunderbolt 3, everything will be wonderful and they will continue to improve it. Not sure the Mac Pro audience will trust Apple if they make that claim, though.

(And, while I’m on the subject of ports: anybody who was surprised about the ports on the new MacBook Pros hadn’t been paying attention. Though I don’t think too many people were surprised, even if they were complaining.)

  • The machines themselves

So: the machines seem like they fit pretty well into what you would expect from Apple. My home laptop was quite old, so I’d been waiting for new models to be released so I could replace it; I got a 15" base model MacBook Pro.

And, indeed: it’s a great machine, in basically unsurprising ways. I liked my prior laptop (a six-year-old 17" MBP), but I certainly prefer having a machine that’s significantly lighter and has a significantly nicer screen. One weird thing about the screen configuration: the default logical resolution the OS selects is a little higher than half the physical resolution. Honestly, it looks totally fine at that resolution, I didn’t notice pixel artifacts, but when I set it to exactly half the physical resolution instead, the larger text size made my aging eyes noticeably happier. At any rate, I appreciated the higher physical resolution giving me the flexibility to change logical resolutions.
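To make the scaled-resolution arithmetic concrete, here’s a small sketch; the panel and mode numbers are the ones for the 2016 15" model as I understand them, included here for illustration:

```python
# Scaled-mode arithmetic for a retina panel (2880x1800 on the 15" model).
# In the default "looks like 1680x1050" mode, the physical-to-logical
# ratio is fractional, so the OS renders at 2x and downsamples to the
# panel; at exactly half the physical resolution, each logical pixel
# maps cleanly onto a 2x2 block of physical pixels, and text gets bigger.

PHYSICAL = (2880, 1800)

def scale_factor(logical):
    """Physical-to-logical pixel ratio along each axis for a scaled mode."""
    return (PHYSICAL[0] / logical[0], PHYSICAL[1] / logical[1])

default_mode = (1680, 1050)  # the default "looks like" resolution
half_mode = (1440, 900)      # exactly half the panel

print(scale_factor(default_mode))  # fractional: rendered at 2x, then scaled down
print(scale_factor(half_mode))     # exactly (2.0, 2.0): pixel-perfect mapping
```

That exact 2:1 mapping is why the half-physical mode avoids scaling artifacts entirely, at the cost of less screen real estate.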

The Touch Bar is a sign that Apple cares about the Mac, but that doesn’t mean that it’s a good idea! And I’m still not completely sure what I think about the Touch Bar: numbered function keys do seem a little silly, but only having four function keys worth of operations available by default is a little low, and Safari in particular doesn’t provide useful functionality with the portion of the Touch Bar that it takes over for its own usage. (Not that I have a better idea about what Safari should do with that space.) The nonphysical escape key is totally fine, I can reliably reach it without looking, and Touch ID is great.

I was a little worried about the keyboard, but, once I’d spent a little time with it, I turned out to fall into the class of people who like the keyboard. In fact, I’m pretty sure that it’s now my favorite keyboard: I really like how little pressure you need to type on it. (So now I want all of my other machines to have that keyboard, so my fingers can really get used to it!) The big trackpad is great too, the fake click is magical, and (once I got used to the fact that I could now easily click with a finger instead of a thumb) I appreciate how little effort the click takes. I’m not entirely sure what I think about having two depths of click: it tripped me up several times when I was getting used to the trackpad, and while that doesn’t happen to me now, I’m not yet getting value out of the two levels that justifies the complication. Maybe I’ll like having the second level as I use it more?

I was excited about the True Tone display on the iPad Pro; I was expecting it to show up on all their new products, I was disappointed it wasn’t on the iPhone 7 (and I probably would have bought an iPhone 7 if it weren’t for that, actually), and I’m disappointed it’s not here. Which is another way in which I have to update my mental model of Apple’s behavior: it seems like the sort of improvement that would spread quickly everywhere, but now we have two existence proofs that that’s not the case. Not sure what’s going on there.

The USB C Thunderbolt 3 ports are great; I’ve only seriously used them once, but I was impressed by how quickly I was able to use them to do a full-disk clone. Though, admittedly, I’m not sure how much of that is surprise at the speed of the port and how much is my expectations being set by old hardware. I was expecting to miss MagSafe, but I haven’t particularly; I do miss the orange/green light on the charger a little bit, though. And I am not impressed by them not including a long cable from the power outlet to the brick in the box; the current Apple does seem a bit cheap in how they nickel-and-dime you.

I’ve run into a few more bugs than I would have liked. My initial restore when setting up the machine ran into enough problems that I had to bring it into an Apple store to reset it to factory settings: clearly Time Machine over the network isn’t as reliable as it should be. (I’ve since added SuperDuper as a backup option.) There were a few OS crashes; they’ve mostly or entirely gone away after the OS update, though. At first I was worried about battery life, but now the battery life is great; not sure if the problems were caused by OS problems that have been fixed or by some sort of new machine experience (the initial Spotlight index?) or by Miranda playing lots of MySims, but whatever was going on, I’m happy now. And in general the bug level has gone down to an acceptable level; not quite as low as I would like, but low enough, and the trajectory is in the right direction.

Touching on the “role of the laptop” note above, I’m starting to rethink what my machine mix should be. For the last few years, one decently powerful laptop plus an iMac with more storage (e.g. with my music library on the latter) has been the right choice for us, but I’m not convinced that it will be the right choice once Miranda has gone off to college. It seems like one decently powerful machine is a better choice for me, which either means a laptop that can do everything plus a monitor for times when I want it or else an iMac plus a lower-powered laptop. Not sure; for now, I’ll stick with my current configuration, and I’ll buy a new iMac when they release them, presumably in a few months. (My iMac still has a spinning disk, and that really is slowing it down.)

At any rate, it’s nice to be using a new computer again; one advantage of only upgrading rarely is that the upgrades feel better when you do them.