random links: august 30, 2009
August 30th, 2009
- Tanuki testicle art.
- One day in kanban land.
- Pixie Driven Development.
- A plain-text version of the Declaration of Independence. (Via Kelley Eskridge.)
- Rock Band as a music theory teacher.
- Maira Kalman on Thomas Jefferson and Benjamin Franklin. (Via @bos31337 and The Edge of the American West.)
- The red handprints are a particularly nice touch.
- Another way to approach poems/stories.
- I never get tired of seeing new optical illusions.
- Glad to see I won’t be reverse-polish-deprived when/if my current HP calculator bites the dust.
- (Via @garb.)
- Trompe l’oeil murals. (Via @scottmccloud.)
- Japan still gets weirder games than we do. (Yes, the game apparently involves writing songs which are then sung in game by a polar bear.) (Via @tinysubversions.)
- Glad the meme still has some life in it.
- I can’t wait for the Beatles game.
- The secret history of Ada Lovelace. (Via @elenielstorm.)
- Thoughts on the order in which to implement a video game. Interesting how many of the ones that don’t work seem compatible with agile. (Via @kateri_t.)
- This is going to be awesome. (Or, maybe, a complete disaster! I doubt it, though.)
- Nels chiming in further on the save game issue.
puzzle quest: galactrix
August 29th, 2009
I still haven’t made up my mind about Puzzle Quest: Galactrix. It never grabbed me in the same way as its predecessor; how much of that is due to the novelty wearing off, how much to the strangely low quality of the DS port, and how much to the core mechanics?
I actually don’t think it’s the core mechanics, though it may be due to the peripheral mechanics. The non-fixed gravity, once I got used to it, does make the game slightly richer: in particular, any time you swap gems, you have the choice of two directions in which to carry out that swap (and to have pieces subsequently fill in from). That gives you something extra to think about, and meant that playing the game wasn’t exclusively an extension of the habits that I’d built up with its predecessor.
That’s the main combat mode; like its predecessor, there are another half-dozen or so variants of the gameplay, used in different situations. These took me a while to get used to—in particular, like Bill Harris, I didn’t initially appreciate the way that the leap gate mode punished you for setting off chains. (Incidentally, Bill has two other posts on the game; there isn’t much blog discussion of it in general.)
Thinking about it a bit more, though, I eventually changed my mind on that issue. If all the mini-games had the same rules, there wouldn’t be much point in having mini-games; so there’s a real benefit in having what’s good in one mini-game be indifferent or even bad in another. And, on a more subtle point, the main attack game is probably the deepest of all the gameplay modes (which is a good thing!); the other games are typically more focused, and it turns out that the tricks that you learn to focus on from them can help broaden the range of your tactics in the main attack game. For example, a key part of the attack mode gameplay involves making sure that your moves don’t set up good attacks by your opponent. Controlling cascades is an important aspect of that; the rumor game mode focuses on that, and (as noted above) controlling cascades is also an important part of the leap gate game. Similarly, you have various items which will allow you to move twice in a row; the leap gate game helps you focus on setting up attacks one move out.
So it’s a nicely crafted set of games, one which adds up to more than the sum of its parts. Though they’re not all fabulous; in particular, the games that try to get you to use up most or all of a fixed set of gems didn’t work too well for me, certainly not as well as the monster capture game in Challenge of the Warlords.
The upshot of all of that is that I support the game’s primary mechanics, and some of its peripheral mechanics. Other parts of the peripheral mechanics, though, didn’t work as well for me. In the game’s predecessor, you had a leveling-up system that was the most important mechanism for acquiring new abilities (not just buffing your stats), which culminated (when you reached the level cap) in a spell that let you largely take control of battles; the game controlled the levels/abilities of enemies that you fought to generally give you well-matched battles. In Galactrix, however, abilities are controlled by buying items for your ships (and you may need to get better ships to have more slots for items). So you have access to abilities earlier, and it’s harder for the game to match your level.
The result is that, on the one hand, I had some early boss fights that were way too hard for me; on the other hand, about halfway through the game I had a ship that had enough slots for a set of items that enabled me to control the playing field whenever I needed to. So, while having more choices for customization sounded nice in theory, in practice it didn’t work out too well. (And, I will add, the way it turned out is a bit of a blessing in disguise: if I’d had to do more customization, I would have had to spend more time grinding on areas of the game that I didn’t particularly enjoy.)
Some interesting seeds here, but I’m still left with an unsettled feel for the game. Coming in, I wasn’t sure if the original Puzzle Quest was a one-trick pony; having played the second game, I’m still not sure! Or rather, it’s clearly not a one-trick pony, but it might be a one-and-a-half trick pony; on the other hand, it wouldn’t completely shock me if the next iteration put it all together in a satisfying way.
saving, ethics, and the slog
August 2nd, 2009
There’s been a lot of discussion recently about choices in games, and the effect that game save mechanisms have on the ethical impact of those choices. I won’t even attempt to link to the vast majority of the conversation, but two contributions (both involving Nels Anderson) particularly struck me today: slides for a talk by Randy Smith called “How to Help Your Players Stop Saving All The Time” that Nels mentioned on twitter, and an Experience Points Podcast episode on “The Decision Dilemma”.
I’m an obsessive saver when I play games (though, fortunately, these days less obsessive a reloader than I used to be), but listening to Jorge, Scott, and Nels talk on the latter made me realize that many people save games for completely different reasons than I do. The typical scenario that they discussed is a player who saves a game right before a big choice in a game (typically a moral one) and then plays through the different branches, reloading as necessary, before deciding which route to commit to.
The thought of doing that almost never crosses my mind. (Especially if the choice is a moral one.) And when it does, I reject it out of hand. For example, when playing Mass Effect, I wasn’t really thinking too hard when going through the dialogue tree that leads to a choice of which party member dies. I ended up inadvertently choosing to save the party member I liked less; once I realized that, I could have reloaded and not lost much time, but instead I felt a pang of regret and continued playing. (Though, to be sure, I’m not sure I made the “wrong” choice even there—it struck me as the sort of choice that, in the real world, I would want to not make based on personal likes and dislikes, and it’s not clear to me that other factors wouldn’t have swayed me to make the choice I actually made in game.)
Instead, the reasons why I save are quite different: I save because I don’t want to spend time doing stuff that I don’t enjoy. I do not want to have to fight through a stretch of the game, to die, and to have to replay that section. (Unless, of course, it’s a game whose mechanics I’m particularly fond of.) Perhaps worse, I do not want to survive the next section of the game but end up in a weakened state, making battles half an hour later much more difficult. (And probably requiring extra reloading when I reach them!) And, of course, the absolute worst is when I survive by avoiding encounters that would otherwise have given me experience points, forcing me to repeat battles (or grind in order to level up) for the entire rest of the game.
The above mostly plays out in tactical situations. If the outcome of a battle went well enough and it’s easy enough and fast enough to save, then I will typically save; if it went badly enough, I will typically reload; and if it’s in the middle, I’ll play along for a while longer. As I said above, the more strategic choices are much less likely to make me think seriously about reloading; and, even when I’m nervous about a choice, it’s not usually an ethical choice, it’s much more likely to be a choice about which branch of a skill tree to improve my character in.
So the podcast episode was, to me, more a glimpse into other people’s minds than anything that spoke to me directly. Randy Smith’s slides, however, were a different matter—indeed, right near the beginning he talks about frequent saves being driven by a need for safety, which is a good match for my feelings. (Though he branches out into other motivations later in the slide deck.)
I wish I’d heard him actually give the talk; I’m having a hard time grasping the nuances just from the slides. He ends the first part of the talk with a claim that “reducing compulsion is good, regardless of save/load design”, which I tend to agree with—in particular, I don’t claim that my obsessive saving is a good thing, in fact I’m willing to accept that it’s a bad thing. (E.g. because, as he says, it takes my attention outside the game.) I also agree with him that cheap save/load sets up a feedback loop encouraging people to do so more often. (I played Doom rather differently from Marathon, for example.)
It’s not clear to me, however, that I prefer for games to solve this problem by limiting the contexts in which you could save: I’d much prefer to solve the problem by limiting the lack of safety that drives me to save in the first place. (Though I’m also willing to believe that this is a false choice, and that a deeper analysis would lead to a more satisfying resolution. In fact, I’m willing to believe that, if I’d been at Randy’s talk, I would understand him as doing exactly that sort of deeper analysis!)
Consider the basic choices that I outlined above: if I don’t save and then die, or if I do save but don’t reload after a stretch in which I played badly, then I will get punished for my actions, with that punishment lasting in some cases for the entire rest of the game. In other words: if I play badly (or even less than perfectly), the game will reduce my enjoyment of the game for hours to come.
This is a lousy way to treat players. If the game really is about challenging the player’s skills, then of course you want bad play to have consequences; such a game, however, should then hedge its bets in two ways, both by putting skill-driven play front and center and by having individual bouts be bounded, with no lasting in-game consequences from one bout to the next. (E.g. puzzle games, rhythm games, fighting games, multiplayer FPS games.) But if you want your game to have a long-term flow, then don’t treat your players this way.
So: at the very least, bound the negative consequences. There are lots of tactics for doing this; checkpoints are a tried-and-true one, but I also rather like the Zelda technique of both having the game be kind enough that death is relatively rare and having the consequences of death be limited to needing to refill your hearts / bombs / arrows (all of which are available from clumps of grass) and perhaps needing to traverse a part of a dungeon. (Usually a small part: in particular, Zelda dungeons are generally good about giving you a shortcut from the entrance to the boss fight once you’ve gone through the rest of the dungeon once.) (Incidentally, I think part of people’s dissatisfaction with Majora’s Mask is in the ways in which this principle doesn’t hold, or at least doesn’t manifest itself in the same fashion as it does in other games in the series.)
I also really enjoyed the way Lego Star Wars handled this issue: in that game, your character has almost no state at all, which means that the game can simply respawn you when you die. A skill-based player can still take pride in rarely dying when progressing through a level; other people can have no end of fun by simply mashing buttons.
Another issue around saving and loading is the lack of information: most of the time (pre-boss save points being an exception), I save not because I know I’m likely to die in the upcoming area or because I’m likely to play in a sub-optimal manner, but rather because I want to limit my losses in the face of an uncertain probability of death. And, perhaps more interestingly, the reason why I reload isn’t that I know that I played sub-optimally in a fashion that will hurt me down the road, it’s because I know that I played sub-optimally and I don’t know what the consequences of that will be.
If you treat this simply as an information problem, it can be significantly improved without harming gameplay. Start with the reload problem: that shows up most starkly if the game doesn’t put any limits on your capabilities (e.g. your health, your ammo supply, the level of your character). In that situation, if you do anything suboptimal (e.g. miss a single shot!), you may fear that it will hurt you going forward.
If, however, you have caps on these attributes, this problem goes away. For example, if there’s a maximum amount of ammo that you can hold, then if it takes you three shots to kill an enemy whom you could have killed with two shots, and if you subsequently pick up enough ammo that you’re at the ammo limit even after wasting that shot, then you know that the missed shot didn’t hurt you. Concretely, my worry level in Deus Ex declined notably once I started hitting limits of this sort.
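To spell that invariant out in code (a toy sketch of the general idea, not anything to do with how Deus Ex actually implements its inventory; the Ammo struct and its numbers are invented for illustration), the cap erases the history of the wasted shot once you hit the limit again:

    #include <algorithm>
    #include <cassert>

    // Toy model of a capped attribute: pickups are clamped to the cap, so any
    // past waste stops mattering once you hit the cap again.
    struct Ammo {
        int amount;
        int cap;

        void spend(int shots) { amount = std::max(0, amount - shots); }
        void pickup(int found) { amount = std::min(cap, amount + found); }
    };

    int main() {
        Ammo careful  = {20, 30};  // killed the enemy in two shots
        Ammo wasteful = {20, 30};  // took three shots to do the same job

        careful.spend(2);
        wasteful.spend(3);

        // Both players then find a generous ammo crate...
        careful.pickup(50);
        wasteful.pickup(50);

        // ...and end up in exactly the same state: the wasted shot has left no
        // trace in the game state, so there is nothing to reload over.
        assert(careful.amount == wasteful.amount);
        return 0;
    }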
These sorts of attribute caps are most effective in directly attacking the problem of when to reload, but they also help with the problem of needing to save in the first place. Your character’s status with respect to various attribute caps gives you a concrete way of measuring how vulnerable your character is; assuming that the game has earned your trust that it won’t throw major challenges at you without some advance warning, this will frequently allow you to avoid saving without seriously worrying that doing so will hurt you.
I’d love to see more games with significant moral choices. And I’d be delighted to have not saving be a part of that, as long as that doesn’t destroy my enjoyment of the game in a more mundane aspect.
twenty years of beard
July 25th, 2009
As far as I can tell, the last time I shaved was twenty years ago yesterday. If you’re curious what two decades of beard looks like, here’s a recentish picture of me:
That’s actually from three and a half years ago (I don’t take pictures very often, and appear in them less), but while I have somewhat less hair on top now than when that picture was taken, my beard looks about the same. (White hairs have started to appear in the interim, but they’re still very much in the minority.)
As you’ll note, my beard does not in fact extend to ZZ Top proportions. It’s fairly curly (much more so than the hair on the top of my head), so it extends down to my collarbone if you stretch it out. The hairs are still growing, but hairs fall out if I run my hand through my beard, and that’s the equilibrium point between those two forces.
My moustache in particular doesn’t look like it’s been growing for 20 years. My moustache hair grows notably more slowly than my beard hair; in fact, for the first year or two of growing the beard, the moustache hair and the beard hair didn’t meet around the corners of my mouth.
As to why I started growing the beard: shaving was a pain, both literally and metaphorically. So I stopped, and haven’t seen any reason to restart. It may not be to everybody’s taste, but I’m happy with it, as is Liesl, and that’s really all that matters to me. And I’ve worked in professions in which my beard is only mildly eccentric, if that, so there really aren’t any social pressures for me to shave.
I also don’t cut the hair on the top of my head regularly, and in fact I haven’t been cutting it regularly for one year longer than the beard. That hair has gotten cut on two or three occasions over the intervening decades, however, unlike my beard. In general, I’m less attached to having long hair on top of my head (or, indeed, any hair on top of my head) than I am to my beard, but I can’t imagine getting back in the habit of cutting it regularly, either.
galison, strands of practice, and trading zones
July 20th, 2009
The last chapter of Galison’s Image & Logic is about the relationship between (breaks in) different strands of practice within physics. If you treat the notion of paradigms sufficiently seriously, you’re led to think that theoretical breaks and experimental breaks come hand in hand: the two sides of a paradigm shift are incommensurable, so the change in the theoretical viewpoint also means that experimentalists on either side of the break can’t really talk to each other, because they’re referring to different objects, different concepts, even if they use the same words.
Which Galison takes issue with, both for conceptual and historical reasons. As he says, “When a radically new theory is introduced, we would expect experimenters to use their best-established instruments, not their unproven ones.” (p. 799) And indeed, as he discusses on pp. 811–812, when theorists were fighting over the nature of space and time, they took great care to translate their theories into terms that could be tested by the experimentalists of the time; different paradigms fought, one of them (special relativity) won, but the results were agreed to by all parties; there was none of the incommensurability that the notion of a paradigm shift might suggest.
So, rather than breaking at the same time, the experimental practices and theoretical practices underwent changes at different times. In fact, Galison introduces a third strand here, namely instrumentalists, with its own pattern of breaks, and several other practices (electrical engineers, the military) make a showing at various points in the book as well.
And the fact that breaks occur at different times in different strands, Galison claims, is a source of strength. One analogy to think of here is a brick wall: when you line up bricks, you want them overlapping rather than sitting directly on top of each other. That way, the weak points of one row are supported by the strong points of adjacent rows.
So: what does this have to do with agile? The first example of strands with breaks that comes to mind is the TDD cycle. You don’t simultaneously write new code and new tests: instead, you write the test first, giving a break in the testing strand (manifesting itself as a red bar), and subsequently advance in the implementation strand (manifesting itself as that red bar changing to green). And then, of course, you refactor; I’m not sure yet if this is a break in a third strand or if it’s a further advance in the implementation strand. (For that matter, the refactoring can be an advance in the testing strand, as well.)
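To make that cycle concrete, here’s a minimal sketch of one red/green/refactor turn, written as freestanding C++ with an assert-style test; the Stack class and its methods are invented for illustration, not taken from any real codebase.

    #include <cassert>
    #include <vector>

    // Step 1 (red): write the test first. With no Stack class yet, this doesn't
    // even compile -- a break in the testing strand, ahead of the implementation
    // strand:
    //
    //   void test_push_then_top() {
    //       Stack s;
    //       s.push(42);
    //       assert(s.top() == 42);
    //   }

    // Step 2 (green): advance the implementation strand just far enough to turn
    // the red bar green.
    class Stack {
    public:
        void push(int value) { values_.push_back(value); }
        int top() const { return values_.back(); }

    private:
        std::vector<int> values_;
    };

    void test_push_then_top() {
        Stack s;
        s.push(42);
        assert(s.top() == 42);
    }

    // Step 3 (refactor): tidy names and remove duplication while the bar stays
    // green -- a further advance in one strand (or both) rather than a new break.

    int main() {
        test_push_then_top();
        return 0;
    }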
One special aspect of this example: while the strands don’t have their breaks at the same time, they have their breaks in close sequence, in a specific order. Is this a general property of best practice in interwoven traditions? I tend to think not; having said that, it doesn’t seem all that unnatural to me for breaks in one tradition to be followed reasonably closely by breaks in closely related traditions. So perhaps if you measure this with a sufficiently coarse granularity, these breaks look simultaneous, giving rise to the paradigm shift idea; I’m not sure.
Another example from the agile realm: iterations. Here we have breaks in the implementation, in the testing, and in the customer requests. And they’re all supposed to happen at the same time! Which looks dubious from Galison’s point of view; does that mean that Galison is wrong, that iterations are a bad idea, that I’m misreading him or stretching his analogy, or that these breaks aren’t in fact simultaneous?
It certainly seems likely that I’m stretching Galison’s analogy; having said that, I think you can also make a case that these breaks aren’t simultaneous. It’s not the case that Customers approach the planning meeting that kicks off an iteration with a blank mind: they’ve been thinking about what’s most important to work on next, and while they’ll certainly use feedback during the planning meeting to inform the details of what the team should do in the next iteration, there’s not a split in the Customer practice right before the planning meeting. And there isn’t one right after the planning meeting, either: the Customer has to spend a fair amount of time at the start of the iteration helping the rest of the team understand what that iteration’s stories mean. And breaks in the testing and implementation strands don’t happen simultaneously in this example any more than they do in the TDD example.
This last case, in fact, brings us to the second point of Galison’s chapter. The chapter is titled “The Trading Zone: Coordinating Action and Belief”, and he claims that these “adjacent” strands can’t naturally talk to each other without misunderstandings. Instead, members of different strands have to work quite hard to find a way to work together that allows the two strands to learn from each other, to find a common way forward that advances both of their interests. (Cf. Star and Griesemer’s notion of boundary objects, which Galison comments favorably on in a note on page 47.) To do this, the parties develop pidgins or creoles; these languages aren’t enough to allow complete understanding between the two sides, but they are enough to let both sides agree on some amount of focused exchange.
I particularly enjoyed the example that Galison gave on pp. 820–827 of a pidgin language developed during World War II to allow theoretical physicists and electrical engineers to discuss the construction of radar and microwave devices using circuit diagrams. Returning to our programming examples: while, in the TDD case, there’s relatively little scope for misunderstanding (since the same people are doing the testing and the implementing!), we can nonetheless see unit tests as a pidgin language (or perhaps more of a creole) in this case. In fact, maybe that’s exactly the strength of unit testing: forcing a creole language into the situation sets up an explicit trading zone where one would have only been latent without that language, and in doing so it makes you aware of the split between the latent testing and implementing strands, increasing the strength of your work. The example of Customers, testers, and implementers working together is more clear-cut: agile suggests that the three groups spend quite a bit of time talking together, and acceptance tests give an example of a pidgin language that they can use to coordinate their activities.
And, as with the second agile example, Galison suggests reinforcing these trading zones with a shared physical space, to increase the chances that active trading will happen. The physical layout of the MIT Radiation Lab was designed to increase the amount of chatter between different groups; he gives examples of areas in later buildings designed to support particle physics research that are intended to increase the chances that members from different specialties will spend time together.
Though one aspect of agile practice that Galison’s text, to me, doesn’t clearly support is an erasing of boundaries: Galison seems happy to have these specialties remain largely distinct, whereas the agile ideal is the concept of the generalizing specialist. Or at least that’s the agile ideal in the context of implementation; agile draws a particularly bright boundary between the business and implementation sides. (Though the lean tradition prefers to create an explicit bridge there in the person of the Chief Engineer.) Galison’s book is full of examples of fertile cross-pollination between disciplines, and even of individuals moving between disciplines (from meteorology to particle physics!), but the disciplines nonetheless retain their own individual character.
What should agile learn from this latter difference? I can think of two arguments in favor of breaking down such boundaries in the agile tradition: one is that it increases knowledge sharing (and the fertilization that results), and the other is that it increases resource flexibility. Galison certainly agrees with the former, but, as we’ve seen above, provides other mechanisms by which it can occur. He doesn’t, as far as I’m aware, address the latter; certainly something for me to think about in the future.
It’s an excellent book. I’ve only discussed the last chapter here, but I really enjoyed the more historical sections that preceded it. Great stories, great pictures, I found something new and interesting in every section.
sid meier’s alpha centauri
July 13th, 2009
The Vintage Game Club’s sixth game was Sid Meier’s Alpha Centauri. About which I don’t have much to say, but I’m in the habit of blogging here when I finish a game, so:
It’s a Civilization-style game. Some of my friends praised it quite a bit, but I’m not seeing that: it’s in the lineage of a series that I enjoy and respect quite a bit, but no more than that for me. I’d heard claims that its narrative set it apart from other games in the genre; there wasn’t enough narrative to make a difference to me. I’d also heard claims that the differences between factions set it apart from other games in the genre; I enjoyed not having to worry about unit types when playing as the Gaians, but nonetheless: not enough to make a difference to me.
Still, it’s a genre that I like, and it’s well executed once I got used to a user interface from a decade ago. In fact my basic problem with the game and the genre isn’t that I don’t like it: on the contrary, it’s a genre that I get far too addicted to, that I find myself staying up far too late playing. Though even that isn’t entirely due to my liking the gameplay so much as that the gameplay doesn’t have natural stopping points: there’s never (well, rarely) a feeling that you’ve accomplished something and that you want to take a break now to savor it, instead you always feel the pull of “click this, upgrade that, move the other”.
Fortunately, I rather enjoy that constant clicking, so I’m happy enough to keep playing such games indefinitely. And I really like the idea of building, both at a city and a nation level. (It’s probably just luck, but the world maps that I was given in my games were well suited to pleasant growth with natural chokepoints for battles.)
But, when I got bored with exceedingly-easy difficulty settings, I was happy enough to give the game a rest rather than continue on at harder difficulties. At least part of what’s going on there is hidden information that plays out over long time periods. I don’t particularly enjoy hidden information in general (e.g. my least favorite Advance Wars levels were those with Fog of War), but here it’s particularly bad in that, if you make a mistake in your production strategies, you generally don’t find out about it until an hour later and long after you can do anything about it. And even when you do find that you’ve made a mistake, it’s not clear exactly what you should have done differently.
So I was happy enough to stop playing after three weeks. But I made it through (and enjoyed) three playthroughs of the game during those three weeks; who knows, maybe if I had different constraints on my time, I’d still be playing it and be happily delving into the strategy.
explaining my choices
July 6th, 2009
I periodically encounter discussions of why people play games (most recently in A Life Well Wasted), and I’ve been getting more and more allergic to such talk. The main reason is that it almost always comes in the form of claims that “we play games to have fun” (with a strong implication that anybody who thinks otherwise must be deluded), a polemic that I disagree with rather strongly.
As I’ve been thinking about it more, though, I’ve realized that there’s more to my unease than a philosophical distaste: it turns out that I don’t have a very good answer myself to the question of why I play games! Do I play games for fun? For beauty? To learn something? For some other reason? It’s actually not at all clear to me.
And what makes this especially weird is that, even though I can’t explain why I play games, I am quite confident that I’m not playing games just out of inertia. Over the last few years, I’ve been getting more and more conscious in my choices of how I spend my time. And I’ve chosen over and over again to continue to make time to play games, even though I have enough time pressure that it would be very easy for me to stop doing so and fill up that time with other activities that I would also find very rewarding.
I’m not even doing this out of a sort of inertia once removed, e.g. because games are an entry (a few entries, actually) in my GTD projects list. GTD is a way of structuring my life to increase the chance that I’ll be able to do what I most want to do at any given moment, not something that I follow indefinitely on autopilot. Every week, I have to ask myself “is playing games really part of what I want to be doing?” And, so far, the answer has always come back “yes”. (With the occasional caveat.)
Part of the answer, I think, comes from my recent Christopher Alexander reading: he’s gotten me using the word “soul” in public, and asking myself how I feel at a fundamental level about various choices. With that in mind, it may be that the question of “why do I do X?” (for broad questions X) is becoming, to a larger and larger extent, irrelevant to me: perhaps I’m getting better at telling which broad choices feel right to me, and then using techniques like GTD to have me spend as much time as possible actually doing that.
But, though I’m sure there’s some truth to that, it’s not all of the answer. In particular, it’s also true that both of the influences I’ve mentioned here, GTD and Alexander, have analytical components that I’m not actively using. GTD has its horizons of focus (which I should consider taking more seriously at some point); Alexander has his characteristics of living structures. So it’s entirely possible that, if I were to apply similar techniques here, I’d be able to figure out better what makes those parts of my brain tick.
Indeed, it’s possible that I’m being somewhat disingenuous by writing this post—I have, in fact, been known to spend time thinking in public about various choices that I’m making. But I’m not being completely disingenuous: I really don’t have a great explanation for why I play games (or program, or read), but at the same time that lack of an explanation isn’t giving me the slightest pause that I might be spending my time in ways that aren’t good for me.
Who knows. I suppose the most likely explanation for my lack of worries in those areas is that I’m turning into a fundamentalist, or indeed have long since done so…
vgc game 7: majora’s mask
July 3rd, 2009
I’m pleased to say that the Vintage Game Club has chosen The Legend of Zelda: Majora’s Mask as its seventh game. The discussion will probably begin on Friday, July 10th; it’s a wonderful game, and one that I suspect has quite a lot to teach me; please come join us if you have any interest in playing the most notable black sheep in the Zelda series.
the perils of particle physics
July 1st, 2009
If you are considering building an experimental apparatus filled with liquid hydrogen, you might want to keep the following incident in mind:
Deep within the bubble chamber, the inner beryllium window had shattered along a microscopic imperfection in its surface. Splintering outward, the inner window fragments blasted open the outer beryllium window accompanied by the pressure wave of the expanding hydrogen. Within half a second, the laboratory floor was bathed with some 400 liters of turbulent, burning hydrogen. Ignited when the outer window failed, the fire burned wherever the hydrogen and air were mixed. Seconds later, a fierce explosion ripped through the laboratory, strong enough to blow the 31,000 square foot laboratory roof 10 feet into the air. As it crashed back down, roof material cascaded onto the floor and began to burn, raining down hot tar. Now other areas erupted in flames as the soft soldered joints melted in the tubes that linked large quantities of liquid petroleum gas, as well as other combustibles. (Galison, Image & Logic, pp. 356–357.)
Fortunately, it was shortly after 3am, so not many people were around, and only one person died. The most dramatic survival:
One graduate student had managed to crawl into a space between the bubble chamber electronics room and the south wall. Unable to escape further because of his injuries, he remained there until the fire seemed to be closing in. Radioing an ambulance to the east exit, the deputy fire chief, an engineer, a cryogenics expert, and some firemen hacked their way to him and brought him out on a stretcher. (p. 359)
And the end of one eyewitness report:
“I did not consider 80 PSI as extremely serious at that instant since all the peripheral systems are capable of easily handling such a pressure. At this point I turned to check the pressure in the Bubble Chamber to make sure that it was not rising excessively. I never did see the Bubble Chamber pressure gauge.” (p. 356)
a taxonomy of boundary objects
June 26th, 2009
The original paper on boundary objects gives a partial taxonomy of boundary objects; given my earlier thought experiment, I thought I’d see if I could find programming analogues to any parts of their classification.
Star and Griesemer’s first type of boundary objects are Repositories:
These are ordered ‘piles’ of objects which are indexed in a standardized fashion. Repositories are built to deal with problems of heterogeneity caused by differences in unit of analysis. An example of a repository is a library or museum. It has the advantage of modularity. People from different worlds can use or borrow from the ‘pile’ for their own purposes without having directly to negotiate differences in purpose.
At first, I thought this type was kind of banal, corresponding perhaps to collection objects in software, but now I think it’s more interesting than that. Reading their description more closely, I don’t get a collection object vibe: collections in the programs that I write usually contain a quite uniform group of objects, and those objects are used for one or two specific purposes; the above, however, emphasizes heterogeneity and differences in purpose.
That last sentence, in particular, reminds me of mashups; if you combine that with standardized indexing, I’m getting a very strong RESTful vibe from this. In a RESTful application, names are key but internal structure can vary from location to location, and outsiders can use standard tools to access the data that the application exposes and borrow it for their ends.
Next in their taxonomy is the Ideal Type:
This is an object such as a diagram, atlas or other description which in fact does not accurately describe the details of any one locality or thing. It is abstracted from all domains, and may be fairly vague. However, it is adaptable to a local site precisely because it is fairly vague; it serves as a means of communicating and cooperating symbolically—a ‘good enough’ road map for all parties. An example of an ideal type is the species. This is a concept which in fact described no specimen, which incorporated both concrete and theoretical data and which served as a means of communicating across both worlds. Ideal types arise with differences in degree of abstraction. They result in the deletion of local contingencies from the common object and have the advantage of adaptability.
My first reaction here was to try to make an analogy with abstract types; indeed, they use the word “abstracted” in their second sentence, and I can easily see their species example being used as an example in an OO textbook. The only thing that gives me some amount of pause when proposing this analogy is their use of it as a talisman communication tool between multiple parties, many of which may turn out to want to know more about the details of the objects in question.
In contrast, in my programming experience, if I have a concrete subclass of an abstract class, it’s more typical for almost all users to only care about the abstraction, while perhaps only one user cares about the concrete class. Though, rereading what they say, maybe their species example suggests that the correct analogy to the Ideal Type is to (any sort of) class, with the non-ideal objects being instances, rather than subclasses? Either way, though, I get the feeling that I’m missing something in the way that I was missing something with my earlier analogy between a Repository and a collection: is there another analogy waiting to be found here that’s a bit grubbier in the way that the RESTful example is?
Third up are Coincident Boundaries:
These are common objects which have the same boundaries but different internal contents. They arise in the presence of different means of aggregating data and when work is distributed over a large-scale geographic area. The result is that work in different sites and with different perspectives can be conducted autonomously while cooperating parties share a common referent. The advantage is the resolution of different goals. An example of coincident boundaries is the creation of the state of California itself as a boundary object for workers at the museum. The maps of California created by the amateur collectors and the conservationists resembled traditional roadmaps familiar to us all, and emphasized campsites, trails and places to collect. The maps created by the professional biologists, however, shared the same outline of the state (with the same geo-political boundaries), but were filled in with a highly abstract, ecologically-based series of shaded areas representing ‘life zones’, an ecological concept.
This one was hard for me to grapple with. (And I’m not the only one; that link, incidentally, gives references to extensions of this taxonomy.) Even in the physical world, it’s a bit hard for me to identify examples of this: is the concept really restricted to large-scale geographic areas? That seems a bit limiting.
The paper in question discusses the Museum of Vertebrate Zoology at the University of California, Berkeley; is it a boundary object? I tend to think so: quoting from the definition on page 393,
This is an analytic concept of those scientific objects which both inhabit several intersecting social worlds (see the list of examples in the previous section) and satisfy the informational requirements of each of them. Boundary objects are objects which are both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity across sites. They are weakly structured in common use, and become strongly structured in individual-site use. These objects may be abstract or concrete. They have different meanings in different social worlds but their structure is common enough to more than one world to make them recognizable, a means of translation.
And certainly the MVZ is robust enough to maintain a common identity, but it means something different to a postdoc working there, to somebody who has spent her career there, to a visiting researcher, to a university administrator, to an outside funder, to a janitor. (Indeed, much of the paper is devoted to showing such differences in meanings.) Given that, I would treat the MVZ as a Coincident Boundary: though not spread over a large-scale geographic area, it’s still a place which means different but related things to different people.
Which, to be honest, doesn’t help me directly with finding programming analogies; maybe a function body in the input to a compiler that means different things to the lexer, the parser, the optimizers, the code generator, the debug info generator? Actually, I think maybe the more important analogy is a bit more conceptual and not internal to programming: maybe we could think of a domain object as an example of a Coincident Boundary that means one thing to a programmer, another thing to a database administrator, a third thing to a system architect, a fourth thing to an XP Customer, a fifth thing to a marketer, a sixth thing to an end user. I’m not completely sold on that, but I do think that domain objects are boundary objects of some sort, and they’re a better fit to Coincident Boundaries than anything else in Star and Griesemer’s taxonomy.
The last entry in their taxonomy is Standardized Forms:
These are boundary objects devised as methods of common communication across dispersed work groups. Because the natural history work took place at highly distributed sites by a number of different people, standardized methods were essential, as discussed above. In the case of the amateur collectors, they were provided with a form to fill out when they obtained an animal, standardized in the information it collected. The results of this type of boundary object are standardized indexes and what Latour would call ‘immutable mobiles’ (objects which can be transported over a long distance and convey unchanging information). The advantages of such objects are that local uncertainties (for instance, in the collecting of animal species) are deleted.
Class interfaces (whether concrete or abstract) are examples here, as are generalizations such as duck types or the sorts of dependencies that C++ templates impose on their parameter types. For example, templates don’t care about the details of an iterator as long as it exposes its increment operator under the name ++, its equality operator under the name ==, and so forth.
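As a small, hypothetical illustration of that last point, the count_matching template below compiles against anything that provides !=, ++, and * under those names, regardless of what the underlying container is; the function name and the example containers are mine, invented for this post rather than taken from any real library beyond the standard headers.

    #include <cstddef>
    #include <iostream>
    #include <list>
    #include <vector>

    // This template imposes only a standardized form on Iterator: it must
    // support !=, ++, and *. It never asks what the iterator actually is.
    template <typename Iterator, typename T>
    std::size_t count_matching(Iterator begin, Iterator end, const T& target) {
        std::size_t count = 0;
        for (Iterator it = begin; it != end; ++it) {
            if (*it == target) {
                ++count;
            }
        }
        return count;
    }

    int main() {
        std::vector<int> v = {1, 2, 2, 3};
        std::list<int> l = {2, 2, 2};

        // Two very different container types, one standardized interface.
        std::cout << count_matching(v.begin(), v.end(), 2) << "\n";  // prints 2
        std::cout << count_matching(l.begin(), l.end(), 2) << "\n";  // prints 3
        return 0;
    }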
Network protocols are another example: indeed, what is better than TCP/IP at ensuring that data “can be transported over a long distance and convey unchanging information”? Stick it on the wire in a Standardized Form, and it will come back out the other end. We can use our RESTful example from above in this context as well: if you want disparate clients to all be able to talk to each other, it helps a lot if everybody speaks in terms of GET, POST, PUT, and DELETE.
And, as we did with the last example of the taxonomy, we can step away from the code a bit. An acceptance test is a Standardized Form: if the Customer and the engineers want to agree on what it means to complete that task, it sure helps if they can point to a Standardized Form for specifying that completion, and an acceptance test that both sides can read (and run!) is an excellent form for that agreement to take, for deleting uncertainties.
Interesting stuff. I’m curious what other classes of boundary objects people have come up with, and I should probably spend more time thinking about examples outside of the strict domain of code. And I really like the lens it gives on the messiness, the grunge, the lack of sterility of the RESTful approach: if you pin down enough so that people can talk to each other while leaving enough of the details undetermined so that different groups can use the entity in question for significantly different ends, unexpected synergies can flourish.
boundary objects and solid principles
June 23rd, 2009
The following bit from Brian Marick’s summary of boundary objects caught my eye:
Ivermectin is a popular drug for deworming animals. Onchocerciasis (river blindness) is a chronic illness that’s a particular burden in sub-Saharan Africa. Since river blindness is caused by a worm susceptible to ivermectin, the manufacturer (Merck) desired to donate ivermectin to fight the disease. That presented some problems. For example, it would not be in Merck’s interest if the bulk recipients responsible for redistributing ivermectin to people instead resold it into the lucrative veterinary market. On the other hand, it would also not be in Merck’s interest to tell the recipients (including national governments that are markets for other Merck drugs) that they are not competent or trustworthy enough to receive ivermectin. Merck needed organizational distance.
The solution was for Merck to donate the drug to a non-profit non-governmental organization. An independent expert committee would make the decision about which applicants (both governments and non-governmental organizations) would then receive the drug. This committee is a boundary object. To Merck, it provides distance: Merck donates the drug, reaps the benefits in good will and tax deductions, but is insulated from political repercussions. To the bulk recipients, the committee is the dispassionate judge of applications, end-point of an application process, and advisor during implementation.
This situation and its solution immediately reminded me of the notion that “All problems in computer science can be solved by another level of indirection.” But it’s not just that broad aphorism: the example reminds me of Bob Martin’s SOLID principles in particular. (The first five principles listed here; see also this Hanselminutes show on the topic.) I don’t think all his principles apply to the drug example, but more than one does.
The most obvious example is the Single Responsibility Principle. Merck is a big company that does lots of things; to handle this problem, they created a separate organization that has only one job, to deal with dispensing ivermectin.
I’ve squinted at the Open Closed Principle a few times, and I can’t see how it applies to this situation – the types of modification that the OCP is talking about don’t seem so relevant here.
The Liskov Substitution Principle also doesn’t seem particularly relevant. It’s about base classes and derived classes; the only example of that that I see here is that you could think of the abstract concept of an applicant as a base class, with concrete applicants as derived classes, but that’s a pretty weak LSP situation: the base class is so abstract, the derived classes are so concrete.
Skipping ahead to D, the base and derived classes are a much better fit for the Dependency Inversion Principle. In fact, the second paragraph quoted above is all about that principle: rather than having the concrete company Merck deal with concrete recipients of donations of the drug, we introduce multiple abstractions. From Merck’s point of view, the NGO is something of an abstraction: as long as Merck knows enough about the NGO to trust that they’ll do a reasonable job dispensing the drug, it doesn’t have to worry about the details of the process. Similarly, from the applicants’ point of view, the NGO is a relatively abstract organization compared to Merck. (Or is it? Am I conflating this with the Single Responsibility Principle? Certainly there are fewer opportunities for linkages between the applicants and the NGO than between the applicants and Merck as a whole.) From the NGO’s point of view, the very notion of “applicant” is an abstraction placed on real people, real organizations.
Going back to I, the Interface Segregation Principle also seems relevant, though admittedly my justification for it here seems very similar to my justification to the Single Responsibility Principle: the NGO is exactly the interface to Merck for clients who want free ivermectin.
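For what it’s worth, here’s how that Dependency Inversion reading might look if you squint at it as C++; the class and method names (DonationProgram, IndependentCommittee, and so on) are invented for this post, purely to illustrate the shape of the dependencies.

    #include <iostream>
    #include <string>

    // The abstraction that both sides depend on: a program that accepts donated
    // drugs and decides who receives them. Donor and applicants never have to
    // depend on each other directly.
    class DonationProgram {
    public:
        virtual ~DonationProgram() = default;
        virtual void acceptDonation(const std::string& drug, int doses) = 0;
        virtual bool approveApplication(const std::string& applicant) = 0;
    };

    // The concrete NGO / expert committee: the one place that knows the details
    // of evaluating applications and dispensing the drug.
    class IndependentCommittee : public DonationProgram {
    public:
        void acceptDonation(const std::string& drug, int doses) override {
            std::cout << "received " << doses << " doses of " << drug << "\n";
        }
        bool approveApplication(const std::string& applicant) override {
            // The real evaluation criteria live here, hidden from the donor.
            return !applicant.empty();
        }
    };

    // The donor (Merck, in the example) only ever sees the abstraction.
    void donate(DonationProgram& program) {
        program.acceptDonation("ivermectin", 1000000);
    }

    int main() {
        IndependentCommittee committee;
        donate(committee);
        std::cout << committee.approveApplication("national health ministry") << "\n";
        return 0;
    }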
Does this analogy hold up in other examples of boundary objects? Can we relate the Open Closed Principle and the Liskov Substitution Principle to examples outside of programming? Can we run our analogy in the other direction, finding properties of boundary objects that suggest principles of good programming?
random links: june 21, 2009
June 21st, 2009
- Some evidence for anybody curious how well being good at Rock Band drums transfers to real drums.
- The neuroscience of illusion; I’ll embed one of the videos so you can see the kind of thing they’re discussing. (Via Kelley Eskridge.)
- A pleasant network logic puzzle game. (Via User Friendly, which makes it essentially impossible to cite them correctly as a link referrer.)
- Miranda asked me to buy her a copy of Bertolozzi’s Bridge Music about five seconds after the podcast started, and that was before she knew that the sounds the piece uses were actually generated by banging on the Mid-Hudson Bridge.
- Little Wheel, a short adventure game. Pleasant enough, and a neat world, but there are also some interesting design questions. In particular, should point-and-click adventure games always show you the current clickable objects in a given scene? (Via MTV Multiplayer.)
- Speaking of adventure games, I will link to this Tim Schafer interview only because of the following quote:
My daughter, I think, is going to be very good at playing adventure games, even though she is only a year old. Because she’ll grab a toy and she’ll bang it on all of her other toys in the room. She’s trying every item in the room with every other item in the room to see if it does something. I was like, “You are a natural-born adventure game player.” She doesn’t steal objects and hide them somewhere on her body, though. In some secret orifice.
(Via @elenielstorm.)
- And speaking of children trying things with other things, interesting testing in a sorting algorithm. (Via @Vaguery.)
- More on the theme of kids: kids and Greek rhetoric (via @Adjuster) and Mad Scientist’s Alphabet Blocks (via @garb.)
- Just because I like Flanders and Swann, a lego take on The Gasman Cometh. (Via @kateri_t.)
- How culture and silos harm design. (Via @testobsessed.)
- Remarkable cloud pictures. (Via @marick.)
- Two TDD tweets: @KentBeck says:
francis bacon understood tdd: “truth emerges more readily from error than confusion”
and @mfeathers says:
TDD is a way of teaching software how to live at the scale of human understanding.
- Will universities go the way of newspapers? (Via @marick.)
- (Via @SimonParkin.)
- I don’t normally link to ads / corporate videos, but this one is delightful. Who is this Takashi Murakami guy? (Via @Iroqu0isP1iskin.)
margaret robertson on (no) story
June 14th, 2009
One of the most interesting of the talks I attended at this year’s GDC was Margaret Robertson’s talk Stop Wasting My Time and Your Money: Why Your Game Doesn’t Need a Story to be a Hit. Unfortunately, I didn’t take any notes while listening to it, so my summary at the time was exceedingly sparse. But it keeps on coming into my head at odd moments, so I figure I’ll give another shot at providing notes, almost three months later.
Fortunately, she’s posted the slides for her talk. So go open them up in another window, and I’ll see if I, by using them as a jog to my memory, can add some modest value to them.
She begins by talking about why traditional big stories in games are problematic. For one thing, they’re extremely expensive. For another thing, the notion that people follow the story through is false: most people playing through games don’t make it to the end. For example, Half-Life 2: Episode 2 is a short game (averaging less than seven hours to complete), yet less than half of its players made it through to the end. (At least I think so – why do so many of the graphs on that page have the 0 mark floating above the bottom?) Given this, perhaps there are ways to get more bang for your development buck than traditional big stories?
Which isn’t to say that she doesn’t like stories. In fact, she lists some benefits that stories provide: Motivation, Entertainment, Communication, Metaphor. Though, she says, games don’t need to get their emotion from stories. I wish I could remember more of what was going on with that slide and the five that follow it; I suspect that the Paul Klee painting that follows is part of that sequence, showing something that gives rise to emotions without any story to be found. I could be wrong, though.
Anyways, back to the good side of stories: fortunately, stories can deliver those benefits while being tiny, instead of horribly expensive! She gives some examples from outside games: an art installation that consists only of a crack (and whose story lingers, is indeed perhaps strengthened, after it is filled in); and the Hemingway six-word short story “For sale: baby shoes, never worn”.
And then she goes into game examples of small stories. Which is where she totally won me over, by bringing up Majora’s Mask: the main story in that game is all well and good, but what still haunts me in that game is the same story that haunts Margaret Robertson, namely the love story of Kafei and Anju. (About which, two side notes: 1) Margaret Robertson returned to the game in an Offworld column. 2) The next VGC game will be a Zelda; discussion will start Friday-ish, so if you want company while giving Majora’s Mask another look, or even a first look, come and convince other people to vote for it!)
She then talks about where you can tell a story: aside from plain old exposition, you can do so in the set-up, externally, subtextually, in the environment, and in gameplay. And you can tell it through the HUD, art, animation, sound, text, voice-over, and/or video. (Giving examples of each, many of which are much more subtle than her Majora’s Mask example.) That last list is, I think, supposed to be ordered from least obviously story-related to most; she also has a slide with an arrow going the other direction, and I wish I could remember what that was about; maybe they’re most effective in the opposite order, or something?
Then, a gameplay challenge: try to, as economically as possible:
- Communicate that time has passed since you were last in this world.
- Communicate that you are now famous.
- Communicate that you are the good guys.
- Communicate that your army is low on resources.
- Communicate that your army is dogged and determined.
- Communicate gameplay hints.
Her answer: you can do it with one letter (or one texture), namely the orange lambda symbols from Half-Life 2. Which is a big win over elaborate, expensive cut scenes.
And finally, a reminder that we’re talking about games, not films. What people care about in games, in descending order:
- Where I am.
- What I can do.
- What I look like.
- Who I am.
Leaving us with the recommendation: “Can you imagine it in a film? Dump it. Easier on [game developers], better for the player.”
So: that’s her talk. Which was awesome, but that alone isn’t enough for me to want to write up notes on it after so much time has passed. The reason why I’m doing so is that I’ve finished two games since GDC: Chrono Trigger and Flower. The former has a big, traditional video game story; I didn’t care about that story at all. What I did care about was a child in one of the houses saying that she hated her father; I was in pain when I read that, I was hugely relieved when that finally got resolved hours later, and it’s a perfect example of one of her small-scale stories. And the latter game doesn’t have any sort of traditional story at all: far from being a problem, the game got me thinking as much as any other game has in recent memory, among other things trying to figure out what sort of story I should read into it. And that’s just the games I’ve finished—the game I’ve played most this year is Rock Band 2, which has a very bare-bones story indeed. (But, I will add, a very effective one: in particular, for a certain type of player, it provides an extremely important form of motivation and structuring for your play.)
Which isn’t to say that I’m against stories, either: for example, I quite enjoyed Mass Effect, and am very much looking forward to its sequel. Though even that example gives me pause: I remember enjoying its story at the time, but its story hasn’t particularly stuck with me in retrospect.
In all seriousness, it’s possible that this talk will be a turning point in what I see in games. I simply don’t know if I would have seen Chrono Trigger and Flower in the same way had I not attended this talk; I also don’t know what other storylets I’ve failed to appreciate in other games I’ve played over the years. (Which doesn’t mean that the stories didn’t have an effect, just that I wasn’t conscious of the construction of their effect.) Maybe those two games are exceptional (actually, I’m quite sure both are, though not only for this reason), but maybe there’s a wealth of stories waiting for me if I just open my eyes a bit.
change of focus
June 2nd, 2009
Over the last few weeks, I’ve been finding enough unusual projects imposing on my time that I think I’m going to have to shuffle my priorities, albeit temporarily. I’ve been wanting to do more programming at home than normal recently: aside from improving the memory project, I want to spend a bit of time getting back into functional programming. And then there’s conference preparation work on top of that.
Being a good GTD devotee (or a good lean/agile devotee), I know that means something has to go. Fortunately, I’m actually pretty well on top of things right now—in particular, my Next Action list is about as short as I can ever remember its being—so I shouldn’t have to prune too much; but I have to prune something. And I’m certainly not going to take a break from learning Japanese—in fact, one of the unintended effects of the memory project is that there are now pretty serious consequences if I take even a couple of days off from my study. (One could make a sensible case that I am being a total idiot in subscribing to ChineseClass101 right now, however. Though I certainly don’t intend to treat that as seriously as I’m treating learning Japanese.)
So I think my only choice is to cut down on my video game playing for the time being. Don’t get me wrong: I’m not going to stop completely, you’ll still find me every Thursday evening at the VGHVI play nights, and I’ll keep up with VGC activities. I imagine I’ll do some playing and blogging outside of that, too, but for those of you who read this blog for game-related content, don’t be surprised if there are relatively slim pickings here for a while.
But don’t unsubscribe, either! In particular, my conference activities won’t continue forever, so by the fall I should be back to normal. Heck, I might even be back to normal after Agile 2009—I certainly want to find time in early September to play a certain game.
come play games with us!
May 30th, 2009
I normally stay away from online video game play, both because of a lack of time and because of the bad things I’ve heard about strangers’ behavior on Xbox Live. For the last six months, though, I’ve been meeting up every Thursday evening to play games with people from the Video Games and Human Values Initiative, and I’ve been having a great time!
I encourage any of my blog readers to join us. We meet every Thursday at 6:30 p.m. Pacific / 9:30 p.m. Eastern time; send a friend request to “a VGHVI” to join in. We try to balance playing a mixture of games (and game styles) with returning to favorites fairly frequently; you can see a list of games we’ve played at the bottom of this VGHVI wiki page, but we’ve played Burnout Paradise and Left 4 Dead several times; Castle Crashers, Rock Band 2, and Carcassonne a couple of times each (with another round of Rock Band 2 on tap next); and a couple of other games once each. We’re open to further suggestions as well.
So far we’ve largely been doing this on Xbox Live, though apparently a Lord of the Rings Online group just started up as well. (I think they’re on Sundays, but I’m not sure.) Please join us if you’re looking for fun games and pleasant company!
update on learning japanese and memorization
May 29th, 2009
It’s been ages since I blogged about learning Japanese, so I figured I’d give y’all an update. I finished the textbook I was using last November, which raised the question of what to do next. I have some manga around and even a couple of collections of essays/stories, but I wasn’t sure I’d be up for them just yet. So, on a friend’s suggestion, I subscribed to a series of children’s books! The friend in question is an American with a Japanese wife, and they subscribed to the books for their kids; based on his description, they sounded delightful, and I’m certainly not too proud to read books targeted at two-year-olds.
Actually, I subscribed to several of the company’s series: I was pretty sure that the lowest level they offered was too basic for me, but the next five levels (going from 2-year-old through 6-year-old) all seemed plausible. So I subscribed to all five, planning to unsubscribe from the lower levels as I got more confident. In fact, I subscribed to them several months before I finished the textbook, so I had a backlog built up before I started reading any of them.
So I started working through my backlog of the 2-to-4-year-old fiction level, こどものとも年少版. (Which means something like “child’s friend early years edition”?) It was surprisingly hard, in some ways harder for me than later levels: it uses an awful lot of onomatopoeia (which Japanese uses much more than English in general), and I’m fairly sure that some of the speech forms are somewhat nonstandard parents-talking-to-kids forms rather than what I’d learned in grammar books. Fortunately, the books were totally charming, and while I wouldn’t want all books to be as repetitive as those ones are (a lot of doing the same thing on different pages with different numbers or colors or animals or whatever), it really helped me to have the same sentence structure and half of the same words to cling to while figuring out the rest of what’s on the page. And I’ve gotten a lot better at reading books in that series over the intervening months; the onomatopoeia words are even starting to stick.
Once I made it through my backlog of books at that level, I started on the next level: ちいさなかがくのとも (little science’s friend?), nonfiction for 3-5 year olds. This was a great level for me: the sentences didn’t have the word usage quirks that the previous level had, and they were a bit more interesting while still not requiring me to look up an overwhelming number of words.
About a month ago, I made it through my backlog of those (I’d been reading one every weekend), and moved up to the next level. It’s called こどものとも年中向き (child’s friend targeted at intermediate years?), and is fiction for 4-5 year olds. And I’m enjoying the transition: the books are a bit longer than previous volumes (28 pages instead of 24, with more words to a page), but my practice from previous levels is paying off, as is my memorization practice, so they’re not taking too long. I’ve only read three books from that level so far, but they’re really quite varied: one was a regular story that confused me a lot until I realized that some of the word endings were in a regional dialect; one consisted of scenes from a train station that might have fit better in the science series; and one was a counting/animal story that, honestly, probably would have fit better at an earlier level.
I’ve subscribed to but not started reading two more levels after that (one nonfiction, one fiction, both going through age 6); for now, I’m staying subscribed to the earlier levels, but I imagine at some point I’ll unsubscribe from those and add a subscription to something still more advanced. Also, for what it’s worth, all of the levels I’m subscribed to are kana-only, so my kanji practice isn’t paying off here yet. Though it’s paying off in other areas: for example, it’s kind of weird looking over at the spines of my Japanese go books and realizing that I actually recognize most of the characters I see there, even the non-go-specific ones.
I’m still listening to JapanesePod101, of course (incidentally, they just added a Chinese sister site, if you’re interested in learning Mandarin), and I’m spending a lot of time (almost certainly an unproductive proportion of it) memorizing vocabulary in general and Kanji in particular. In fact, I basically haven’t skipped a day of using my memory program since it went live almost 10 months ago. (I usually use it during my lunch break at work.)
Which has been an interesting experience: in particular, at first, I ignored some of Wozniak’s suggestions, and I’ve learned that I was wrong to do so. To be clear, I don’t claim to be following any of his algorithms at all—I’m sticking with the algorithm I outlined here—but there are recommendations he makes that would apply to my algorithm that I ignored. In particular, he suggests a floor of 1.3 for the exponent; initially, I figured I’d put in a floor of 1.0 instead. But, after a few months, that turned out not to work at all: it was taking more and more time each day to review stuff because, once an item got tagged as “most difficult” (not too hard with kanji), I’d review it every single day for a month, and that clogged up fast. So I bumped the floor up to 1.2, and things got better; I then figured I should stop reinventing the wheel and bumped it up to 1.3, and I’m glad I did.
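For concreteness, here’s a minimal sketch of the sort of interval update I mean, with the floor applied at the end. The class, numbers, and field names are illustrative only; this isn’t the memory project’s actual code, and it isn’t one of Wozniak’s algorithms either:

    class MemoryItem
      EXPONENT_FLOOR = 1.3  # the floor I eventually settled on

      def initialize
        @interval_days = 1.0
        @exponent = 2.0
      end

      attr_reader :interval_days, :exponent

      def review(correct)
        if correct
          # A successful review pushes the next one further out.
          @interval_days *= @exponent
        else
          # A miss starts the item over and treats it as harder...
          @interval_days = 1.0
          @exponent -= 0.2
        end
        # ...but never harder than the floor, or daily reviews pile up.
        @exponent = EXPONENT_FLOOR if @exponent < EXPONENT_FLOOR
      end
    end

Under this sketch, with a floor of 1.0, an item that gets missed a few times ends up due every single day, which is exactly the sort of clogging I ran into.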
I’m also doing a better job now of following his suggestion of breaking up items into small chunks to memorize. Before, I would list all of the readings of a Kanji as one item: e.g. for the question 問 I would list the answer “もん、と(い) question, problem; と(う) matter, care about”. But now I break that up into three pairs: Q: 問, A: もん question, problem; Q: 問い, A: とい question, problem; Q: 問う, A: とう matter, care about. That has several advantages: individual items are smaller (as Wozniak recommends), I naturally focus more on the readings that are harder for me to remember, and I’m testing myself on something that actually matters when reading instead of an abstract skill. (I.e. I will encounter 問う when reading, but I will never be in a situation where it matters if I can list all the endings that you can stick after the Kanji 問.) In particular, the previous method wasn’t good at training me to tell whether, say, 上る is read のぼる or あがる. (It’s the former; the latter is written 上がる.)
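In data-structure terms, the change is just this (the hashes are purely illustrative; the real items live in the memory program’s database):

    # One big item, the old way:
    old_item = { question: "問",
                 answer: "もん、と(い) question, problem; と(う) matter, care about" }

    # Three small items, the new way:
    new_items = [
      { question: "問",   answer: "もん question, problem" },
      { question: "問い", answer: "とい question, problem" },
      { question: "問う", answer: "とう matter, care about" },
    ]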
Also, I made another Japanese-specific change while breaking up the kanji into multiple questions: I started writing the On readings (derived from Chinese) in katakana and the Kun (native Japanese) readings in hiragana. (So the answer to 問 is really モン.) It’s usually pretty obvious whether a reading is On or Kun, so that’s not important from a memorization point of view, but it meant that every day I was exposed to hundreds of katakana characters, so my katakana recognition speed has increased dramatically. (Incidentally, if any of you are learning Japanese, a recommendation: learn how to use your keyboard input method. Under Linux, you can convert a word to katakana by hitting F7; under OS X, by hitting control-k.)
Another surprise: I’d sort of assumed that some sort of geometric series magic would mean that I would be able to keep adding items to the database without increasing the amount of time I need to spend reviewing each day. Which, if you think about it for a minute, isn’t the case at all: e.g. if all items are at exponent 2 and I never make a mistake, then every day I need to review all the items I added yesterday, all the items added 2 days ago, all the items added 4 days ago, all the items added 8 days ago, all the items added 16 days ago, etc., and the growth here is unbounded. (Or rather, is bounded only by my lifespan!) I don’t think this is a big problem, but it might be; it does suggest that if I have too many items with small exponents then I’m in trouble. I hope that that problem will naturally ease: there’s a limit to the number of Kanji I have to memorize (I’m almost halfway through the official common usage Kanji list), and as I start reading more, I’ll get exposed to vocabulary more frequently in other contexts, which should manifest itself by the vocabulary seeming easier from the program’s point of view. We’ll see how it goes; if it gets too bad, I’ll cut down on the forced memorization and spend more of my time just reading and not worrying much about words I don’t know.
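Here’s the back-of-the-envelope version of that worry as a throwaway script; the numbers are made up (assume a steady 20 new items per day, everything at exponent 2, and no mistakes, so each item gets reviewed 1, 2, 4, 8, and so on days after it was added):

    ITEMS_PER_DAY = 20  # made-up rate of new items

    # Number of reviews due on a given day: an item added `age` days ago
    # is due iff its age is a power of two (1, 2, 4, 8, ...).
    def reviews_on(day)
      (0...day).sum do |added|
        age = day - added
        (age & (age - 1)).zero? ? ITEMS_PER_DAY : 0
      end
    end

    [30, 100, 365, 1000].each do |day|
      puts "day #{day}: #{reviews_on(day)} reviews due"
    end

The daily load grows roughly like 20 times log2(day): unbounded, as I said, but slowly; the real danger is items stuck at small exponents, which come due far more often than their share.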
I had plans to quickly spiff up this application and make it multiuser, but that didn’t happen: basically, it became usable shockingly quickly, and I really didn’t have much of an impetus to improve it past that stage. It’s amazing what I’ve managed to leave out: for example, I assumed that I would have to implement search functionality early on. But part of the basic Rails CRUD functionality is a URL that lists all the items, and combining that with browser search still works acceptably even though I’ve got over 3000 memory items listed. Or I assumed that I would have to secure it (and probably add multiuser functionality as a natural part of that) to get it usable while at work or travelling, but ssh tunnelling to an unsecured deployment was working fine for me until I got my new iPod and wanted to be able to use the program from the iPod’s web browser.
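To be concrete about why browser search is good enough: the scaffolded index action just renders every item on one page, and the browser’s find-in-page does the rest. A sketch of the relevant bits, with placeholder names (the project’s real model isn’t necessarily called Item):

    # config/routes.rb (Rails 2-era routing):
    #   map.resources :items
    class ItemsController < ApplicationController
      def index
        # Roughly 3000 rows render fine on a single page; the browser's
        # find-in-page (Ctrl-F / Cmd-F) serves as the search UI.
        @items = Item.all
      end
    end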
That’s changing now: aside from the iPod issue, I’ve recently gotten a bit frustrated with some UI elements, Miranda has shown some interest in using the program, and I just finished reading the paper version of the third edition of the Rails book. So now I’m pretty excited to start up my tinkering again! And in fact I started that last weekend (I continue to be impressed at how easy it is to write functional tests in Rails, incidentally), and I plan to continue on future weekends until the program looks and works a lot better. So: Jim and Praveen, I apologize for the delays; I’ll have a multiuser version available soon if you’re still interested! And anybody else who is interested, let me know; I’ll announce it here when it’s ready for use by people other than myself.
converted blog to utf-8
May 26th, 2009
I just tried to write a blog post containing some Unicode characters (I was blogging about 日本語 learning), and found that WordPress helpfully converted those characters to question marks. After digging around, I ran into this web page describing the problem (see also this thread): basically, if you created your database in a pre-2.1.3 WP, then your database has remained in Latin-1 all these years. Oops.
I tried a few workarounds, including blowing away the database and restoring from the WP export format (with a detour through editing a php.ini file to allow uploads larger than 2MB); I still kind of think that might be the right approach, since I’m now worried about what further problems might be lurking, but the restore seemed like it was taking too long so I gave up and killed it. Ultimately, what I did was take my mysqldump backup, replace the occurrences of CHARSET=latin1 with CHARSET=utf8, and reimport it. This probably doesn’t work in general – see this post for some subtleties – but I’m hoping it worked for me. (In particular, I doubt there are too many places where I’d used non-ASCII Latin-1 characters.)
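For what it’s worth, the swap itself amounts to something like the following sketch; the filenames are placeholders, and (as the linked post warns) blindly rewriting the dump is only safe if the latin1 columns never held non-ASCII bytes you care about:

    # Rewrite the table definitions in the dump, then reimport it.
    # Read and write raw bytes so stray Latin-1 characters don't trip
    # up Ruby's string encoding handling.
    dump = File.binread("wordpress-backup.sql")
    File.binwrite("wordpress-backup-utf8.sql",
                  dump.gsub("CHARSET=latin1", "CHARSET=utf8"))
    # Then something like:  mysql -u USER -p DATABASE < wordpress-backup-utf8.sql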
I think things are working fine now, but please let me know if you notice anything weird…
random links: may 26, 2009
May 26th, 2009
- Malcolm Gladwell on spaghetti sauce: the power of choices, of market segmentation.
- Two on folded paper: pictures by Simon Schubert (via @KathySierra) and a TED talk by Robert Lang on the origami that modern math and computers allow us to produce.
- An abandoned island city. (Via @japanesepod101.) Or, if you want a whole blog about “abandoned man-made creations”, try Artificial Owl.
- Whereas if you’re in the mood for artificial cities, I recommend Shamus Young’s Procedural City.
- In Defense of Eye Candy. (via @TheOtherAlistai.)
- Raganwald on optimism.
- AR⊗TA: Artisanal Retro-Futurism crossed with Team-Scale Anarcho-Syndicalism. (I’ve posted my sticker on the wall next to my office!)
- A nice outline of how to use Kanban in software development.
- Optical illusions: a whole gaggle and a single particularly stunning (and possibly baseball-related) example.
- Planetary Astronomer Mike explains gaps in asteroid belts.
- A nice set of pictures of deep-sea creatures.
- Meditations on the Garbage Bin.
- Brian Marick and I aren’t the only people in my blog/twitter stream thinking about Actor-Network Theory.
- The current short art game making the rounds: Today I Die, by Daniel Benmergui. (If you, like me, have trouble getting anywhere at the start, try keeping one of the jellyfish alive for a while.)
- A beautiful bit of interactive aleatory music. (Via Dan Bruno.)
- Charles Jacobs on management and brain science. I thought the bit on the ineffectiveness of traditional feedback mechanisms was particularly interesting.
- I’ve actually never played Starcraft, but this video made it seem about as interesting to watch as most sports I see on TV. (Via Lungfishopolis.)
- Tokyo stereographic projections.
- A Wolf Loves Pork. (Via Pink Tentacle.)
- Four Short Films; I couldn’t pick one to highlight, so you’ll have to click through to watch them.
routinization, inscription, and facts
May 25th, 2009
I can’t say I’ve internalized (routinized? inscribed?) Latour’s Laboratory Life yet, but in the meantime I present you with three quotes on routinization, inscription, and facts:
To counter these catastrophic possibilities, efforts are made to routinise component actions either through technicians’ training or by automation. Once a string of operations has been routinised, one can look at the figures obtained and quietly forget that immunology, atomic physics, statistics, and electronics actually made this figure possible. Once the data sheet has been taken to the office for discussion, one can forget the several weeks of work by technicians and the hundreds of dollars which have gone into its production. After the paper which incorporates these figures has been written, and the main result of the paper has been embodied in some new inscription device, it is easy to forget that the construction of the paper depended on material factors. The bench space will be forgotten, and the existence of laboratories will fade from consideration. Instead, “ideas,” “theories,” and “reasons” will take their place. Inscription devices thus appear to be valued on the basis of the extent to which they facilitate a swift transition from craft work to ideas. The material setting both makes possible the phenomena and is required to be easily forgotten. Without the material environment of the laboratory none of the objects could be said to exist, and yet the material environment very rarely receives mention. It is this paradox, which is an essential feature of science, that we shall now consider in more detail. (p. 69)
The production of a paper depends critically on various processes of writing and reading which can be summarised as literary inscription. The function of literary inscription is the successful persuasion of readers, but the readers are only fully convinced when all sources of persuasion seem to have disappeared. In other words, the various operations of writing and reading which sustain an argument are seen by participants to be largely irrelevant to “facts,” which emerge solely by virtue of these same operations. There is, then, an essential congruence between a “fact” and the successful operation of various processes of literary inscription. A text or statement can thus be read as “containing” or “being about a fact” when readers are sufficiently convinced that there is no debate about it and the processes of literary inscription are forgotten. Conversely, one way of undercutting the “facticity” of a statement is by drawing attention to the (mere) processes of literary inscription which make the fact possible. (p. 76)
A fact only becomes such when it loses all temporal qualifications and becomes incorporated into a large body of knowledge drawn by others. Consequently, there is an essential difficulty associated with writing the history of a fact: it has, by definition, lost all historical reference. (p. 106)
Can we profit from focusing on objects/processes that “facilitate a swift transition from craft work to ideas”? I spent a few pleasant hours this afternoon doing some Rails programming; that framework shines because of the small amount of craft work necessary to see a manifestation of your ideas. Does a software framework count as an “inscription device”? Does a programming language? Does a compiler, an interpreter? If not, is there some generalization of that concept that we can use here?
Agile processes value a swift transition between the programmer’s craft work and the Customer’s ideas. (A transition in both directions, I should add.) What are the inscription devices here? Ironically, one of the key mechanisms that agile uses to speed this transition is to remove certain inscription devices, or at least inscriptions, in favor of people talking directly to each other.
Can we relate tests to inscriptions and inscription devices? Test runs can certainly lead to thousands, millions of inscriptions over the course of a day; most of those inscriptions are internal, in that the software is noting that an assertion passed, but I label them as inscriptions nonetheless. They’re a very good form of persuasion; if you’re on a project where test runs act as a reliable safety net, then your worry level decreases, you can treat the software’s behavior as a “fact”, and spend time in idea land. Until, of course, a test failure (or, much worse, a failure that your tests didn’t catch) undercuts your software’s facticity.
I’ve been pretty obsessed with A3 reports for the last few months, which are certainly a form of inscription. And one of the strengths of the process is the extent to which the A3 report doesn’t serve as a source of persuasion, the extent to which the “sources of persuasion seem to have disappeared”: if the process is doing well, it’s a summary of facts to which all participants agree. Or have the sources of persuasion disappeared? Perhaps better to say they’ve been distilled down to a trace, as with a scientific paper; I don’t want to underestimate the importance of that trace.
I don’t suppose I can relate this to video games somehow? One issue that I struggle with, especially in games with a large variety of techniques to reach a goal, is how to internalize the various gameplay options that are available to me. Most of the time, I end up leaning on a few standard ways of progressing through a game’s levels; I suspect my experience would be richer if I had a broader tapestry of “facts” to choose from in the form of live tactical (or, better yet, strategic) options. What can games do to help me reach this state? What inscriptions can they present me with to ease this journey? How can I modify my own play styles to reach this state?
christopher alexander on our birthright
May 17th, 2009
The third volume of The Nature of Order, while very good, didn’t have the same impact on me as the earlier volumes did. Having said that, this bit from the conclusion is giving me something to think about:
And in all this that I observe, when I talk to politicians, to townspeople, to developers, when I watch the reaction in the newspapers, when I observe the studied (and to me frightening) neutrality of the journalist preparing to write his story, the most frightening thing of all is the loss that people have of their own feeling.
They no longer know what is inside them, they no longer know what they do know. That is the birthright I refer to, that is the birthright which is being lost.
The birthright being lost is not only the beautiful Earth, the lovely buildings people made in ancient times, the possibility of beauty and living structure all around. The birthright I speak of is something far more terrible; it is the fact that people have become inured to ugliness, that they accept the ravages of developers without even knowing that anything is wrong. In short, it is their own minds they have lost, the core, that core of them, from which judgment can be made, the inner knowledge of what it is to be a person, the knowledge of right and wrong, of beauty and ugliness, of life and deadness.
And since this inner voice is lost, stilled, muffled, there is no possibility—or hardly any possibility—that they can cry out, “Oh stop this ugliness, stop this deadness which floods like a tide over the land.” They cannot do that successfully, too often they cannot even cry out, or let the cry be heard, because the source of such a cry has almost been stilled in them.
(A Vision of a Living World, pp. 681–682)