dbcdb

August 27th, 2005

I wish that I knew more about certain aspects of modern computer technology, especially its information-management aspects. Examples of things that I wish I knew more about:

  • Java.
  • Ant.
  • Eclipse, especially its automated refactoring tools.
  • How to write a web page that doesn’t look like it was written a decade ago.
  • Web pages that accept input.
  • Web pages that are generated on the fly, by a trendy new language (i.e. not PHP); probably Ruby on Rails.
  • XML.
  • XSLT.
  • RSS/Atom.
  • Databases.
  • FitNesse.
  • Apache.
  • Web services.
  • Subversion.
  • GUI creation and design.
  • AJAX.

Also, there are things about my own information management that bother me. Examples:

  • I give information that I’m interested in to others (e.g. book ratings to Amazon) without keeping a copy of such information myself.
  • Book links in my blog are ugly, and point to an outside source.
  • I’m using Windows to get at my iPod, and it’s not as easy as I’d like for me to edit both its contents and the presentation of its contents.
  • It would be nice to keep (and make available) a list of books that I own or have recently read.

(Whenever the above says “book”, read “book / cd / video game / dvd.”) It would be nice to be able to fix some of these issues, while brushing up my agile development chops at the same time. (Especially in ways that they aren’t getting brushed up at work.)

Last winter break, I started on a project to address some of these issues. Which was aborted early, its only tangible output being a series of posts here on Java. Analyzing this with my finely honed agile management skills, what went wrong? Some things that I didn’t pay attention to:

  • Sustainable Pace. Realistically, I won’t have any time during the week to program on this. (Plan, maybe, but not program.) If I’m lucky, I’ll have a couple of hours at a time on the weekend to program. When I started, I was using a technology that I was unfamiliar with (Java GUI programming); I had a free week to learn about this, which probably would have been enough if I hadn’t spent a lot of that time playing GTA instead of programming. Which was the right decision, but it meant that I never got the product to a usable state that week, even in embryonic form, and it was really hard to make further progress once work started up again. So this time, I have to plan all of my work to fit into two-hour chunks that I can work on every week or two.
  • Frequent Releases. It would be even better if, at the end of each two-hour chunk, I could use the resulting functionality to do something new that would be reflected on this web site. It would also be acceptable if there were changes that led to the same web site being produced in a different way: they wouldn’t be visible to the outside world, but they would have value for me in my Customer role (somebody who wants to produce a web page), not just me in my programmer role.

Putting these lessons together, I can’t get too hung up on the final technology that I’ll use. Which is probably correct for other reasons: eight months ago, I was thinking in terms of Java+XML as a final technology, while now Ruby+database seems more likely; once the implementation gets that far, my technological goals will probably have changed again! What I need to do, instead, is come up with an initial story that I can implement in two hours or less, that will result in a change that is visible on my web page, and that will hold up to potentially drastic technology changes under the surface.

If there’s a change on my web page, HTML will have to be generated; if I’m going to carry it off in two hours, I will have to generate the web page by hand. (Or something morally equivalent, like hard-coding strings into a program.) But it would be nice if, a year from now, the same link led to a web page that was generated on the fly, instead of a static web page. So let’s not have the link end in .html.

Fortunately, Apache has a tool (mod_rewrite) that lets you control which content is served in response to a given URL. So there’s my first story: I’ll write a web page by hand, hide it somewhere on my web site, and learn enough about mod_rewrite so that the link http://www.bactrian.org/dbcdb/2 causes the contents of that web page to be served. (The ‘2’ at the end instead of ‘1’ is because of the order in which I plan to implement the early stories.)
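
Something along these lines in an .htaccess file ought to do it, if I’m reading the mod_rewrite documentation correctly (where the hand-written file actually lives is up for grabs; pages/2.html below is just a placeholder of my own invention):

  # Serve a hand-written static page for the clean URL /dbcdb/2,
  # without exposing a .html extension in the link itself.
  RewriteEngine On
  RewriteRule ^dbcdb/2$ /dbcdb/pages/2.html [L]

Later on, the substitution can point at a CGI script or whatever ends up generating pages on the fly, and the link itself won’t have to change.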

What shall I use to plan this? I’ll write up some stories, and put them somewhere, as well as a list of technical / planning tasks that don’t have direct Customer value, which I’ll address as necessary to complete the stories effectively. I’ll ditch the concepts of iterations and releases, though: every story will be really short and lead to a release, so there’s no room for additional layers. Most of the Planning Game will go away. (I don’t think I’ll bother estimating my stories, either: I’ll just have a note as to whether or not I think it’s necessary to split them before implementing them.) No Pair Programming, obviously. Not much in the way of measurement artifacts, but I will make my list of completed stories available, with dates. And I’ll add a dbcdb category to this blog, so that I’ll feel embarrassed if I don’t work on this project enough to create posts that justify that category.

Hopefully I’ll implement the first story this weekend; if not, then next weekend. I’ll also try to get a few dozen initial stories written up over the next week or so. I’ve put up the list of motivations already; right now, it only contains information that’s in this post, but as I learn about more technologies (or stop caring about technologies), I’ll update it as appropriate.

stopped-up sink

August 25th, 2005

In our house, as in apartments that we’ve lived in, sinks periodically get clogged up. If the drain plug gizmo (what is the name for those things?) is removable, we try removing it and seeing if we can get stuff out of there, but in the last few places we’ve lived, they haven’t been. (There must be a way to remove the ones that don’t just pull out, but I don’t know what it is.)

So we use Drano. And then we use Drano a second time, because the first time never helps. If we’re lucky, the second time works.

But the second time didn’t work too well the last time we had to unplug the upstairs sink, so it got clogged again pretty quickly. Liesl got sick of this last night, and the plunger happened to be up there; so she used the plunger on the sink. Which worked great!

The question here is: why did it take us so long to think of trying this? How did we get this mental block where the plunger is the obvious thing to try for a stopped-up toilet, but we’d never thought of trying it for a stopped-up sink? Sigh. At least now we know.

pinkwaters

August 22nd, 2005

In grad school, Jordan introduced me to Daniel Pinkwater’s books. And they’re great! Well, many of them are great, and almost all of them are at least entertaining. (He’s written a lot of books.) For an introduction, I highly recommend 5 Novels; Jordan will be peeved if I don’t mention Lizard Music, and among his most recent work, I cannot praise Bongo Larry highly enough.

But he writes enough books that I don’t feel compelled to go out and buy all of them. So, over the last year or two, I’ve been going through my local library’s collection of his books. (Of which they have most, but not all.) Eventually, though, I ran out of his books. But right next to them were two books by Jill Pinkwater, his wife. (And illustrator of many of his books, though he actually illustrated his own early books.)

And they’re really good, too! Both Pinkwaters’ writings have quite a bit in common: very funny, set in worlds where things that we would consider surreal are quite commonplace, about people who would be considered social misfits in our world. In Jill Pinkwater’s books, some of their social misfit status leaks into the stories: it’s quickly overcome in Buffalo Brenda (which I could imagine being a Daniel Pinkwater book), but Tails of the Bronx is a good deal more serious. Looking through Amazon, I see a few more (Cloud Horse, The Disappearance of Sister Perfect, Mr. Fred, a boring-looking cookbook). I don’t think my library has copies, but that’s what interlibrary loan is for…

paris arcades

August 21st, 2005

The Arcades Project has been sitting on my to-read shelf for a year or so. (I’ve finally started reading it, about which more later.) One thing that’s been bothering me since I heard about the book, though: I’ve been to Paris several times, and I don’t recall ever seeing an arcade there! Have they all disappeared, were there only a few to begin with, am I blind, or what? I have fond memories of arcades in Cleveland (though the terminology {a,be}mused me when I was younger), and I bought a copy of The Wombles at a store in an arcade in London (why are those books out of print? Are there no legions of loyal British readers? Have the Wombles passed out of the country’s imagination? Were they ever popular?), but in Paris, nada.

It’s certainly possible that I’m forgetting having seen arcades in past trips to Paris; we did walk through one on this trip. Cour du Commerce St. Andre, the map suggests. Right near Le Procope, a centuries-old restaurant famous to us for a pasta recipe named after it (I should post that one of these months), where we had a quite nice meal, with quite good mozzarella (not as good as at La Ferme des Mathurins, but that’s hardly a pan) and a lovely muscat wine.

Anyways, fairly early on in the book there’s a quote on the matter saying

The most important of them are grouped in an area bounded by the Rue Croix-des-Petits-Champs to the south, the Rue de la Grange Bateliere to the north, the Boulevard de Sebastopol to the east, and the Rue Ventadour to the west.

So I pulled out my best-beloved map, and looked it up. After some amount of puzzlement (starting from the fact that Rue Croix-des-Petits-Champs runs north-south, so listing it as the southern boundary seems a bit quixotic), I found the area in question; and right there on the map, running through Rue de la Grange Bateliere, we see some streets bounded by dotted lines: arcades! Looking around, there are, in fact, several “streets” on the map that are bounded either by dotted lines or by a sort of dashed line; the legend says the former are tunnels while the latter are arches (“Passages sous voute”, literally “vaulted passages”, which doesn’t tell me much); the next time I go back, I’ll have to figure out what the distinction is. Maybe the tunnels don’t have glass ceilings, and hence aren’t true arcades? (The one I did see in person is marked as an arch, and it did have a glass ceiling.)

Actually, it turns out that there’s more to be learned from that map, even though I’ve looked at it hundreds of times. A little further southeast, for example, I found some streets outlined in red, between the Forum des Halles and the Pompidou center; the legend confirms the obvious guess, that they are pedestrian streets. There’s another clump in the Quartier Latin near the river, near where we stayed this time, home to lots of indifferent restaurants and a lovely little artistic knick-knack/sculpture/toy store called Pays de Poche, at 73 Rue Galande. Are there any other clumps that I don’t know about? I didn’t see any after a cursory glimpse.

Returning to that clump of arcades on the map, my first reaction was that it’s in an area I’m not that familiar with, so no surprise that I wasn’t aware of Parisian arcades. Except that even that isn’t true: the time before last, we stayed in a hotel right near (maybe even on? I’m embarrassed to say I can’t remember) Rue de la Grange Bateliere, so we must have walked right past these arcades (/tunnels) dozens of times. Sigh.

And it must be true that some of the arcades have disappeared: apparently the Passage de l’Opera was destroyed to make way for Boulevard Haussmann, and just north of that are some department stores which may be located where arcades once were. (Or may not; maybe I’ll learn that later in the book.)

I should look and see what Christopher Alexander has to say on the subject. Arcades bring together a few nice ideas: pedestrian thoroughfares through buildings that (like streets) have destinations (e.g. shops) on them, and that have glass ceilings. All of which are fine ideas. In the building where my father works (Kettering, at Oberlin College), there’s a pedestrian thoroughfare cutting right through it, but it really does serve just as a tunnel, with a normal ceiling and only a few doors on the sides. In Harvard Square, there’s a building I used to walk through quite frequently (Holyoke Center? I can’t quite remember what it’s called) that does have many useful doors (including shops) adjoining it. I don’t think it had a glass ceiling (which Google satellite maps seems to confirm), but it had a high enough ceiling that it gave much the same effect. For that matter, the Science Center also fits those criteria fairly well (and it does have a glass ceiling); it opens up in a way a street doesn’t, however, so there are fewer doors opening off of its main thoroughfare.

And Paris has adopted the “glass ceiling” idea to stunning effect over the last few decades. (Side note: Google maps doesn’t cover France! How lame!) The Musee d’Orsay is one of my favorite buildings in the whole world. I can’t say that I’m all that thrilled by either the Louvre’s pyramids or the architecture of the area underneath it, but the glass ceiling does make it a wonderfully open area, and it’s a lot nicer than the courtyard above it. And the enclosed sculpture garden on the north side of the Louvre is my favorite part of the museum (at least architecturally speaking, though I enjoy it artistically speaking as well).

I should start noticing courtyards more, and figuring out what differentiates ones I like from ones I don’t like.

gran turismo 4

August 19th, 2005

Gran Turismo 4 is the first of that storied series that I’ve played. It’s almost the only driving game that I’ve played this generation (the exceptions being the forgettable F-Zero GX and a few rounds of Mario Kart with friends): I got pretty burned out on driving games last generation, and I needed some time off from them. I enjoyed driving games at the start of last generation: Extreme G was actually the first Nintendo 64 game I bought (admittedly, only because all the games I actually wanted were temporarily unavailable), about which I have no regrets, and Wave Race was quite nice, if not the crown jewel that it is frequently claimed to be. But it took me a little while to notice that IGN kept on giving 9 ratings to racing games that were at best good executions of a genre not known for innovations; by the time I figured that out, I’d lost my taste for driving games.

And GT4 was a good choice for my one driving game of this generation. I have no idea how they got graphics like that out of a PS2. The physics model seems better than in other driving games I’ve played: it’s the first game I’ve played that modeled drafting, for example. And I learned a lot more about driving from this game than from other games: in less realistic games, you just have to memorize the course and keep control at completely unrealistic speeds, and in other more realistic games I’ve played, I still succeeded by sticking to the insides of corners and braking enough to stop myself from skidding. But my approach to cornering (and in particular to using the whole width of the track) had to completely change when playing GT4, and honestly I still feel like I’ve only scratched the surface there. It helped that they had a nice set of graduated lessons in the form of driving tests to hone your skills.

Some parts of the gameplay didn’t work so well, though. The way a racing game traditionally progresses is as follows: you start off on easy tracks against bad computer opponents. After getting used to the game and the track, you win; you move on to harder tracks and/or harder opponents. You frequently get some sort of better car as a side effect of winning; that, combined with your increased skills, means that the new difficulties are enjoyable but surmountable.

This is clichéd, perhaps, but not because it’s a bad idea: driving games by their nature give you a limited design space to play in, and there are only so many ways to get a good difficulty gradient in that design space. The GT series, however, is somewhat famous for the ways in which it tries to enlarge the design space: it has lots and lots of cars and lots and lots of tracks (most real-world, some fictional). Which is quite impressive; it doesn’t push my particular buttons, but I acknowledge that it’s a significant accomplishment.

The thing is, though, it makes your progress through the game a good deal less linear. Your choices in cars and tracks start off somewhat restricted (by your budget and lack of driving licenses), but even at the beginning you have many choices, and the number only grows. There are many ways that a player can approach this; I decided to treat it mostly like a normal driving game, and start by playing the first designated beginners’ race.

This was fun: with the only car that I could afford and my initial incompetence and ignorance, I didn’t do well in the races at first. But as I learned the tracks and got better at driving, I placed higher, earned more money from my finishes, was able to upgrade my car (the game has a certain RPG-ish aspect), and with a combination of better skills and better car, was able to win that circuit.

So far, so good; what’s next? There were multiple next-level beginner’s courses (for the different engine positions that your car could have); I picked the one that matched my car’s engine. Like the previous circuit, it started out badly, but began to get better. The thing is, though, it didn’t get better very quickly; some of that could be blamed on my skills (though I don’t think I’m any worse at this sort of thing than your average video game player), but if half the field is pulling away from you on straightaways, ultimately you need to upgrade your car. And the fourth- and fifth-place money that I was earning wasn’t getting the job done fast enough.

So what was I supposed to do? I could have gone back to the easier circuit and earned more money from winning it again, but that would have been boring. So I looked around at other circuits; I found a “Japanese cars of the ’90s” one that I could enter, and surprisingly, it turned out that it was easier than the other circuit that I’d been going through; it was in fact just at the right difficulty level for me.

So that was a good outcome; it would have been better if the game had made it easier for me to find an appropriate circuit to play, but at least I found one eventually. And with the prize money I got out of that, I was able to upgrade my car to an appropriate level to, with a bit more effort, win the previous circuit that I’d been trying.

The story doesn’t end there, however: when I won that Japanese circuit, I didn’t only get money, I got a car. You get all sorts of random cars when you win circuits; most of them are interesting for car collector geeks but useless in racing terms. This one, however, was very powerful. It had a different engine position than the car that I had been using, so I tried the second-tier beginner’s circuit that was appropriate to that engine type, and I found that I could blow away my opponents, even when driving very sloppily. Which is no fun, but what was I supposed to do? I suppose I could have tried to buy a worse car of that engine type, in an effort to get a reasonable challenge level, but that would have felt perverse; basically, that circuit was turned into a loss for me. And that wasn’t an isolated occurrence: that same car let me blow away the field in several other races as well.

The story here wasn’t all bad: while I soon found an even more obscenely overpowered car that I could use to blow away more opponents, that mattered less in the higher circuits. What started to happen was that I would screw up on corners, allowing the other cars to gain a significant lead on me, which I would proceed to eliminate on the next straightaway, leaving us with a more or less level playing field. So to win the races, I had to memorize the courses, learn the appropriate speeds and locations to enter the corner, and execute correctly almost all the time. Classic good, challenging gameplay, in other words.

Ultimately, there’s a huge amount of depth to this game, and a lot of good gameplay to be found; I’m quite glad I bought it. The bad design of the player’s progression through the game is a serious flaw, however. Like several games I’ve played recently, I probably could have enjoyed playing this game for longer than I did, learning all of its intricacies, but given the breadth of video game choices that I have, I felt it was time to move on.

crayon shinchan

August 16th, 2005

I ran into a manga called Crayon Shinchan a few months ago; I used to be a bit embarrassed at how funny I found it, but I’ve given up on that, and just accepted that it makes me laugh out loud on a regular basis. (I mean that quite literally: I really do inadvertently laugh out loud a couple of times over the course of each volume.) It’s about a young boy (around six or so?), with a truly remarkable combination of innocence, bad behavior, and inappropriately adult remarks. The latter could be grating, but it works very well here.

It’s in an unusual format, at least based on my experience. I’m used to Japanese comics broken up into reasonably coherent episodes / stories that are tens of pages long, and to book-length Japanese comics. And I’m used to American comics in both of those formats, as well as newspaper-length individual strips. (I recently ran into a Japanese comic with more or less the latter format, Azumanga Daioh; is it common in Japan?) Crayon Shinchan, however, is divided up into three-page episodes. (Which are typically loosely connected into story arcs that are about 10-15 episodes long.) I’m not sure what to make of this, but it suits the mood of the comic; I don’t think it would hold up as well with longer stories, but three- or four-panel strips would be too short.

The variety of manga that’s available in the US these days is pretty impressive. I was at a Borders a week or two ago, and they actually had a rather better manga selection than the local comic book stores. I think part of the deal there is that the comic book stores skew fairly strongly towards male customers, while Borders doesn’t have that bias, and there are a lot of manga published in the US these days that are targeted towards teenage girls. (For that matter, I suspect that the Japan-oriented male youths of America don’t necessarily spend much time in comic book stores, either…) Some of which I read; I probably shouldn’t admit to liking Azumanga Daioh or Love Hina, but I do! (I guess it’s okay for me to admit to liking Banana Fish, though.) I’m sure that there’s still a vast amount of material that isn’t making it to the US, but three or five years ago I never would have dreamed of having access to the current range of material.

sudoku

August 12th, 2005

One of my coworkers pointed me at The Daily Sudoku. I’ve tried and enjoyed a few; I’m not sure how long I’ll keep it up, but I’m not stopping yet. So far I’ve only tried ones rated easy or medium (and, honestly, I can’t tell the difference between the two levels); apparently the hard-rated ones are a significant change. I’ll be curious to see if they make me think in interesting ways, or if they just make me go through long tedious searches to make progress. (I tried to order the book from the site – it seems clearly worth two pounds, but I ran into some strange PayPal glitch. Sigh. I thought this electronic payment stuff was supposed to work well by now?)

It reminds me of some other puzzle, but I can’t think of what. The common idea is this: say you have boxes where the choices for each box are 12, 12, 1234, 1234. Then you know that the first two boxes use up 1 and 2, even if you don’t know the order, so you can reduce the choices to 12, 12, 34, 34. Where else is this idea important?
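
To pin the reduction down, here’s a quick sketch in Java (representing each box’s candidate set as a bitmask is just my own choice of representation, nothing canonical):

  // Candidates for each box are bitmasks: bit d set means digit d+1 is
  // still possible. If two boxes in a group have the same two-candidate
  // mask, those two digits can be removed from every other box.
  static void eliminatePairs( int[] candidates ) {
    for ( int i = 0; i < candidates.length; i++ ) {
      if ( Integer.bitCount( candidates[i] ) != 2 ) continue;
      for ( int j = i + 1; j < candidates.length; j++ ) {
        if ( candidates[j] != candidates[i] ) continue;
        // Boxes i and j use up these two digits between them.
        for ( int k = 0; k < candidates.length; k++ ) {
          if ( k != i && k != j ) candidates[k] &= ~candidates[i];
        }
      }
    }
  }

Running that on the example above turns 12, 12, 1234, 1234 into 12, 12, 34, 34, as desired.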

copyright office

August 12th, 2005

Just what is the copyright office thinking? I can’t imagine they’re doing this out of bad intentions, but it’s pretty depressing that they’re apparently clueless about this sort of thing…

donkey kong jungle beat

August 9th, 2005

It’s been a while since I discussed a video game I’ve finished, hasn’t it? Not because I haven’t been finishing games; I’ve just been busy writing about other things. (The video games du jour are Shenmue II, on my new Xbox (about which more later), when Miranda is around, and the stunning Resident Evil 4, when she isn’t.)

Oldest on the stack of finished games is Donkey Kong Jungle Beat. I was pretty excited when Donkey Konga, a music game with drums, was announced for US release: another sign that publishers are less likely to prejudge US customers’ tastes in Japanese video games. But when the game itself came out, I ended up not buying it: the song list was full of pop songs that I didn’t particularly like. (I’d much prefer video games with good music that I’ve never heard before; though why did I claim in the linked-to post that DDR is a Namco game? It’s by Konami.) I wouldn’t mind drumming to the Zelda theme, but that’s not enough to get me to buy the game.

Actually, what’s really a sign of the penetration of Japanese games into the US is that there are two drumming video games available here, the other being Namco’s Taiko Drum Master. Which has the Katamari Damacy theme in it, but even that isn’t enough to get me to buy the game by itself. (Even if Miranda and I still spontaneously sing it occasionally.)

But Nintendo had another use for their drum controller: a side-scrolling platformer called Donkey Kong Jungle Beat. Which had been getting positive mentions ever since it appeared at an E3; being a sucker for weird game ideas, I wanted to give it a try.

Not very good, I’m afraid. Ultimately, I just don’t have enough nostalgic fondness for 2d platformers to normally want to play them in preference to today’s much richer games: they’re not the sort of timeless simple pleasure that, say, a good puzzle game (Tetris) is. And while there are good games with very simple controls (e.g. Super Monkey Ball), the controls in DKJB felt to me like a gimmick. There are only three or four things you can do at any point, so when you get to a weird creature, you just hit the controls at random a little, find the magic effect of clapping (or whatever), and continue. If that doesn’t quite work, you have to manipulate the drums to jump in the air at the right location, and then clap. Whoopee.

And, to add injury to insult, my hands really hurt after playing it. Tip for all of you married people out there: take off your wedding ring before trying this game. But even after that, I suspect your hands need some toughening up. Miranda liked it enough that I went through the first twelve levels (in all of 4 hours or so: not a hard game unless you want to get as many bananas as possible), but I didn’t feel like replaying earlier levels to get better scores on them to unlock the last four levels.

A decade and a half ago, the gameplay would have been fine, and I would have had limited enough options (for reasons of game availability and finances) that I probably would have been happy to replay the levels over and over to earn the top medals on all of them. I’m happy that I can set my standards higher now.

code reviews, tasks

August 6th, 2005

I was unhappy with the result of our pair programming meeting for various reasons: we were all dissatisfied with how things were going, and I was pretty sure that we were doing something wrong, but I didn’t know what it was. We’d adopted short-term measures to ease some of the pains, but I didn’t see them as leading to a coherent solution that I’d be happy with.

After thinking about it for a day, I decided that our changes were leading in a direction that I certainly wasn’t happy with: while I’m still not sure of the merits and demerits of pairing, I am sure that it’s good for us to spend more time focusing on the quality of our code, and to spend more time in general talking about code. If we’re going to pull back on pairing, we should still try not to give up on that goal: so I instituted a policy that all non-trivial checkins would require a code review. (If the code was entirely developed while pairing, that counts as the code review, of course.) Code reviews are probably not quite as good as pairing for quality control, but they’re a lot better than nothing: I know that, when I was working on GDB, I got a lot of useful feedback from others’ code reviews, for example.

I felt better after that: people were talking more, the checkins were a bit cleaner. Not a lot cleaner, but that will come: editing, like any other skill, improves with practice.

A week or so later, we ran into another problem: the assignment that one of my team members was working on that week wasn’t done, it wasn’t clear to me when it would be done, and I wasn’t at all confident that I’d like the results when I saw it. (Of course, my lack of confidence may have been largely caused by my lack of information: maybe it was great code, I just had no easy way of telling.)

This wasn’t an isolated instance: when we estimated a story as taking a full week to accomplish, it would turn out to take more than a week most of the time. We were fooling ourselves with our estimates, and we were skimping on design: it’s one thing to be against “Big Design Up Front”, but that doesn’t mean that some amount of design isn’t appropriate.

And now a bunch of things clicked. I’d been aware for several months that we weren’t really planning in the XP way: the relevant issue here is that we were working exclusively in terms of “stories” (basically, features with user value that can be implemented in a week or less), but not breaking them down into “tasks” (individual technical steps necessary to implement the features, each of which can be accomplished in a single pairing session). When I first realized that we were doing the planning wrong, it wasn’t clear to me that this difference was a big deal, but all of a sudden introducing tasks seemed to solve several problems that we were having:

  • Breaking a long story into tasks should make it easier to accurately estimate the story’s duration, with a bit of practice: a six-task story will probably take longer than a four-task story, but that wouldn’t have been so obvious before breaking it up into tasks.
  • The process of breaking a story into tasks gives us a chance to talk about the story together and do an appropriate amount of up-front design.
  • If a task takes longer than expected (in particular, longer than a day), that’s an immediate warning sign that something unexpected has turned up. We can deal with the problem right then, by calling an impromptu design session and breaking up the task into smaller tasks as appropriate.
  • In the unhappy event that a story still takes longer than a week to accomplish, at least I’ll have a much better idea of its current status, because I’ll know what tasks have been accomplished and what tasks haven’t been accomplished.
  • It seems plausible that it will significantly improve our mood towards pairing: it’s not much fun showing up in the middle of somebody else’s project, working on it for a little while without really knowing what’s going on, and then leaving while that person continues. It’s a lot better if you come in at the beginning of a coherent project, work on it together for a few hours, and finish it.

We’ve been doing this for a grand total of a week now; it’s probably largely my imagination, but I’m a lot happier with how things are going. We actually had a pretty bad week in terms of completing stories (we were still underestimating how long the longer stories were taking), but the one problematic story was in much better shape: we’d finished 5 of the 6 tasks that we’d broken that story into, we knew the last task was turning out to be more complicated than we expected, so we found a coherent way to split it into two tasks.

In our weekly meeting on Friday, most of the stories were fairly well-defined, but one of them was pretty amorphous. So we spent about 20 minutes breaking it up into tasks, talking about pros and cons, with lots of people chipping in about what they remembered about the different pieces of affected code. At the end, there was general agreement that the story was significantly less scary than it had seemed before we started talking about it.

And maybe it’s my imagination, but I think I’ve been enjoying pairing more. Yesterday, for example, I had a very pleasant time writing a really solid class. I particularly appreciated my partner’s winces whenever I chose a bad name for a variable: joke all you want, but little things like that are important. (Incidentally, we also tried out programming by intention some more, with good results.)

Not everything is perfect yet, but I’m much more optimistic than I was. We’re still underestimating large stories, but hopefully tasks will give us a better handle on that. Significant issues still remain with pairing: in particular, our differences in familiarity with different parts of the code and in programming background make pairing hard, but I can deal with that, and those differences will lessen over time. As long as we have a plausible path for improvement there, I’m happy.

On the one hand, I feel a bit silly that we didn’t start using tasks a lot earlier: I should have been paying more attention to what the XP books were saying, because the authors of those books have a lot of useful experience. (Incidentally, it’s fascinating reading the XP mailing list.) And I’ll certainly keep on rereading various XP books to find more mismatches between our practice and their descriptions that might shed light on problems we’re having. On the other hand, making mistakes is a classic way to learn, and for good reason: I have a much more active grasp of this issue than I would have if we’d done things right from the start.

My next management issue, aside from monitoring this one: reading about Scrum, to see if we can use that as a blanket methodology for the entire software team (i.e. my group, the other two groups parallel to it, and my manager’s group). It’s compatible with but less specific than XP, and explicitly addresses issues involving multiple groups; with luck, it will be something we can all get behind. But I have some reading to do to learn more about it, to see if I think it is a good match for current and potential problems that the larger group has.

pair programming update

August 5th, 2005

About three months ago, my team started seriously experimenting with pair programming. It’s been more than long enough since then for us to take stock, so we had a meeting three or so weeks ago to talk about our experiences.

The results were mixed, and really hard for me to get a grip on. Some good things:

  • Pairing did help the quality of our code.
  • More people know more about more of the system.
  • The daily standup meetings that we started doing at the same time as we started pairing helped me (as a manager) keep much better track of what was going on midweek.
  • Sometimes, pairing with the right person could save a lot of time debugging an annoying problem.
  • A pair seemed more willing to ask for help quickly than a programmer working alone.

I might have forgotten a few (I’m at home, my notes are at work), but that’s the basic idea. The last one, in particular, interested me: I wasn’t expecting it, though in retrospect it makes sense. After all, the macho programmer ethos means that a single lone programmer is loath to admit that he can’t solve a problem by himself; if two programmers both can’t figure something out quickly, though, they’re much more likely to figure out that they need outside help. (When appropriate, of course, especially when there’s specific knowledge that they’re missing.)

The bad side (again, I might have forgotten a few):

  • It wasn’t at all clear that we were more productive pairing than when working alone.
  • We didn’t look forward to pairing.

The first of those isn’t necessarily a show stopper: we agreed that we were willing to trade less code written for higher quality code and more knowledge transferred, and that we weren’t in a situation where we needed to crank out as much code as possible. So it seemed plausible that pairing was a long term productivity gain for us; still, it was somewhat disconcerting, since the literature suggests that it should be clearer that pairing is improving our productivity.

The second, though, is a real problem: I got the feeling that we (I, certainly) wanted to enjoy pairing, but something really wasn’t working right. And I couldn’t figure out what it was.

Our conclusion was that we saw enough good things that we wanted to keep on trying. But we needed to leave more breathing space, at the very least. We decided to start by drilling down on our feelings of where pairing was more productive and where it was less productive, and then during our standup meetings, we’d use those criteria to figure out who would pair at all that day, not assuming that everybody would always pair.

(It’s getting late, and this is as good a stopping point as any; I’ll post a followup bringing the story up to date in a day or two.)

dan johnson, shanghai crab

August 1st, 2005

I was kind of bummed when Erubiel Durazo got hurt, and Scott Hatteberg’s performance has certainly been nothing to write home about this season. (I still have no idea why he’s gotten the contracts he has from Billy Beane.) But Dan Johnson’s performance has been a pleasant surprise: I’d literally never heard of him, but after 174 plate appearances he’s slugging .500. Who knows how long he’ll keep that up, but I guess he isn’t a complete flash in the pan: looking at his entry in the 2005 Baseball Prospectus, I see “Johnson is ready to step in and take Hatteberg’s job”, and they certainly got that right.

We’re watching the shanghai crab episode of Iron Chef right now. I’m used to seeing live seafood there (driving nails through the heads of pike eels thrashing around on the cutting board), though seeing the poor crabs put live into a hot wok was a bit much. A first for me, though, was a crab with its shell off, in the process of being disemboweled, and you could still see its heart beating…

livres

July 31st, 2005

I did some book shopping in Paris. A bit silly, in these days of www.amazon.fr, but old habits die hard. And FNAC is still pretty cool, though not quite as impressive to me now as it was the first time I set foot in it.

I bought most of Bruno Latour’s books that hadn’t been translated into English, some comic books (standards: Tintin and Asterix), a few Barbapapa books for Miranda, and SGA1. I felt a little silly about the comic books, not about the ones I did buy (they are both deservedly classic series) but because I didn’t look for anything else: France is one of the great comic book-producing nations, and I walked by several good-looking stores, but I just wasn’t in a very inquisitive mood, I guess.

The new printed version of SGA1 turned out to be the same version that’s available online. Still, it’s nice to have a copy that’s easy to hold in your hand. Who knows when I’ll get around to reading it, but I suspect I will at some point over the next year or two (more likely it than some of the more experimental Bruno Latour books); I think (and hope) that it’s at a level where I can read it without excessive effort, and it’s an important part of mathematical history. I don’t want to lose contact with math entirely, after all, and reading classic works seems like a good way to keep my brain active.

The whole Grothendieck reprint story has to be seen as a victory for the forces of good. I spent some time this weekend reading Free Culture, by Lawrence Lessig, and now I’m really depressed, but it’s great to see some people saying that the current situation is ridiculous and snubbing some of its more odious aspects.

The technical bookstore that I patronized seven years ago seems to have disappeared, more’s the pity. But it remains the case that general-purpose bookstores in Europe have much better math sections than their counterparts in the US. I’m not sure why that is, but I’m not complaining. It was fun browsing; a lot of familiar titles, and some new titles on familiar subjects. Nothing new and exciting leapt out at me; in a decade or two, maybe I’ll go back and catch up on some of the advances in the field. Probably not, to be honest, but who knows what the future will bring; I’ve enjoyed spending the last two or three years catching up with (some of) the advances in computer science that I missed over the previous seven or eight years, after all.

programming by intention

July 29th, 2005

Ever since I read Refactoring to Patterns, I’ve been thinking that I should use Compose Method more. (I should really reread Smalltalk Best Practice Patterns to see what other low-level patterns I’ve missed.) But I’m too timid to perform quite that drastic surgery on the thicket of code that I’m working on.

I just finished Extreme Programming Installed, though, and the authors talk about an interesting way to develop your code so that the methods are nicely composed. It’s called “programming by intention”, and works as follows: whenever you sit down to implement a method, you simply write down a method call explaining what you want the method to do first, another one explaining what you want it to do second, etc., without worrying yet about whether there are, in fact, methods with those names. If there aren’t, you then go and implement those methods. (Again programming by intention, though it should stop after two or three levels.)

I tried this yesterday, and it was great! I wanted to write a method that parsed a series of data structures; I had some ideas about how the low-level details would work, but I decided to just put those out of my mind and program by intention. We were parsing a sequence of data structures for as long as data remained, and the conditions for when data remains were slightly nontrivial in this context, so I started by typing (more or less, details are changed):

  while ( dataRemains() ) {

Each data structure starts with a type field and a length field, each expressed as a multibyte value, a format that I hadn’t yet had to parse. So:

    int type = nextMultibyteValue();
    int length = nextMultibyteValue();

Next, we start printing out the data. We’d like to output a string representation of the type codes, so:

    printTypeCode( type );

After this, we needed to output the data as a sequence of bytes, with its length given by the number we just read; I already had code to do that, so I just called that code:

    printNextBytes( length );
  }

Once I’d done that, I implemented dataRemains, nextMultibyteValue, and printTypeCode; each of them was easy to implement now that I wasn’t thinking about anything else. (And I knew I wasn’t wasting my time because I’d already shown that, once those were implemented, I’d have exactly the functionality I needed.) And the resulting methods looked great (though Compose Method suggests that I should have gone further and extracted the entire body of the loop into a method, which probably wouldn’t have been a bad idea).
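
For what it’s worth, nextMultibyteValue might have come out looking something like this; the real encoding in our code was different (details are changed, as I said), so treat this as a generic sketch, with nextByte standing in for whatever low-level accessor is available:

  // Hypothetical encoding: seven value bits per byte, with the high bit
  // set on every byte except the last one.
  int nextMultibyteValue() {
    int b;
    int value = 0;
    do {
      b = nextByte();
      value = ( value << 7 ) | ( b & 0x7f );
    } while ( ( b & 0x80 ) != 0 );
    return value;
  }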

This dovetails very well with test-driven development. One important benefit of TDD is that it focuses your mind on doing one thing at a time: either you’re focused on writing a test to express your next goal, or you’re focused on getting the test to pass, or you’re focused on cleaning up your code. Programming by intention, in turn, helps narrow your focus during the second of those steps: while you’re getting the test to pass, concentrate on what you want your implementation to do on a conceptual level, then drill down and repeat.

Side note: in a recent post on the XP mailing list, Kent Beck talks about how top down / bottom up isn’t a very useful dichotomy for him. Which I agree with to some extent, but programming by intention suggests that a particular form of top down programming is very useful when programming on a small scale. I’ll have to think about the extent to which this is the case at other levels of XP: is top-down the way to go when you’re trying to get an acceptance test to pass, for example? (Probably on the design level, but not on the implementation level, because you’d go far too long without working code.)

upgrade finished

July 28th, 2005

I spent a little more time playing around with doing the upgrade piecemeal; it turned out that, while there were some pleasant groups of packages that came together in a clump of 10-20, most packages either were happy to be upgraded individually or were part of a huge clump that required a few hundred packages to be upgraded simultaneously. (Upgrade ftp, then libreadline has to be upgraded, then all other CLI programs have to be upgraded, and they pull in all sorts of random libraries to upgrade, etc.) And once that happens, you might as well upgrade everything. So I did; worked fine. (I’m still planning to go and look through my list of installed packages just to see what I should consider removing, though.)

I’m a little annoyed at their “Fedora extras” thing. At first, I was happy because it meant that I could get galeon from them instead of having to find it at another repository. (Good thing, too, because the repository I had been using for that doesn’t seem to have an FC4 version available.) But it turns out that they don’t bother to keep the extras repository in sync with their other repositories; they’ve done an upgrade of mozilla since their last galeon upgrade, so right now I can’t install galeon at all because there’s no easy way for me to get the old mozilla version instead of the new one. Sigh. Still, they’ll probably work out the kinks over the next few months.

upgrading to fc4

July 26th, 2005

As threatened after my last OS upgrade, I’m upgrading to FC4 relatively soon after its release. This time, the release notes are very clear about the easiest way to upgrade: install a single RPM (which basically tells yum to look for FC4 packages instead of FC3 packages), and then do yum upgrade.

I’m actually not quite doing that: since I’m not sure how long it will take to download all that stuff, I’m trying to do it piecemeal. Which is sort of a fun game: sometimes, if I want to upgrade a single package, it just upgrades that package plus maybe a handful of others, but sometimes it indirectly pulls in hundreds of other packages.

Unfortunately, there’s some sort of version problem with galeon, my web browser of choice. (It’s included in their ‘extras’; maybe they don’t rebuild those as frequently as they should?) So I’m using firefox for now, which is fine. And there’s some sort of dependency failure with certain java-related packages; I’m not sure what the deal is with that, but for now I don’t mind just removing the packages in question.

I expect that I’ll be doing this over the course of the next week or so; gives me something to do, I guess.

(more baseball)

July 25th, 2005

Despite what I said a week and a half ago, maybe the A’s are going to make the playoffs; they’re tied for the lead (and about to take the lead) in the wild card, after all. Even the AL West title seems not out of reach right now. They will, of course, cool off eventually, but they’ve shown over the last few years that they’re more than capable of ridiculous second-half performance. (Is that luck, or is that a skill that some teams or players have? Any studies one way or another?)

Too bad that the Indians are going in the opposite direction, and are so much further behind their division leader…

howl’s moving castle, families

July 25th, 2005

We went to see Howl’s Moving Castle last weekend. Actually, Liesl and I went the weekend before that, to make sure it was okay for Miranda; we decided that it probably was, though we checked first with Miranda to make sure.

We all enjoyed it, though I don’t think it will end up as one of my favorite Miyazaki movies. One thing that struck me: while I have nothing against love stories, they’re all about falling in love instead of loving people. I don’t have any plans to ever do the former again, while the latter is a huge aspect of my life. And while the movie had its love story aspects, they were muted, and even explicitly questioned at the end. Instead, the relationship aspects of the movie focused on building a family, something very dear to my heart. And quite a family it was, too: I really like the “collection of misfits banding together” trope, families as a group of people who have made an active choice to stay together. (This was something I really liked about the third volume of the Kushiel trilogy, too.)

On a related note, we watched Shrek 2 on DVD this weekend. It has a little bit of the “actively chosen family” theme in it. But it’s also about two people reaffirming that they are very much in love; I for one cried at the end of it. Again, Kushiel does this, in the second volume instead.

literate programming

July 24th, 2005

Prompted by Knuth’s delightful article “The Errors of TeX”, I just read his collection Literate Programming. (Which contains the aforementioned article, among others.) A fascinating read, for multiple reasons: Knuth is a really smart guy, whose opinions I very much respect, but he’s writing from a context that I frequently find very hard to understand.

The most dramatic example of this is the first article, “Structured Programming with go to Statements”. It was written in 1974, three years after I was born, and I never learned Algol 60 (which I think is more or less the language that the article uses). I understand that goto once was used much more, and I can imagine a world in which people had yet to decide that a few iteration constructs (for, while, and the occasional (in my experience very occasional) do-while), combined with a sprinkling of break and continue, were good enough for 99.9 percent of your needs. And while I suspect that some of the other solutions he discusses (Zahn’s iteration constructs) are too complicated to be a part of a well-designed programming language, I can imagine an alternate history in which they would have a more prominent role.

But what I can’t imagine is a world without functions, yet that seems to be the world that this paper was written in. Did programming languages of the time really not have functions? (Certainly some of them did.) Or did they have functions but only allow a single exit from those functions, in which case they were irrelevant to the question at hand? That must have been the case, but I still find it hard to wrap my brain around. Compiler technology must have played a big role, too: the article is very concerned with optimization, and I doubt compilers were too good at inlining at the time.

So I felt like I was in bizarro world when I was reading the article. But it’s always fun visiting other worlds, and there were some hints of worlds that are dear to my heart in the article. At the end, he hints at object-oriented programming (or at least modules). Much more interestingly, earlier in the article he discusses the possibility of automated refactoring tools (without using the term “refactoring”, of course). But the use he proposes for those tools is completely different from the current uses of those tools: these days, we want to apply behavior-preserving transformations to our code in order to make it clearer, while Knuth wanted to start from clear code and apply behavior-preserving transformations to make the code less clear, but more optimized!

A decade later come the articles on literate programming. I knew that this was a way to embed source code and a description of what the code does in a same file, to enable you to present a nicely-typeset, richly-commented version of the code. I hadn’t realized quite how lengthy the comments were that Knuth proposed, however; I also hadn’t realized that literate programming divides code up into fragments that can be portions of functions, instead of entire functions.

At least functions play more of a role here than in the earlier paper. But they still don’t play enough of a role. Over and over again, I had to ask: why aren’t these fragments of code functions in their own right? There are probably a few answers to this. For one, I suspect that short functions still weren’t in the air much at the time. For another thing, I suspect that compiler and programming language support for inlining wasn’t very good, so it would have been an unacceptable performance hit. And a third thing is that the code fragments wouldn’t have always worked on their own because they referred to variables external to the code fragment: you need objects for that style of programming to come into its glory.

So it’s pretty amazing to see how Knuth comes towards modern best practices without modern programming languages and tools. And it’s embarrassing to realize that Knuth probably understood the need for Compose Method two decades ago better than I do now. He has also answered my objections to the paper I discussed above: his literate programming discussions are in the context of Pascal, which apparently doesn’t allow multiple exits from a function, so Knuth provides a return macro to get the same effect (using goto).

[Side note: I never learned Pascal, for no particular reason. (I’m the right age to have learned it, but I was cool enough to jump straight to C. Or something.) I was aware of its pointless procedure/function distinction; I was not aware that it distinguished between calculating the return value for a function and actually returning from a function. Weird.]

Getting back to the “literate” part of literate programming, the length and quality of exposition of the comments in the code samples that are given is pretty stunning. Looking at this through XP glasses, I suspect that the comments are rather too long (too many artifacts not pulling their weight), but it would be interesting to try it out once or twice. (At the very least, I should get around to reading TeX: The Program: it’s only been sitting on my bookshelf for a decade and a half by now…) My first reaction was that lengthy comments could be good for presenting a finished, polished program to the outside world, but not so great for a live program that is constantly being modified, because the comments are an extra burden and could get out of date too easily: far better to spend your time on making the code itself clear. But, in “The Errors of TeX”, Knuth talks about how the comments made it much easier for him to perform some serious surgeries on the code, so I could easily be wrong there.

Some more cultural shifts that you see in the book: at the start of the book, he’s talking seriously about mathematical correctness proofs of programs. By the time he gets to “The Errors of TeX”, though, he’s using test input to exercise every aspect of his code. A big improvement; personally, I’d rather lean more on unit tests, but he’s working in a context where that isn’t so realistic. Also, data structures sure have changed over the years. When I think of, say, a binary tree, I think of a node as being represented by a chunk of memory with two pointers and a data value in it. But in this book, a binary tree is three global arrays (data, left, and right), with a node being an index into those arrays. (So many global variables!)
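
To make that last contrast concrete (in anachronistic Java, with names of my own choosing):

  // Modern style: a node is a chunk of memory with a value and two pointers.
  class TreeNode {
    int data;
    TreeNode left, right;
  }

  // The book's style: three parallel global arrays, with a "node" being
  // just an index into them (and some sentinel index standing in for null).
  static final int MAX_NODES = 1000;
  static int[] data = new int[ MAX_NODES ];
  static int[] left = new int[ MAX_NODES ];
  static int[] right = new int[ MAX_NODES ];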

I’m definitely putting more of his books on my “to read” list.

strategy of the weak, revisited

July 24th, 2005

I’ve thought a bit more about the whole strategy of the weak thing. In some sense, actually, the DoD’s analysis is reasonably on-target. The analogy here is to think of the US as a bully: we’re the biggest, strongest kid in the school, and we have no compunctions about beating up people who are substantially less powerful than us (in various ways, not necessarily strictly militarily ones) if we can see short-term benefits in that.

And, of course, it’s natural for a bully to dismiss those less powerful people who won’t meet him on his own terms as “the weak”. There are various strategies that weaker kids can employ against bullies: banding together with other kids (international fora), telling the teacher (judicial processes), or feeding antifreeze to their dogs (terrorism). From this point of view, the DoD’s analysis actually looks pretty good, though the tone could use some work.

But there’s still a serious problem: it says “a strategy of the weak”, not “strategies of the weak”. And this linkage doesn’t work in the real world or in the analogy: the kid feeding antifreeze to the dog is not the kid telling the teacher, and if Osama bin Laden is complaining to the International Criminal Court, I haven’t heard about it. Pretending that these are all part of one strategy is, well, obscene.