
dbcdb: generated using Java

September 4th, 2005

This page looks the same as it did last week, but it’s being generated using Java. Whee. (And I hope it will look a little different by the end of the weekend.)

I didn’t spend too much time programming: my time was mostly spent managing and understanding infrastructure. There’s now an acceptance test which runs all the unit tests, and a program to run all the acceptance tests. I had a little bit of confusion with the multiple programs named ‘java’ and ‘javac’ on my system, but now the right one is being called.

My most pressing current infrastructure issue is that I really don’t understand how javac works with dependencies. If Foo.java uses a class Bar and if Bar.java uses a class Baz, then it looks like, if I modify Bar.java and tell javac to recompile Foo.java, it will also recompile Bar.java, but if I modify Baz.java, then compiling Foo.java doesn’t cause Baz.java to be recompiled. Or maybe I need to go one level deeper, or maybe I’m misunderstanding things completely. (Hmm: maybe javac knows that touching Baz can’t change Bar’s interface, so, just from the point of view of compiling Foo correctly (as opposed to having it and all the code it calls run correctly), it isn’t necessary to recompile Baz in that situation.)
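To make the situation concrete, here’s a toy version of the dependency chain I mean (the names are made up; in my project each class lives in its own .java file, but I’ve compressed them into one here):

```java
// Foo depends directly on Bar, and only indirectly on Baz. With separate
// files, `javac Foo.java` will recompile Bar.java if Bar.class is out of
// date, but it doesn't seem to follow the Bar -> Baz edge, so a modified
// Baz.java can be left uncompiled.
class Baz {
    static String baz() { return "baz"; }
}

class Bar {
    static String bar() { return "bar->" + Baz.baz(); }
}

public class Foo {
    public static String foo() { return "foo->" + Bar.bar(); }

    public static void main(String[] args) {
        System.out.println(foo()); // prints foo->bar->baz
    }
}
```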

For now, I’m using a plain old Makefile. Which isn’t right, either, because dependency info isn’t expressed in the Makefiles, but I’m hoping that make plus whatever understanding javac has of dependencies will be good enough. (As long as I don’t do parallel builds; this machine only has one core, though, so that’s acceptable.) Clearly I should learn about ant; I’ll make sure to do that as part of my next iteration.

Most of the reason why I had to write so little code is that, for better or for worse, I saved the non-GUI code from my last attempt at this. Which was a very small amount of code, but I think saving it was a good move: as it was, the changes required for this iteration took more than my two-hour goal. I still had to learn a little bit of Java: I’d never written a unit test in Java for code doing output to a file before. (Or if I had, I didn’t remember.) Working from C++, the theory is pretty obvious – there must be some abstract “output stream” class – but I didn’t know how the details worked.

So I looked at the documentation (I really like javadoc), and, sure enough, we’ve got java.io.OutputStream. Except that it talks about byte streams, and Java (quite sensibly, but unlike C++) makes a distinction between characters and bytes. Nothing leapt out at me from the javadoc, so I pulled out a book and found that Writer was the interface that I wanted. Its interface was a little sparse; digging around, I found that PrintWriter is more useful.

My first reaction is that I’m not sure that separation is a great idea – why not put all the functionality on Writer? But the answer is presumably that doing so would make Writer a non-interface (since you would want default implementations of the extra methods), which would limit its flexibility given the way Java does multiple inheritance. It also raises the question of whether the function I’m writing should accept a Writer or a PrintWriter; I decided on the former, since it makes the function a bit more generic. Which I think is good design, even if it apparently isn’t always the Java way.

So that’s the function: what about the test? I seem to recall reading that Java has a class called StringBuilder that you’re supposed to use to construct Strings piecemeal, but it doesn’t have a relation to any of these classes. Digging around, though, there’s something called StringWriter which is a sort of Writer encapsulation of StringBuilder. Why doesn’t StringBuilder just implement the Writer interface directly, though? Beats me, not that I’ve thought about it much.
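Concretely, the shape I ended up with is something like this (the class and method names here are invented for illustration, not my actual code):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.io.Writer;

public class PageWriter {
    // Accept the general Writer interface, and wrap it in a PrintWriter
    // internally for the convenience methods.
    public static void writeTitle(Writer out, String title) {
        PrintWriter pw = new PrintWriter(out);
        pw.print("<h1>" + title + "</h1>\n");
        pw.flush(); // flush but don't close: the caller owns the Writer
    }

    public static void main(String[] args) {
        // In the unit test, a StringWriter stands in for the file:
        StringWriter sw = new StringWriter();
        writeTitle(sw, "The Arcades Project");
        System.out.print(sw.toString()); // prints <h1>The Arcades Project</h1>
    }
}
```

Because writeTitle takes a Writer, the production code can pass a FileWriter while the test passes a StringWriter and just compares strings.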

And then, after all of that, the unit test failed in a very mysterious way. At first I assumed that I was misusing the classes in question, but it turns out that the problem was in my understanding of Java’s compilation model, as mentioned above. Once that was all fixed, the acceptance test passed the first time; yay!

I’ll probably end up deleting some more of the code that I wrote in December, but it’s served its role as a crutch. And the new code is wonderfully pregnant with possibilities: I look at it and I can see how I’ll want to move this method to this class, and I’ll probably want to extract a class out of these functions, though I can’t quite see the details of the latter. But right now, the code is quite clean enough for the current desired functionality; I’ll do those refactorings as appropriate for future iterations. (I’ll certainly move some methods in the next iteration: that’s the right time, while right now is too early.)

I’m also starting to completely rethink the priorities of some of my early iterations. I had planned to, in the next few weeks, be able to save the data in XML and read it from XML. The truth is, though, that doing so won’t actually make life any easier for me as a Customer: it’s very easy to write Java code to add more books to the database. Right now, the relevant code looks like this:

  Collection collection = new Collection();
  collection.createBook("Walter Benjamin", "The Arcades Project");
  return collection;

and I can just add more createBook lines as necessary.
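For reference, a minimal sketch of what that Collection class could look like (this is a guess at its shape for illustration, not the actual code; the Book inner class and getBooks method in particular are my inventions):

```java
import java.util.ArrayList;
import java.util.List;

public class Collection {
    /** A book is just an author/title pair for now. */
    public static class Book {
        public final String author;
        public final String title;

        Book(String author, String title) {
            this.author = author;
            this.title = title;
        }
    }

    private final List<Book> books = new ArrayList<Book>();

    /** Creates a book, adds it to the collection, and returns it. */
    public Book createBook(String author, String title) {
        Book book = new Book(author, title);
        books.add(book);
        return book;
    }

    public List<Book> getBooks() {
        return books;
    }
}
```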

And XML was always intended as a crutch: eventually, I’ll want to generate web pages on the fly, and it’s a bit silly to do that by reading in the entire book collection in XML format. Instead, it’s better to look up whatever book the user requests from a database. So why not skip the XML step, and go straight to using a database?

I haven’t made up my mind completely; I would like an excuse to get my hands dirty with XML, after all. But the truth is that I don’t need it here, and that, in places where I might need it (controlling my iPod, generating an RSS feed), having used XML here won’t help me directly. Also, XML is pretty simple conceptually, so I don’t get the feeling that I really need to get my hands dirty with it to have a basic understanding of what’s going on.

Unfortunately, I don’t know anything about the databases, so I’ll have some reading to do before I can implement those stories involving a database. So for now I’ll stay away from the relevant stories, and concentrate on stories that improve the web pages that I generate (more fields, more kinds of data, whatever) while I read up on the subject.

podcasts

September 2nd, 2005

I’ve subscribed to my first podcasts: Agile Toolkit and The Sound of Vision. It really is nice that I can enter the URL for an RSS feed into iTunes and it will go and fetch new shows for me. And the iPod is definitely the right place for this sort of thing: if the podcasts were just music, maybe I’d be willing to listen to them on the computer, but those podcasts are spoken word, and the verbal part of my brain is already quite busy enough when I’m using the computer.

I found both podcasts because they were interviewing bloggers that I respected. I’m not sure how long I’ll keep on listening to them – in the former case, I don’t get the impression that there’s a huge backlog of stuff to be added, and the latter one is more business-focused than I’m really interested in. But I’m interested in business issues enough to give it a try for a while, certainly.

One interesting thing about the Bob Martin interview in the Agile Toolkit: he says that there’s a minimum set of agile practices such that, once you have them, you’ll naturally start adopting the whole kit and caboodle. His candidates are:

  • Very short cycles.
  • An open office.
  • Test-driven development, both at the unit test and automated test levels.

Once you have those, he says, the rest will follow: TDD naturally leads to automated integration, an open office naturally leads to pairing, short cycles with automated tests naturally lead to the planning game. (Though later on he says that maybe the planning game is a fourth necessary seed.)

I really would like to experience an open office at some point. My group is still not completely sold on pairing all the time, though we’re moving in that direction, in a healthy fashion (more on that soon, I hope). I can see how upping the chatter level and having more colleagues in your field of view would help: if you can see somebody, it’s easy to ask them a question, and if people are talking about something that you have something useful to contribute to, then it’s easy to jump in, and both of those could naturally lead to pairing.

And, on a different note, in a noisy room it’s a lot harder to concentrate when working alone than when working with somebody else, so you might as well pair just to get any work done! In Peopleware, DeMarco and Lister talk about how cubicles are bad because people need a quiet place to get into a state of flow, so people should have offices. I tend to think that cubicles are indeed bad, but that there are two stable situations: an open work area or offices. In the former, you get the benefits of free information flow; in the latter, you get the benefits of quiet and privacy. But cubicles don’t satisfy on either count.

Not that our current layout is so bad: there are cubicles, but they only have walls on two sides, leading to a more open feel and encouraging more communication. (And less claustrophobia.) But I would like to try an open work area at some point. (With small rooms on the fringes for times when you need privacy.)

(On an unrelated note: lice are not my favorite of animals.)

first story

August 28th, 2005

I’ve implemented my first story; the results can be seen at The Arcades Project. (Of course, if things go as planned, then the appearance of that page will drastically change over coming months!)

I was pretty good about writing an acceptance test first. Which meant that I got to install Apache locally on my laptop, and learn how to configure that; there were some twists and turns, but I got it done. I got to learn a bit more about subversion, too; I got some error messages from subversion that I still don’t quite understand, but it seems to be working acceptably now.

And I’ve added some more planning web pages. I still need to add a bunch of stories to the product backlog, though.

dbcdb

August 27th, 2005

I wish that I knew more about certain aspects of modern computer technology, especially information-management aspects of technology. Examples of things that I wish I knew more about:

  • Java.
  • Ant.
  • Eclipse, especially its automated refactoring tools.
  • How to write a web page that doesn’t look like it was written a decade ago.
  • Web pages that accept input.
  • Web pages that are generated on the fly. By a trendy new language (i.e. not PHP); probably Ruby on Rails.
  • XML.
  • XSLT.
  • RSS/Atom.
  • Databases.
  • FitNesse.
  • Apache.
  • Web services.
  • Subversion.
  • GUI creation and design.
  • AJAX.

Also, there are things about my own information management that bother me. Examples:

  • I give information that I’m interested in to others (e.g. book ratings to Amazon) without keeping a copy of such information myself.
  • Book links in my blog are ugly, and point to an outside source.
  • I’m using Windows to get at my iPod, and it’s not as easy as I’d like for me to edit both its contents and the presentation of its contents.
  • It would be nice to keep (and make available) a list of books that I own or have recently read.

(Whenever the above says “book”, read “book / cd / video game / dvd.”) It would be nice to be able to fix some of these issues, while brushing up my agile development chops at the same time. (Especially in ways that they aren’t getting brushed up at work.)

Last winter break, I started on a project to address some of these issues. Which failed abortively, its only tangible output being a series of posts here on Java. Analyzing this with my finely honed agile management skills, what went wrong? Some things that I didn’t pay attention to:

  • Sustainable Pace. Realistically, I won’t have any time during the week to program on this. (Plan, maybe, but not program.) If I’m lucky, I’ll have a couple of hours at a time in the weekend to program. When I started, I was using a technology that I was unfamiliar with (Java GUI programming); I had a free week to learn about this, which probably would have been enough if I hadn’t spent a lot of that time playing GTA instead of programming. Which was the right decision, but it meant that I never got the product to a useable state that week, even in embryonic form, and it was really hard to make further progress once work started up again. So this time, I have to plan all of my work to fit into two-hour chunks that I can work on every week or two.
  • Frequent Releases. It would be even better if, at the end of each two-hour chunk, I could use the resulting functionality to do something new, that would be reflected on this web site. It would also be acceptable if there were changes that led to the same web site being produced in a different way: they wouldn’t be visible to the outside world, but they would be valuable for me in my Customer role (somebody who wants to produce a web page), not just me in my programmer role.

Putting these lessons together, I can’t get too hung up on the final technology that I’ll use. Which is probably correct for other reasons: eight months ago, I was thinking in terms of Java+XML as a final technology, while now Ruby+database seems more likely; once the implementation gets that far, my technological goals will probably have changed again! What I need to do, instead, is come up with an initial story that I can implement in two hours or less, that will result in a change that is visible on my web page, and that will hold up to potentially drastic technology changes under the surface.

If there’s a change on my web page, HTML will have to be generated; if I can carry it off in two hours, I will have to generate the web page by hand. (Or something morally equivalent, hard-coding strings into a program.) But it would be nice if, a year from now, the same link led to a web page that was generated on the fly, instead of a static web page. So let’s not have the link end in .html.

Fortunately, Apache has a tool (mod_rewrite) that allows you to manipulate how you generate HTML in response to a given URL. So there’s my first story: I’ll write a web page by hand, hide it somewhere on my web site, and learn enough about mod_rewrite so that the link http://www.bactrian.org/dbcdb/2 causes the contents of that web page to be generated. (The ‘2’ at the end instead of ‘1’ is because of the order in which I plan to implement the early stories.)
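The rule itself should be short; something like the following (the location and name of the hand-written file are guesses on my part):

```apache
# Serve the hand-written static page for the extensionless URL, so the
# URL can stay stable even if the page is later generated on the fly.
RewriteEngine On
RewriteRule ^/?dbcdb/([0-9]+)$ /dbcdb/$1.html [L]
```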

What shall I use to plan this? I’ll write up some stories, and put them somewhere, as well as a list of technical / planning tasks that don’t have direct Customer value that I’ll address as necessary to complete the stories effectively. I’ll ditch the concepts of iteration and releases, though: every story will be really short and lead to a release, so there’s no room for additional layers. Most of the Planning Game will go away. (I don’t think I’ll bother estimating my stories, either: I’ll just have a note as to whether or not I think it’s necessary to split them before implementing them.) No Pair Programming, obviously. Not much in the way of measurement artifacts, but I will make my list of completed stories available, with dates. And I’ll add a dbcdb category to this blog, so that I’ll feel embarrassed if I don’t work on this project enough to create posts that justify that category.

Hopefully I’ll implement the first story this weekend; if not, then next weekend. I’ll also try to get a few dozen initial stories written up over the next week or so. I’ve put up the list of motivations already; right now, it only contains information that’s in this post, but as I learn about more technologies (or stop caring about technologies), I’ll update it as appropriate.

stopped-up sink

August 25th, 2005

In our house, as in apartments that we’ve lived in, sinks periodically get clogged up. If the drain plug gizmo (what is the name for those things?) is removable, we try removing it and seeing if we can get stuff out of there, but in the last few places we’ve lived, they haven’t been. (There must be a way to remove the ones that don’t just pull out, but I don’t know what it is.)

So we use Drano. And then we use Drano a second time, because the first time never helps. If we’re lucky, the second time works.

But the second time didn’t work too well the last time we had to unplug the upstairs sink, so it got clogged again pretty quickly. Liesl got sick of this last night, and the plunger happened to be up there; so she used the plunger on the sink. Which worked great!

The question here is: why did it take us so long to think of trying this? How did we get this mental block where the plunger is the obvious thing to try for a stopped-up toilet, but we’d never thought of trying it for a stopped-up sink? Sigh. At least now we know.

pinkwaters

August 22nd, 2005

In grad school, Jordan introduced me to Daniel Pinkwater’s books. And they’re great! Well, many of them are great, and almost all of them are at least entertaining. (He’s written a lot of books.) For an introduction, I highly recommend 5 Novels; Jordan will be peeved if I don’t mention Lizard Music, and among his most recent work, I cannot praise Bongo Larry highly enough.

But he writes enough books that I don’t feel compelled to go out and buy all of them. So, over the last year or two, I’ve been going through my local library’s collection of his books. (Of which they have most, but not all.) Eventually, though, I ran out of his books. But right next to them were two books by Jill Pinkwater, his wife. (And illustrator of many of his books, though he actually illustrated his own early books.)

And they’re really good, too! Both Pinkwaters’ writings have quite a bit in common: very funny, in a world where things that we would consider surreal are quite commonplace, about people who would be considered social misfits in our world. In Jill Pinkwater’s books, some of their social misfit status leaks into the books: it’s quickly overcome in Buffalo Brenda (which I could imagine is a Daniel Pinkwater book), but Tails of the Bronx is a good deal more serious. Looking through Amazon, I see a few more (Cloud Horse, The Disappearance of Sister Perfect, Mr. Fred, a boring-looking cookbook). I don’t think my library has copies, but that’s what interlibrary loan is for…

paris arcades

August 21st, 2005

The Arcades Project has been sitting on my to-read shelf for a year or so. (I’ve finally started reading it, about which more later.) One thing that’s been bothering me since I heard about the book, though: I’ve been to Paris several times, and I don’t recall ever seeing an arcade there! Have they all disappeared, were there only a few to begin with, am I blind, or what? I have fond memories of arcades in Cleveland (though the terminology {a,be}mused me when I was younger), I bought a copy of The Wombles at a store in an arcade in London (why are those books out of print? Legions of loyal British readers, have the wombles passed out of the country’s imagination, were they ever popular?), but in Paris, nada.

It’s certainly possible that I’m forgetting having seen arcades in past trips to Paris; we did walk through one on this trip. Cour du Commerce St. Andre, the map suggests. Right near Le Procope, a centuries-old restaurant famous to us for a pasta recipe named after it (I should post that one of these months), where we had a quite nice meal, with quite good mozzarella (not as good as at La Ferme des Mathurins, but that’s hardly a pan) and a lovely muscat wine.

Anyways, fairly early on in the book there’s a quote on the matter saying

The most important of them are grouped in an area bounded by the Rue Croix-des-Petits-Champs to the south, the Rue de la Grange Bateliere to the north, the Boulevard de Sebastopol to the east, and the Rue Ventadour to the west.

So I pulled out my best-beloved map, and looked it up. After some amount of puzzlement (starting from the fact that Rue Croix-des-Petits-Champs runs north-south, so listing it as the southern boundary seems a bit quixotic), I found the area in question; and right there on the map, running through Rue de la Grange Bateliere, we see some streets bounded by dotted lines: arcades! Looking around, there are, in fact, several “streets” on the map that either are bounded by dotted lines or a sort of dashed lines; the legend says the former are tunnels while the latter are arches (“Passages sous voute”, which doesn’t mean anything to me); the next time I go back, I’ll have to figure out what the distinction is. Maybe the tunnels don’t have glass ceilings, and hence aren’t true arcades? (The one I did see in person is marked as an arch, and it did have a glass ceiling.)

Actually, it turns out that there’s more to be learned from that map, even though I’ve looked at it hundreds of times. A little further southeast, for example, I found some streets outlined in red, between the Forum des Halles and the Pompidou center; the legend confirms the obvious guess, that they are pedestrian streets. With another clump in the Quartier Latin near the river, near where we stayed this time and home to lots of indifferent restaurants and a lovely little artistic knick-knack/sculpture/toy store called Pays de Poche, at 73 Rue Galande. Are there any other clumps that I don’t know about? I didn’t see any after a cursory glimpse.

Returning to that clump of arcades on the map, my first reaction was that it’s in an area I’m not that familiar with, so no surprise that I wasn’t aware of Parisian arcades. Except that even that isn’t true: the time before last, we stayed in a hotel right near (maybe even on? I’m embarrassed to say I can’t remember) Rue de la Grange Bateliere, so we must have walked right past these arcades (/tunnels) dozens of times. Sigh.

And it must be true that some of the arcades have disappeared: apparently the Passage de l’Opera was destroyed to make way for Boulevard Haussmann, and just north of that are some department stores which may be located where arcades once were. (Or may not; maybe I’ll learn that later in the book.)

I should look and see what Christopher Alexander has to say on the subject. Arcades bring together a few nice ideas: pedestrian thoroughfares through buildings, that (like streets) have destinations (e.g. shops) on them, and that have glass ceilings. All of which are fine ideas. In the building where my father works (Kettering, at Oberlin College), there’s a pedestrian thoroughfare cutting right through it, but it really does serve just as a tunnel, with a normal ceiling and only a few doors on the sides. In Harvard Square, there’s a building I used to walk through quite frequently (Holyoke Center? I can’t quite remember what it’s called) that does have many useful doors (including shops) adjoining it. I don’t think it had a glass ceiling, though (which Google satellite maps seems to confirm), but it had a high enough ceiling that it gave much the same effect. For that matter, the Science Center also fits those criteria fairly well (and it does have a glass ceiling); it opens up in a way a street doesn’t, however, so there are fewer doors opening off of its main thoroughfare.

And Paris has adopted the “glass ceiling” idea to stunning effect the last few decades. (Side note: Google maps doesn’t cover France! How lame!) The Musee d’Orsay is one of my favorite buildings in the whole world. I can’t say that I’m all that thrilled by either the Louvre’s pyramids or the architecture of the area underneath it, but the glass ceiling does make it a wonderfully open area, and it’s a lot nicer than the courtyard above it. And the enclosed sculpture garden on the north side of the Louvre is my favorite part of the museum (at least architecturally speaking, though I enjoy it artistically speaking as well).

I should start noticing courtyards more, and figuring out what differentiates ones I like from ones I don’t like.

gran turismo 4

August 19th, 2005

Gran Turismo 4 is the first of that storied series that I’ve played. It’s almost the only driving game that I’ve played this generation (the exceptions being the forgettable F-Zero GX and a few rounds of Mario Kart with friends): I got pretty burned out on driving games last generation, and I needed some time off from them. I enjoyed driving games at the start of last generation: Extreme G was actually the first Nintendo 64 game I bought (admittedly, only because all the games I actually wanted were temporarily unavailable), about which I have no regrets, and Wave Race was quite nice, if not the crown jewel that it is frequently claimed to be. But it took me a little while to notice that IGN kept on giving 9 ratings to racing games that were at best good executions of a genre not known for innovations; by the time I figured that out, I’d lost my taste for driving games.

And GT4 was a good choice for my one driving game of this generation. I have no idea how they got graphics like that out of a PS2. The physics model seems better than in other driving games I’ve played: it’s the first game I’ve played that modeled drafting, for example. And I learned a lot more about driving from this game than from other games: in less realistic games, you just have to memorize the course and keep control at completely unrealistic speeds, and in other more realistic games I’ve played, I still succeeded by sticking to the insides of corners and braking enough to stop myself from skidding. But my approach to cornering (and in particular to using the whole width of the track) had to completely change when playing GT4, and honestly I still feel like I’ve only scratched the surface there. It helped that they had a nice set of graduated lessons in the form of driving tests to hone your skills.

Some parts of the game play didn’t work so well, though. The way a racing game traditionally progresses is as follows: you start off on easy tracks against bad computer opponents. After getting used to the game and the track, you win; you move on to harder tracks and/or harder opponents. You frequently get some sort of better car as a side effect of winning; that, combined with your increased skills, means that the new difficulties are enjoyable but surmountable.

This is cliched, perhaps, but not because it’s a bad idea: driving games by their nature give you a limited design space to play in, and there are only so many ways to get a good difficulty gradient in that design space. The GT series, however, is somewhat famous for ways in which it tries to enlarge the design space: it has lots and lots of cars and lots and lots of tracks (most real-world, some fictional). Which is quite impressive; it doesn’t push my particular buttons, but I acknowledge that it’s a significant accomplishment.

The thing is, though, it makes your progress through the game a good deal less linear. Your choices in cars and tracks start off somewhat restricted (by your budget and lack of driving licenses), but even at the beginning you have many choices, and the number only grows. There are many ways that a player can approach this; I decided to treat it mostly like a normal driving game, and start by playing the first designated beginners’ race.

This was fun: with the only car that I could afford and my initial incompetence and ignorance, I didn’t do well in the races at first. But as I learned the tracks and got better at driving, I placed higher, earned more money from my finishes, was able to upgrade my car (the game has a certain RPG-ish aspect), and with a combination of better skills and better car, was able to win that circuit.

So far, so good; what’s next? There were multiple next-level beginner’s courses (for the different engine positions that your car could have); I picked the one that matched my car’s engine. Like the previous circuit, it started out badly, but started to get better. The thing is, though, it didn’t get better very quickly; some of that could be blamed on my skills (though I don’t think I’m any worse at this sort of thing than your average video game player), but if half the field is pulling away from you on straightaways, ultimately you need to upgrade your car. And the fourth- and fifth-place money that I was earning wasn’t getting the job done fast enough.

So what was I supposed to do? I could have gone back to the easier circuit and earned more money from winning it again, but that would have been boring. So I looked around at other circuits; I found a “Japanese cars of the 90’s” one that I could enter, and surprisingly, it turned out that it was easier than the other circuit that I’d been going through, was in fact just at the right difficulty level for me.

So that was a good outcome; better if the game had made it easier for me to find an appropriate circuit to play, but at least I found one eventually. And with the prize money I got out of that, I was able to upgrade my car to an appropriate level to, with a bit more effort, win the previous circuit that I’d been trying.

The story doesn’t end there, however: when I won that Japanese circuit, I didn’t only get money, I got a car. You get all sorts of random cars when you win circuits; most of them are interesting for car collector geeks but useless in racing terms. This one, however, was very powerful. It had a different engine position than the car that I had been using, so I tried the second-tier beginner’s circuit that was appropriate to that engine type, and I found that I could blow away my opponents, even when driving very sloppily. Which is no fun, but what was I supposed to do? I suppose I could have tried to buy a worse car of that engine type, in an effort to get a reasonable challenge level, but that would have felt perverse; basically, that circuit was turned into a loss for me. And that wasn’t an isolated occurrence: that same car let me blow away several other races as well.

The story here wasn’t all bad: while I soon found an even more obscenely overpowered car that I could use to blow away more opponents, that mattered less in the higher circuits. What started to happen was that I would screw up on corners, allowing the other cars to gain a significant lead on me, which I would proceed to eliminate on the next straightaway, leaving us with a more or less level playing field. So to win the races, I had to memorize the courses, learn the appropriate speeds and locations to enter the corner, and execute correctly almost all the time. Classic good, challenging gameplay, in other words.

Ultimately, there’s a huge amount of depth to this game, and a lot of good gameplay to be found; I’m quite glad I bought it. The bad design of the player’s progression through the game is a serious flaw, however. Like several games I’ve played recently, I could have probably enjoyed playing this game for longer than I had, learning all of its intricacies, but given the breadth of video game choices that I have, I felt it was time to move on.

crayon shinchan

August 16th, 2005

I ran into a manga called Crayon Shinchan a few months ago; I used to be a bit embarrassed at how funny I found it, but I’ve given up on that, and just accepted that it makes me laugh out loud on a regular basis. (I mean that quite literally: I really do inadvertently laugh out loud a couple of times over the course of each volume.) It’s about a young boy (around six or so?), with a truly remarkable combination of innocence, bad behavior, and inappropriately adult remarks. The latter could be grating, but it works very well here.

It’s in an unusual format, at least based on my experience. I’m used to Japanese comics broken up into reasonably coherent episodes / stories that are tens of pages long, and to book-length Japanese comics. And I’m used to American comics in both of those formats, as well as newspaper-length individual strips. (I recently ran into a Japanese comic with more or less the latter format, Azumanga Daioh; is it common in Japan?) Crayon Shinchan, however, is divided up into three-page episodes. (Which are typically loosely connected into story arcs that are about 10-15 episodes long.) I’m not sure what to make of this, but it suits the mood of the comic; I don’t think it would hold up as well with longer stories, but three- or four-panel strips would be too short.

The variety of manga that’s available in the US these days is pretty impressive. I was at a Borders a week or two ago, and they actually had a rather better manga selection than the local comic book stores. I think part of the deal there is that the comic book stores skew fairly strongly towards male customers, while Borders doesn’t have that bias, and there are a lot of manga published in the US these days that are targeted towards teenage girls. (For that matter, I suspect that the Japan-oriented male youths of America don’t necessarily spend much time in comic book stores, either…) Some of which I read; I probably shouldn’t admit to liking Azumanga Daioh or Love Hina, but I do! (I guess it’s okay for me to admit to liking Banana Fish, though.) I’m sure that there’s still a vast amount of material that isn’t making it to the US, but three or five years ago I never would have dreamed of having access to the current range of material.

sudoku

August 12th, 2005

One of my coworkers pointed me at The Daily Sudoku. I’ve tried and enjoyed a few; I’m not sure how long I’ll keep it up, but I’m not stopping yet. So far I’ve only tried ones rated easy or medium (and, honestly, I can’t tell the difference between the two levels); apparently the hard-rated ones are a significant change. I’ll be curious to see if they make me think in interesting ways, or if they just make me go through long tedious searches to make progress. (I tried to order the book from the site – it seems clearly worth two pounds – but I ran into some strange PayPal glitch. Sigh. I thought this electronic payment stuff was supposed to work well by now?)

It reminds me of some other puzzle, but I can’t think of what. The common idea is this: say you have boxes where the choices for each box are 12, 12, 1234, 1234. Then you know that the first two boxes use up 1 and 2, even if you don’t know the order, so you can reduce the choices to 12, 12, 34, 34. Where else is this idea important?
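
Here’s a tiny sketch of that elimination rule in Java; the code and names are all mine, just to illustrate the idea (it’s what sudoku folks apparently call a “naked pair”):

```java
import java.util.*;

// Sketch of the "naked pair" rule: if exactly two cells in a group share
// the same two candidates, those candidates can be removed from every
// other cell in the group. Names and structure are my own illustration.
public class NakedPairs {
    // candidates.get(i) is the set of digits still possible for cell i
    static void eliminate(List<Set<Integer>> candidates) {
        for (Set<Integer> pair : candidates) {
            if (pair.size() != 2) continue;
            // count cells with exactly this pair of candidates
            int count = 0;
            for (Set<Integer> c : candidates)
                if (c.equals(pair)) count++;
            if (count != 2) continue;
            // the pair's digits are used up: remove them elsewhere
            for (Set<Integer> c : candidates)
                if (!c.equals(pair)) c.removeAll(pair);
        }
    }

    public static void main(String[] args) {
        List<Set<Integer>> cells = new ArrayList<>();
        cells.add(new TreeSet<>(Arrays.asList(1, 2)));
        cells.add(new TreeSet<>(Arrays.asList(1, 2)));
        cells.add(new TreeSet<>(Arrays.asList(1, 2, 3, 4)));
        cells.add(new TreeSet<>(Arrays.asList(1, 2, 3, 4)));
        eliminate(cells);
        System.out.println(cells); // [[1, 2], [1, 2], [3, 4], [3, 4]]
    }
}
```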

copyright office

August 12th, 2005

Just what is the copyright office thinking? I can’t imagine they’re doing this out of bad intentions, but it’s pretty depressing that they’re apparently clueless about this sort of thing…

donkey kong jungle beat

August 9th, 2005

It’s been a while since I discussed a video game I’ve finished, hasn’t it? Not because I haven’t been finishing games; I’ve just been busy writing about other things. (The video games du jour are Shenmue II, on my new Xbox (about which more later), when Miranda is around, and the stunning Resident Evil 4, when she isn’t.)

Oldest on the stack of finished games is Donkey Kong Jungle Beat. I was pretty excited when Donkey Konga, a music game with drums, was announced for US release: another sign that publishers are less likely to prejudge US customers’ tastes in Japanese video games. But when the game itself came out, I ended up not buying it: the song list was full of pop songs that I didn’t particularly like. (I’d much prefer video games with good music that I’ve never heard before; though why did I claim in the linked-to post that DDR is a Namco game? It’s by Konami.) I wouldn’t mind drumming to the Zelda theme, but that’s not enough to get me to buy the game.

Actually, what’s really a sign of the penetration of Japanese games into the US is that there are two drumming video games available here, the other being Namco’s Taiko Drum Master. Which has the Katamari Damacy theme in it, but even that isn’t enough to get me to buy the game by itself. (Even if Miranda and I still spontaneously sing it occasionally.)

But Nintendo had another use for their drum controller: a side-scrolling platformer called Donkey Kong Jungle Beat. Which had been getting positive mentions ever since it appeared at an E3; being a sucker for weird game ideas, I wanted to give it a try.

Not very good, I’m afraid. Ultimately, I just don’t have enough nostalgic fondness for 2d platformers to normally want to play them in preference to today’s much richer games: they’re not the sort of timeless simple pleasure that, say, a good puzzle game (Tetris) is. And while there are good games with very simple controls (e.g. Super Monkey Ball), the controls in DKJB felt to me like a gimmick. There are only three or four things you can do at any point, so when you get to a weird creature, you just hit the controls at random a little, find the magic effect of clapping (or whatever), and continue. If that doesn’t quite work, you have to manipulate the drums to jump in the air at the right location, and then clap. Whoopee.

And, to add injury to insult, my hands really hurt after playing it. Tip for all of you married people out there: take off your wedding ring before trying this game. But even after that, I suspect your hands need some toughening up. Miranda liked it enough that I went through the first eight of its twelve levels (in all of 4 hours or so: not a hard game unless you want to get as many bananas as possible), but I didn’t feel like replaying earlier levels to get better scores on them to unlock the last four levels.

A decade and a half ago, the gameplay would have been fine, and I would have had limited enough options (for reasons of game availability and finances) that I probably would have been happy to replay the levels over and over to earn the top medals on all of them. I’m happy that I can set my standards higher now.

code reviews, tasks

August 6th, 2005

I was unhappy with the result of our pair programming meeting for various reasons: we were all unhappy with how things were going, and I was pretty sure that we were doing something wrong, but I didn’t know what it was. We’d adopted short-term measures to ease some of the pains, but I didn’t see them as leading to a coherent solution that I’d be happy with.

After thinking about it for a day, I decided that our changes were leading in a direction that I certainly wasn’t happy with: while I’m still not sure of the merits and demerits of pairing, I am sure that it’s good for us to spend more time focusing on the quality of our code, and to spend more time in general talking about code. If we’re going to pull back on pairing, we should still try not to give up on that goal: so I instituted a policy that all non-trivial checkins would require a code review. (If the code was entirely developed while pairing, that counts as the code review, of course.) Code reviews are probably not quite as good as pairing for quality control, but they’re a lot better than nothing: I know that, when I was working on GDB, I got a lot of useful feedback from others’ code reviews, for example.

I felt better after that: people were talking more, the checkins were a bit cleaner. Not a lot cleaner, but that will come: editing, like any other skill, improves with practice.

A week or so later, we ran into another problem: the assignment that one of my team members was working on that week wasn’t done, it wasn’t clear to me when it would be done, and I wasn’t at all confident that I’d like the results when I saw it. (Of course, my lack of confidence may have been largely caused by my lack of information: maybe it was great code, I just had no easy way of telling.)

This wasn’t an isolated instance: when we estimated a story as taking a full week to accomplish, it would turn out to take more than a week most of the time. We were fooling ourselves with our estimates, and we were skimping on design: it’s one thing to be against “Big Design Up Front”, but that doesn’t mean that some amount of design isn’t appropriate.

And now a bunch of things clicked. I’d been aware for several months that we weren’t really planning in the XP way: the relevant issue here is that we were working exclusively in terms of “stories” (basically, features with user value that can be implemented in a week or less), but not breaking them down into “tasks” (individual technical steps necessary to implement the features, each of which can be accomplished in a single pairing session). When I first realized that we were doing the planning wrong, it wasn’t clear to me that this difference was a big deal, but all of a sudden introducing tasks seemed to solve several problems that we were having:

  • Breaking a long story into tasks should make it easier to accurately estimate the story’s duration, with a bit of practice: a six-task story will probably take longer than a four-task story, but that wouldn’t have been so obvious before breaking it up into tasks.
  • The process of breaking a story into tasks gives us a chance to talk about the story together and do an appropriate amount of up-front design.
  • If a task takes longer than expected (in particular, longer than a day), that’s an immediate warning sign that something unexpected has turned up. We can deal with the problem right then, by calling an impromptu design session and breaking up the task into smaller tasks as appropriate.
  • In the unhappy event that a story still takes longer than a week to accomplish, at least I’ll have a much better idea of its current status, because I’ll know what tasks have been accomplished and what tasks haven’t been accomplished.
  • It seems plausible that it will significantly improve our mood towards pairing: it’s not much fun showing up in the middle of somebody else’s project, working on it for a little while without really knowing what’s going on, and then leaving while that person continues. It’s a lot better if you come in at the beginning of a coherent project, work on it together for a few hours, and finish it.

We’ve been doing this for a grand total of a week now; it’s probably largely my imagination, but I’m a lot happier with how things are going. We actually had a pretty bad week in terms of completing stories (we were still underestimating how long long stories were taking), but the one problematic story was in much better shape: we’d finished 5 of the 6 tasks that we’d broken that story into, we knew the last task was turning out to be more complicated than we expected, so we found a coherent way to split it into two tasks.

In our weekly meeting on Friday, most of the stories were fairly well-defined, but one of them was pretty amorphous. So we spent about 20 minutes breaking it up into tasks, talking about pros and cons, with lots of people chipping in about what they remembered about the different pieces of affected code. At the end, there was general agreement that the story was significantly less scary than it had seemed before we started talking about it.

And maybe it’s my imagination, but I think I’ve been enjoying pairing more. Yesterday, for example, I had a very pleasant time writing a really solid class. I particularly appreciated my partner’s winces whenever I chose a bad name for a variable: joke all you want, but little things like that are important. (Incidentally, we also tried out programming by intention some more, with good results.)

Not everything is perfect yet, but I’m much more optimistic than I was. We’re still underestimating large stories, but hopefully tasks will give us a better handle on that. Significant issues still remain with pairing: in particular, our differences in familiarity with different parts of the code and in programming background make pairing hard, but I can deal with that, and those differences will lessen over time. As long as we have a plausible path for improvement there, I’m happy.

On the one hand, I feel a bit silly that we didn’t start using tasks a lot earlier: I should have been paying more attention to what the XP books were saying, because the authors of those books have a lot of useful experience. (Incidentally, it’s fascinating reading the XP mailing list.) And I’ll certainly keep on rereading various XP books to find more mismatches between our practice and their descriptions that might shed light on problems we’re having. On the other hand, making mistakes is a classic way to learn, and for good reason: I have a much more active grasp of this issue than I would have if we’d done things right from the start.

My next management issue, aside from monitoring this one: reading about Scrum, to see if we can use that as a blanket methodology for the entire software team (i.e. my group, the other two groups parallel to it, and my manager’s group). It’s compatible with but less specific than XP, and explicitly addresses issues involving multiple groups; with luck, it will be something we can all get behind. But I have some reading to do to learn more about it, to see if I think it is a good match for current and potential problems that the larger group has.

pair programming update

August 5th, 2005

About three months ago, my team started seriously experimenting with pair programming. It’s been more than long enough since then for us to take stock, so we had a meeting three or so weeks ago to talk about our experiences.

The results were mixed, and really hard for me to get a grip on. Some good things:

  • Pairing did help the quality of our code.
  • More people know more about more of the system.
  • The daily standup meetings that we started doing at the same time as we started pairing helped me (as a manager) keep much better track of what was going on midweek.
  • Sometimes, pairing with the right person could save a lot of time debugging an annoying problem.
  • A pair seemed more willing to ask for help quickly than a programmer working alone.

I might have forgotten a few (I’m at home, my notes are at work), but that’s the basic idea. The last one, in particular, interested me: I wasn’t expecting it, though in retrospect it makes sense. After all, the macho programmer ethos means that a single lone programmer is loath to admit that he can’t solve a problem by himself; if two programmers both can’t figure something out quickly, though, they’re much more likely to figure out that they need outside help. (When appropriate, of course, especially when there’s specific knowledge that they’re missing.)

The bad side (again, I might have forgotten a few):

  • It wasn’t at all clear that we were more productive pairing than when working alone.
  • We didn’t look forward to pairing.

The first of those isn’t necessarily a show stopper: we agreed that we were willing to trade writing less code for higher quality and more knowledge transferred, and that we weren’t in a situation where we needed to crank out as much code as possible. So it seemed plausible that pairing was a long-term productivity gain for us; still, it was somewhat disconcerting, since the literature suggests that it should be clearer that pairing is improving our productivity.

The second, though, is a real problem: I got the feeling that we (I, certainly) wanted to enjoy pairing, but something really wasn’t working right. And I couldn’t figure out what it was.

Our conclusion was that we saw enough good things that we wanted to keep on trying. But we needed to leave more breathing space, at the very least. We decided to start by drilling down on our feelings of where pairing was more productive and where it was less productive, and then during our standup meetings, we’d use those criteria to figure out who would pair at all that day, not assuming that everybody would always pair.

(It’s getting late, and this is as good a stopping point as any; I’ll post a followup bringing the story up to date in a day or two.)

dan johnson, shanghai crab

August 1st, 2005

I was kind of bummed when Erubiel Durazo got hurt, and Scott Hatteberg’s performance has certainly been nothing to write home about this season. (I still have no idea why he’s gotten the contracts he has from Billy Beane.) But Dan Johnson’s performance has been a pleasant surprise: I’d literally never heard of him, but after 174 plate appearances he’s slugging .500. Who knows how long he’ll keep that up, but I guess he isn’t a complete flash in the pan: looking at his entry in the 2005 Baseball Prospectus, I see “Johnson is ready to step in and take Hatteberg’s job”, and they certainly got that right.

We’re watching the shanghai crab episode of Iron Chef right now. I’m used to seeing live seafood there (driving nails through the heads of pike eels thrashing around on the cutting board), though seeing the poor crabs put live into a hot wok was a bit much. A first for me, though, was a crab with its shell off, in the process of being disemboweled, and you could still see its heart beating…

livres

July 31st, 2005

I did some book shopping in Paris. A bit silly, in these days of www.amazon.fr, but old habits die hard. And FNAC is still pretty cool, though not quite as impressive to me now as it was the first time I set foot in it.

I bought most of Bruno Latour’s books that hadn’t been translated into English, some comic books (standards: Tintin and Asterix), a few Barbapapa books for Miranda, and SGA1. I felt a little silly about the comic books, not about the ones I did buy (they are both deservedly classic series) but because I didn’t look for anything else: France is one of the great comic book-producing nations, and I walked by several good-looking stores, but I just wasn’t in a very inquisitive mood, I guess.

The new printed version of SGA1 turned out to be the same version that’s available online. Still, it’s nice to have a copy that’s easy to hold in your hand. Who knows when I’ll get around to reading it, but I suspect I will at some point over the next year or two (more likely it than some of the more experimental Bruno Latour books); I think/hope it should be at a level that I can read it without excessive effort, and it’s an important part of mathematical history. I don’t want to lose contact with math entirely, after all, and reading classic works seems like a good way to keep my brain active.

The whole Grothendieck reprint story has to be seen as a victory for the forces of good. I spent some time this weekend reading Free Culture, by Lawrence Lessig, and now I’m really depressed, but it’s great to see some people saying that the current situation is ridiculous and snubbing some of its more odious aspects.

The technical bookstore that I patronized seven years ago seems to have disappeared, more’s the pity. But it remains the case that general-purpose bookstores in Europe have much better math sections than their counterparts in the US. I’m not sure why that is, but I’m not complaining. It was fun browsing; a lot of familiar titles, and some new titles on familiar subjects. Nothing new and exciting that leapt out at me; in a decade or two, maybe I’ll go back and catch up on some of the advances in the field. Probably not, to be honest, but who knows what the future will bring; I’ve enjoyed spending the last two or three years catching up with (some of) the advances in computer science that I missed over the previous seven or eight years, after all.

programming by intention

July 29th, 2005

Ever since I read Refactoring to Patterns, I’ve been thinking that I should use Compose Method more. (I should really reread Smalltalk Best Practice Patterns to see what other low-level patterns I’ve missed.) But I’m too timid to perform quite that drastic surgery to the thicket of code that I’m working on.

I just finished Extreme Programming Installed, though, and the authors talk about an interesting way to develop your code so that the methods are nicely composed. It’s called “programming by intention”, and works as follows: whenever you sit down to implement a method, you simply write down a method call explaining what you want the method to do first, another one explaining what you want it to do second, etc., without worrying yet about whether there are, in fact, methods with those names. If there aren’t, you then go and implement those methods. (Again programming by intention, though it should stop after two or three levels.)

I tried this yesterday, and it was great! I wanted to write a method that parsed a series of data structures; I had some ideas about how the low-level details would work, but I decided to just put those out of my mind and program by intention. We were parsing a sequence of data structures for as long as data remained, and the conditions for when data remains were slightly nontrivial in this context, so I started by typing (more or less, details are changed):

  while ( dataRemains() ) {

Each data structure starts with a type field, and a length field, both expressed as a multibyte value that I hadn’t yet had to parse. So:

    int type = nextMultibyteValue();
    int length = nextMultibyteValue();

Next, we start printing out the data. We’d like to output a string representation of the type codes, so:

    printTypeCode( type );

After this, we needed to output the data as a sequence of bytes, with its length given by the number we just read; I already had code to do that, so I just called that code:

    printNextBytes( length );
  }

Once I’d done that, I implemented dataRemains, nextMultibyteValue, and printTypeCode; each of them was easy to implement now that I wasn’t thinking about anything else. (And I knew I wasn’t wasting my time because I’d already shown that, once those were implemented, I’d have exactly the functionality I needed.) And the resulting methods looked great (though Compose Method suggests that I should have gone further and extracted the entire body of the loop into a method, which probably wouldn’t have been a bad idea).
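
Put together, the whole thing might look roughly like this. The method names are the ones from the snippets above, but the bodies, and in particular the multibyte encoding (which I’ve guessed as a 7-bit varint), are only my illustration, not the actual code:

```java
import java.util.Arrays;

// A sketch of how the composed methods might fit together. Method names
// come from the post; the bodies and the encoding are my own guesses.
public class RecordPrinter {
    private final byte[] data;
    private int pos = 0;

    RecordPrinter(byte[] data) { this.data = data; }

    void printRecords() {
        while (dataRemains()) {
            int type = nextMultibyteValue();
            int length = nextMultibyteValue();
            printTypeCode(type);
            printNextBytes(length);
        }
    }

    boolean dataRemains() { return pos < data.length; }

    // Assumed encoding: 7 bits per byte, high bit set on all but the last.
    int nextMultibyteValue() {
        int value = 0;
        int b;
        do {
            b = data[pos++] & 0xff;
            value = (value << 7) | (b & 0x7f);
        } while ((b & 0x80) != 0);
        return value;
    }

    // Hypothetical mapping from type numbers to names.
    void printTypeCode(int type) {
        System.out.println("type: " + (type == 1 ? "STRING" : "UNKNOWN(" + type + ")"));
    }

    void printNextBytes(int length) {
        System.out.println("data: " + Arrays.toString(Arrays.copyOfRange(data, pos, pos + length)));
        pos += length;
    }

    public static void main(String[] args) {
        // one record: type 1, length 3, then three data bytes
        new RecordPrinter(new byte[] {1, 3, 10, 20, 30}).printRecords();
    }
}
```

Even in sketch form, the nice property is visible: printRecords reads as a statement of intent, and each helper is small enough to get right on its own.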

This dovetails very well with test-driven development. One important benefit of TDD is that it focuses your mind on doing one thing at a time: either you’re focused on writing a test to express your next goal, or you’re focused on getting the test to pass, or you’re focused on cleaning up your code. Programming by intention, in turn, helps narrow your focus during the second of those steps: while you’re getting the test to pass, concentrate on what you want your implementation to do on a conceptual level, then drill down and repeat.

Side note: in a recent post on the XP mailing list, Kent Beck talks about how top down / bottom up isn’t a very useful dichotomy for him. Which I agree with to some extent, but programming by intention suggests that a particular form of top down programming is very useful when programming on a small scale. I’ll have to think about the extent to which this is the case at other levels of XP: is top-down the way to go when you’re trying to get an acceptance test to pass, for example? (Probably on the design level, but not on the implementation level, because you’d go far too long without working code.)

upgrade finished

July 28th, 2005

I spent a little more time playing around with doing the upgrade piecemeal; it turned out that, while there were some pleasant groups of packages that came together in a clump of 10-20, most packages either were happy to be upgraded individually or were part of a huge clump that required a few hundred packages to be upgraded simultaneously. (Upgrade ftp, then libreadline has to be upgraded, then all other CLI programs have to be upgraded, and they pull in all sorts of random libraries to upgrade, etc.) And once that happens, you might as well upgrade everything. So I did; worked fine. (I’m still planning to go and look through my list of installed packages just to see what I should consider removing, though.)

I’m a little annoyed at their “Fedora extras” thing. At first, I was happy because it meant that I could get galeon from them instead of having to find it at another repository. (Good thing, too, because the repository I had been using for that doesn’t seem to have an FC4 version available.) But it turns out that they don’t bother to keep the extras repository in sync with their other repositories; they’ve done an upgrade of mozilla since their last galeon upgrade, so right now I can’t install galeon at all because there’s no easy way for me to get the old mozilla version instead of the new one. Sigh. Still, they’ll probably work out the kinks over the next few months.

upgrading to fc4

July 26th, 2005

As threatened after my last OS upgrade, I’m upgrading to FC4 relatively soon after its release. This time, the release notes are very clear about the easiest way to upgrade: install a single RPM (which basically tells yum to look for FC4 packages instead of FC3 packages), and then do yum upgrade.
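
In other words, something like the following; the exact filename of the release package here is my reconstruction, not copied from the release notes:

```shell
# Point yum at the FC4 repositories by upgrading the release package
# (the filename below is illustrative; check the release notes for the real one):
rpm -Uvh fedora-release-4-2.noarch.rpm

# Then let yum pull in the new versions of everything:
yum upgrade
```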

I’m actually not quite doing that: since I’m not sure how long it will take to download all that stuff, I’m trying to do it piecemeal. Which is sort of a fun game: sometimes, if I want to upgrade a single package, it just upgrades that package plus maybe a handful of others, but sometimes it indirectly pulls in hundreds of other packages.

Unfortunately, there’s some sort of version problem with galeon, my web browser of choice. (It’s included in their ‘extras’; maybe they don’t rebuild those as frequently as they should?) So I’m using firefox for now, which is fine. And there’s some sort of dependency failure with certain java-related packages; I’m not sure what the deal is with that, but for now I don’t mind just removing the packages in question.

I expect that I’ll be doing this over the course of the next week or so; gives me something to do, I guess.

(more baseball)

July 25th, 2005

Despite what I said a week and a half ago, maybe the A’s are going to make the playoffs; they’re tied for the lead (and about to take the lead) in the wild card, after all. Even the AL West title seems not out of reach right now. They will, of course, cool off eventually, but they’ve shown over the last few years that they’re more than capable of ridiculous second-half performance. (Is that luck, or is that a skill that some teams or players have? Any studies one way or another?)

Too bad that the Indians are going in the opposite direction, and are so much further behind their division leader…