miscellany: fedora, work weeks, ddr, pujols

April 2nd, 2005

I was hoping that I’d be using Fedora Core 3 the next time I wrote an entry here, but ’twas not to be: I can’t get the damn CDs written. I’ve eliminated all the variables I can: two different CD writers, two different CD readers, two different sources of blank CDs, two different downloads of the images (both md5 summed). Of course, it’s the same images both times, but I doubt that the Fedora folks are distributing images that are incapable of working. I’m not sure what my hypothesis is now: I guess my laptop’s CDRW drive must be having problems, but that can’t be the only issue, otherwise everything would have worked fine at work.

Sigh. Why do I have to burn them to CDs in the first place? I guess now I’ll explore avoiding that altogether. Also, I forgot to mention another annoyance yesterday: even if this does work, I’ll have spent all this effort effectively downgrading many/most of the packages on my computer to their state at the time of FC3’s release, and then I’ll have to spend a few hours upgrading the computer again.

Counterpunch has been talking about work weeks recently; one of the least emphasized practices of eXtreme Programming is the forty-hour work week. A good practice, which I try to stick to in my team. I do wonder, though, how much the rule came to be because it’s a good idea for humane reasons, as opposed to a good idea for production reasons. Certainly the former is by far the most important reason for me: I love programming, I thoroughly enjoy my job, but I’m not about to let it start squeezing out my family life, or for that matter my own personal non-work intellectual interests. And Kent Beck does admit that those reasons are important to him, too. But XP does do a pretty good job of making a case for it in terms of production reasons: they work very hard to have people write the highest-quality code they can all the time, and it’s not too much of a stretch to imagine that you can’t keep that up if you regularly work evenings or weekends. (And, of course, they have a C3 project anecdote to back that up.)

And I’m not sure the “treat people humanely” argument is such a bad one, even in productivity terms. It’s expensive to replace people, in terms of money, time, lost knowledge, and lost morale. I tend to think that those are the sorts of costs that get underestimated, and that a company that took them seriously could get a real competitive advantage over other companies. (Also, while we’re on the subject, what’s so magic about a 40-hour work week? How about a 35, or 32, or 30-hour work week? The 40-hour work week was one of the great accomplishments of the 20th century; we should be ashamed that we’ve been going up over the last half-century instead of going down.)

We went to an open house at Miranda’s daycare today. In one of the rooms for bigger kids, they have a big TV with a Playstation and a couple of Dance Dance Revolution mats, so I watched some kids play that for a while. (One of them was pretty good, too, certainly better than I am.) I haven’t dragged out my DDR pad since we moved into this house, and I’m not sure why: it’s a fun game, and it’s even decent exercise, so I could have played it on days when I was supposed to jog but it was too wet for me to be thrilled about going outside. Actually, I expect that I’ll hit a bit of a void in video games soon: there aren’t a lot of games coming out soon that I’m excited about, and some of the recent releases that I am interested in are ones that I don’t want to play while Miranda is watching, which drastically cuts down the time when I can play them. (Especially since I’m still playing through Grand Theft Auto.) At one point I expected to fill this void by finally getting around to buying an Xbox, but now that the rumors are that the Xbox 2 will be backwards compatible, I’m holding off on buying one until I know for sure one way or the other.

Albert Pujols hasn’t struck out all spring. (.455 batting average, slugging over .900). Apparently this isn’t all that rare, actually: Eric Young did it just two years ago. Still, he’s really good. Nice to see Jordan last week; too bad that the A’s and O’s are opening against each other…

os upgrade and incremental development

April 1st, 2005

Last year, I upgraded this computer from Red Hat 8 to Fedora Core 2. It was a bigger OS jump than I perhaps would have liked (skipping two OS versions), but I’ve now stuck with FC2 for a while, even though FC3 has been out for several months. At first, I was planning to skip FC3 and jump to FC4 soon after it comes out, but now I’m planning to upgrade to FC3 soon (this weekend, hopefully).

The hope here is that OS upgrades are sufficiently painless that the benefits of taking small leaps, always using a supported OS (I’m not sure FC2 will be supported once FC4 comes out), will outweigh the fear and glitches that come from doing the upgrade. Now that I subscribe to fedora-announce-list, I realize that lots of FC2 package upgrades these days come in parallel with FC3 package upgrades. Which means that

  • The difference between FC2 and FC3 isn’t all that great.
  • To the extent that the upgrades don’t come in parallel, the longer I wait, the bigger the chance that switching OSes (whenever I get around to doing it) will cause serious disruption.

On a related note, I’m becoming more and more of a fan of incremental software development every month. Why can’t we develop software in such a way that we always have a working version? And the truth is, as the eXtreme Programming people have taught us, we can develop software that way, and it’s really useful to do so. These days, at work, I get nervous if I have a modified source tree that doesn’t get checked in for two days straight (and if it goes longer than that, I see it as a sign that the code should be thrown away, rather than an excuse to dig a deeper hole).

So why not apply the same philosophy to OS development, to OS upgrades? The version of FC2 that I’m running now is pretty different from the version that I first installed; every day or so, I do ‘yum update’, and a few packages get upgraded. And many of the updated FC2 packages are the same as the FC3 packages; I’m not sure how different the version of FC2 that I’m currently running is from a current FC3 installation, but I don’t think they’re all that different. So why not go whole hog and release all FC3 packages on FC2, eliminating the difference between the two OSes?

The big issue is, of course, incompatible changes. If a key library changes its major version number, do you have to upgrade all the packages that depend on it? Or do you leave the old version in place, for old packages to link against? If the latter, when does the old version go away? And libraries are the easy case, because they have a built-in mechanism for having multiple incompatible versions installed simultaneously – if gcc suddenly changes from 3.3 to 3.4 and all your C++ code stops compiling, you might be a bit annoyed. (Or you might be grateful that your nonportable C++ code is being flushed out, of course.) I don’t think this is rocket science, though; I’d be unhappy if, say, 5 years from now I still have to do OS upgrades in one big hunk. (Then again, it might not happen with Fedora: it’s run by Red Hat, which has reasons to keep upgrades non-incremental in its enterprise distributions.)

So far, actually, I’m a bit stymied by just getting the CDs burned: I keep on trying to burn CDs, and they keep on failing the mediacheck stage. Sigh. Which is a perfect example of non-incremental upgrades getting in my way for stupid reasons. The release notes make me wonder if it’s possible to download the isos, mount them from the hard drive (instead of burning them first), and upgrade directly from them somehow. But they don’t give any instructions for doing all of that, and I don’t feel like figuring it out by myself (and I’m not at all sure that it’s possible).

miranda cooking

March 30th, 2005

For the last month or so, Miranda’s been really into helping out with cooking dinner. I’m not quite sure what triggered it; part of it, I suspect, is that with her current bed time, she doesn’t get to spend much time with us in the evenings, and the best way to maximize that time is for her to help us with dinner, since we certainly can’t play with her while cooking! Also, the week before she started helping so much, the cooking segment at school involved her using sharp knives; this may have given her more of a sense of power and accomplishment. (We don’t let her use sharp knives at home, for what that’s worth.) (One of the many nice things about PACT is that kids get to do stuff like cooking – basically, whatever parents are interested in teaching, kids get to do!)

Actually, though, she’s been cooking for a while, and doing it much more creatively than Liesl and I ever do. She designs her own desserts, and they can be quite distinctive. The basic model is ice cream, chocolate sauce, marshmallows, and a couple of colors of sprinkles, but she quite frequently substitutes in other ingredients (chocolate bars, cookies, fruit, whatever else she thinks of). Not always the most coherent of dishes, but they’re fun to eat (and fun to help her with), and I’m really impressed with her desire to design them.

stan freberg

March 25th, 2005

I was just listening to a collection of Stan Freberg singles. Satirical musical comedy from the 1950’s; pretty good stuff, though it is, of course, somewhat dated.

Reading through the booklet that came with the CD, though, it’s amazing how much pop culture we’ve lost from only a half-century ago. A few of us have heard of Stan Freberg and like him; these records were real hits at the time, however, with (for example) St. George And The Dragonet / Little Blue Riding Hood being apparently the fastest-rising single in the history of the record business up to that time. (That time being 1953; on the other hand, it only spent 4 weeks at #1, so it was perhaps a bit of a flash in the pan even at the time.)

But he did more than put out a few comedy singles. (Apparently quite a lot more, actually: there are 21 on the CD, but it claims that there are many more where they came from.) For example, it says that “He and Daws [Butler] wrote and performed as principal actor-puppeteers for a live half-hour show [“Time for Beany”] every weekday for the next five years. … The show was popular with all age groups, went on to win three Emmys and a Peabody”. This is a show that apparently produced hundreds of (over a thousand?) episodes, and was well received, but I’ve never heard of it, and there’s almost no media available for it. (Are most of the shows still in existence, or have the tapes been lost?) I doubt the show has aged very well, but that’s still a real shame.

At least modern media is digital, so it’s much more likely that there are copies squirreled away somewhere. If only copyright law could get changed so that people could, say, legally get their hands on old video games that are no longer for sale. One of these decades…

school closure: one more year

March 23rd, 2005

The board finally voted last night. Actually, they voted on two things: they changed their vote of a month ago, and agreed to not close any school this year. And they voted on which school they would close next year: they’ll close Slater (my daughter’s school), PACT will move to Castro, but the rest of Castro will stay as-is (instead of moving the dual immersion program away from Castro or closing the neighborhood strand). They’ll try to get a third magnet program at Castro eventually.

All in all, I think the vote went about as well as I could imagine. I’m obviously quite happy that they’re not closing any school this year. I’m sad that Slater is targeted for closure a year from now; but I can’t honestly say that the proposal they approved isn’t the best one for the district as a whole. In particular, it’s the only proposal that actually had a positive vision for Castro, that didn’t treat Castro as a problem to be swept under the carpet somehow.

I hope that something will happen over the next year to remove the need to close a school next year, too, though I can’t say that I’m optimistic. So I’ll have to do what I can to make PACT’s probable move to Castro a smooth one. But first, a break; it’s been a busy last couple of months.

(A busy one for many people: I have been extraordinarily impressed with the way the Slater community behaved throughout this process. A lot of people worked very hard to get us this result; my heartfelt thanks to all of them.)

blogosphere

March 18th, 2005

Even though I’ve been blogging for half a year now, I get the feeling that I’m not doing it “right”, or at least I’m not doing it the way normal bloggers do. Whenever I read other people’s blogs, they’re usually taking part in actual conversations: I dipped into several blogs a couple of weeks ago, for example, and learned that apparently all the hip bloggers are supposed to have an opinion about Google’s Active Toolbar, and were linking to each other’s arguments, whereas I’d never heard of the thing. Oops.

I’m being flip, of course, but I really do like following the links in other people’s blogs; it increases the chances that I’ll run into something both interesting and unexpected. It reminds me of the early days of the web, when the web sites that were common cultural references were much more individual, idiosyncratic efforts (I still read Dr. Fun regularly…): there was always something new and neat around the corner, but if you kept on tracing through new stuff, you’d find references back to familiar ground. Kind of like Usenet: a huge amount of stuff there, with lots of subcultures, but you’d quickly recognize the regulars on the groups you read, and you’d occasionally see those same regulars in other, unrelated groups.

Still, my lack of links is largely just the way I am: while I do spend lots of time thinking about others’ works, those works aren’t particularly likely to be on the internet; the Amazon links that I provide are just a pathetic pretense of an attempt at electronic reference. Better to choose my topics based on what I actually spend my time thinking about, instead of what happens to be on the internet; there are lots of other people who do the latter much much better than I could.

But part of the reason was technological. When I read other people’s blogs, I often found them interesting; but the irregularity of their updates meant that I didn’t really want to add them to my list of links that I click on daily. (I actually did most of my blog reading at work, largely because Jonathan Schwartz, Sun’s president, has a good one which frequently links to interesting stuff.) This problem, however, has a well-known solution: RSS. I’d put off reading RSS feeds because Galeon, my browser of choice, doesn’t understand them, and I didn’t want to switch browsers. And I wasn’t sure that RSS reading fit most naturally into my browser: better, perhaps, to read RSS feeds in my mail/news reader, Gnus. And, while I’d heard about people using Gnus to read RSS, I couldn’t find it in the manual (as packaged with XEmacs, or maybe it’s Fedora Core’s fault).

A month or so ago, though, I got fed up with this situation, and did some poking around. It turned out that XEmacs was distributing a slightly out-of-date manual; when I looked at the version of the manual available online, it was clear that the version of Gnus I was using really did support RSS. But when I followed the instructions in the manual, it completely failed to work! Fortunately, gnu.emacs.gnus came to the rescue, and a few G R’s (the Gnus command for subscribing to an RSS feed) later in my *Group* buffer, I’m subscribed to RSS feeds, and happily reading blogs regularly.

Not a lot of blogs, though. (As you can see: for now, I’m putting the ones I subscribe to on the links list on the right side of this blog.) I hear about RSS aggregators, but I haven’t yet felt a need for one. (Good thing, too, because I don’t know how to do that in Gnus, though it’s probably possible.) The list will probably grow, though, because there’s actually another weird feedback loop going on here: when I’m in an authorial mood, I log on at home more frequently than I used to, which means that I quickly work through my old, familiar list of regular links (and my regular list of newsgroups), which means that I’m looking for more stuff to read online. (Then again, I might put a damper on that feedback loop by, say, spending less time on the computer at home, or spending more of my computer time at home programming.)

school closure: second castro meeting

March 17th, 2005

Another meeting at Castro last night. Not too much excitement in the community comments. I did admire (?) the chutzpah of a certain group of parents in the dual immersion program who talked about how horrible it was to close a school in that community, and then floated a plan which would turn the school into a collection of magnet programs, closing down the neighborhood program that kids in the community actually attend. (Not that they couldn’t attend the magnet programs, they just wouldn’t get priority.) I liked the guy who talked about how nobody is talking about closing Huff, despite its being as segregated as Castro, even though leaving Huff open mainly helps about a hundred kids in its neighborhood, all of whose parents have multiple cars to drive their kids to school anyways…

I actually missed the most interesting part, which was the budget discussion and voting. The district’s finance director no longer believes that they’ll be able to rent out a school next year if they close one. So the new budget doesn’t include any actual revenue from closing a school, and is mum on the issue of whether or not they’ll reduce costs by closing a school (and, for example, eliminating jobs).

Which is a ray of hope. If they close a school, I still tend to think that they’ll close Slater. (And I can’t say I have an informed opinion about whether it would be less harmful to close Slater or Castro.) But maybe the budget news will give one trustee an excuse to shift her vote away from closing a school next year…

down with the State

March 16th, 2005

When we last left our refactoring saga, I was regretting having done a State extraction too early, and was about to throw it out, doing some more class extractions first. Which is what I did, and it was clearly the right decision; I now have some significantly smaller classes, and they’re a lot easier to test. Not perfect yet: one, in particular, has a .cpp file with about 500 lines, which is larger than I’d like, and I’m not completely confident in my unit tests for that class. But it’s a significant improvement: one turning point is that the tests have been easier to write than I expected, rather than harder to write.

So yesterday, I tried again to do the State extraction. And, again, I’m going to throw that work away! But I’m definitely learning: this time, I only went two hours into the refactoring before throwing away my work, which is much better than throwing away three days of work. And, honestly, that two hours of refactoring was really useful, even though I’m throwing it away: maybe I could have noticed another class to extract without doing that speculative refactoring, but I wouldn’t count on it. Spending two hours and understanding my code better at the end of it sounds like a pretty good deal to me.

At this point, I am wondering if I will ever get to use State, though…
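
(For anybody who hasn’t run into it: the State pattern moves mode-dependent behavior out of one big class and into one small object per mode, with the big class delegating to whichever state object is current. Here’s a minimal sketch of the shape in Java – my actual code is C++, and all the names here are invented for illustration:)

    interface ParseState {
        // Each state handles one line and returns the state to use next.
        ParseState handleLine(String line, Parser parser);
    }

    class HeaderState implements ParseState {
        public ParseState handleLine(String line, Parser parser) {
            if (line.length() == 0) {
                return new BodyState(); // a blank line ends the header
            }
            parser.recordHeader(line);
            return this;
        }
    }

    class BodyState implements ParseState {
        public ParseState handleLine(String line, Parser parser) {
            parser.recordBody(line);
            return this;
        }
    }

    class Parser {
        private ParseState state = new HeaderState();

        public void parseLine(String line) {
            // All the mode-dependent branching lives in the states now.
            state = state.handleLine(line, this);
        }

        void recordHeader(String line) { /* ... */ }
        void recordBody(String line) { /* ... */ }
    }

The win, when there is one, is that each mode and each transition is small enough to read in isolation; the cost, as I keep discovering, is a cloud of little classes that has to pay for itself.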

school closure: castro community forum

March 13th, 2005

At the last school board meeting about school closure, they put forward a plan where the school to be closed would be Castro instead of Slater. (The latter being the school that Miranda goes to.) They wanted to give Castro parents time to complain about this, so they’re holding a couple of community forums at Castro, the first of which was Thursday. (Excellent idea – if only they’d done the same thing at Slater…)

Pretty interesting. About 30 minutes into the forum, there was a huge flap set off (mostly) by the fact that the board wouldn’t allow bilingual speakers to do their own translation. They were providing translations of everything into whichever of Spanish and English the speaker didn’t speak in; and they insisted that all speakers go through the provided translators. Which led to an argument which ended with a five-minute break being called in the meeting. Apparently (I’ve subsequently learned) the genesis was that, in the past, people’s own translations haven’t always been accurate (and, in particular, have contained derogatory comments in the Spanish versions but not in the English versions), but the current policy seems like serious overkill for handling that issue: as it is, they’re guaranteeing that the translations are inaccurate. I would feel that way even if the translators were doing an excellent job; I’m sure they were trying their best, but they left a lot out, explicitly resorting to summarizing much of the time.

Anyways. A few people complaining about us Slater whiners. A lot of people talking about how wonderful Castro is. A lot of people talking about how awful an idea it is to close any school. (The last few weeks have seen the district’s financial officer say that she can’t count on being able to rent out a school next year if they close it, and have seen more projections of increasing student enrollment in a couple of years.) Several charges of discrimination. Right before the end were two very strong speeches by Slater teachers. One of the speeches might not have been the most politic in the world, but was interesting to me: our school district, the Mountain View-Whisman school district, got its ungainly name from the merger of two school districts three or four years ago; according to that teacher, the current behavior, motivated by budget fears and No Child Left Behind fears, is much more characteristic of the Whisman school district’s pre-merger behavior than of the Mountain View district’s pre-merger behavior. (And it’s not a coincidence that PACT, the program that we’re part of, came out of the Mountain View district.) The other speaker did a great job of pulling all our points together, switching seamlessly between English and Spanish, and bringing the whole room to its feet with a standing ovation at the end.

One more community forum on Wednesday; a school board meeting the week after that. I think the decision is supposed to happen then, but I could be misremembering, and of course we’ve already seen that decisions don’t happen when scheduled.

processor speed

March 7th, 2005

I recently read an article by Herb Sutter that claims that the long rise in processor speed is finally coming to an end. I certainly believe that this is going to happen eventually, maybe within the next decade, because we do seem to be approaching some physical limits; I didn’t think that it was happening quite yet, though. Sutter does present some interesting evidence in favor of his argument: in particular, there’s a graph which shows that the clock speed of Intel’s processors actually stopped increasing two years ago, and that, if the trends from before 2001 had continued, we’d now have CPUs approaching 10GHz instead of less than 4GHz.

And it’s true, Intel’s march toward higher clock speeds has stalled. The thing is, though, I’m not sure how much weight to give to that argument. My understanding is that, with the Pentium 4, Intel decided that people paid more attention to clock speed than to other metrics of CPU performance, so they pushed chips’ clock speed even if, say, it sometimes took more clock cycles to carry out the same action. (Which is why AMD started marketing their chips by translating their performance to Intel’s instead of touting their own clock rate.) Given a choice between, say, a 2GHz Opteron and a 3GHz Pentium 4, I know which one I would take. So maybe Intel was playing tricks that are catching up with them now; I’d like to see graphs like that from other manufacturers.

And if you look at the Intel graph in that article, the current plateau isn’t the only change in behavior – around 1995, the rate of clock speed increase actually increased. If you extend the older line instead of the newer line, then Intel’s current clock speeds don’t look at all out of line. And it does seem that other manufacturers will be hitting 4GHz soon – for example, the recent press releases about IBM/Sony/Toshiba’s Cell processor claim that it will reach that mark. (Admittedly, I’m not sure when it will be released, or how long after release it will take for 4GHz models to appear.)

Still, I do buy the larger point of the article, that to continue to get increased performance, we’ll soon need to switch to other techniques, of which the most interesting is multithreaded code on multicore processors. As a loyal Sun employee, I have to get behind this: my group at Sun is eagerly awaiting the release of dual-core Opteron processors, and Sun’s forthcoming Niagara SPARC processor is going to be a lot of fun to work with. I hope that, one of these years, I have an excuse to program in a multicore environment; my current software team does multithreaded programming, but we do it in a fairly naive way. (And there’s nothing wrong with that, to be sure: simple solutions are better, as long as they get the job done.) Programs are already marching less in lockstep and acting more like a collection of semi-autonomous agents; how far can we take this? Is the number of processors in a computer going to grow exponentially? Are the processors going to get simpler and less powerful while this happens, or is each individual processor going to support as complex a piece of software as those on today’s single processors? Either way, it’s going to be very exciting seeing what complex dances the software on these processors traces out, and what unexpected phenomena arise from that.
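
(To make “a collection of semi-autonomous agents” slightly more concrete, here’s a minimal Java sketch – an invented example, not anything my team actually runs – using the java.util.concurrent utilities that are new in Java 5: a pool with one worker per processor, each pulling independent tasks off a shared queue.)

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AgentDemo {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            for (int i = 0; i < 16; i++) {
                final int task = i;
                pool.execute(new Runnable() {
                    public void run() {
                        // Each "agent" runs independently; any coordination
                        // has to happen through explicitly shared structures.
                        System.out.println("task " + task + " on "
                                + Thread.currentThread().getName());
                    }
                });
            }
            pool.shutdown();
        }
    }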

Down with authoritarianism in software design; long live anarchist collectives!

go bibliography

March 2nd, 2005

I used to play go a lot, and I collected a lot of go books. In fact, by the time I was in grad school, I had copies of all but 10 or 15 of the go books that had ever been published in English. (Just under 100 at the time.) The web was relatively young; I decided to start a web site devoted to go books. It was a lot of fun; my first real foray into writing on the web.

When I was a postdoc at Stanford, I didn’t have nearly as much time to play go: sometimes I would try to go to the local club every other week, but much more frequently I wouldn’t show up for months at a time. (My hands don’t allow me to play go online: I can type fine, but mouse usage kills me.) The go bibliography started to slip a bit: whereas before I got and reviewed each new book within a couple of months of publication, my goal was now to not fall more than a year behind. Which I was more or less able to do: I took the bus to and from work each day, and I often read go books on the bus rides.

I also found other things that I wanted to write about. For a little while, I had some pages on teaching; when I got this computer, I put up some pages about the process of getting it set up. I never found the time to keep them up, though; in fact, sometimes they never got far enough for me to publish them to the world at all (e.g. some pages on video games).

When I started work at Kealia, though, I stopped taking the bus to work (since there wasn’t a convenient route), which ate into my book reading time; and go books certainly aren’t my highest reading priority. And, after thinking about it for a while, I decided that while I did miss having an excuse to occasionally write something for public consumption, I didn’t really miss writing about go books. As my other abortive efforts made clear, though, I probably shouldn’t plan on writing about any other specific theme: anything too formal would pose a high enough barrier that I wouldn’t update it regularly, and my interests change frequently enough that a single-topic site would die pretty quickly.

But with blogs mentioned in newspapers almost daily, it was pretty obvious what I should do. So here I am. It’s sad to think that I may never add another review to the go book site, but such is life. (I’ve asked other people to contribute reviews: I don’t mind doing a bit of work on the site, if other people can help.) To be sure, I don’t really have an idea how long I’ll keep up this blog, but it’s lasted for half a year by now, I’m not getting bored yet, and I still have a backlog of things that I’d like to write about. I certainly feel better writing regularly: it gives me an excuse to think a bit more about certain things, which is always welcome.

more iPod comments

March 1st, 2005

A few random iPod-inspired thoughts:

  • The first time I imported a CD with iTunes, it reported doing it at a rate of about 5x. But the next time I imported a CD, it reported a rate of under 2x, and stayed there. And it was a real problem: it took all evening just to import a handful of CDs. At first, I cursed Windows and iTunes, but the truth is that I’d been thinking browsing on Linux was sluggish ever since I upgraded to Fedora Core 2. I’d blamed that on either the OS upgrade or on the browser upgrade, but now I had concrete evidence that the problem was more widespread than that. After a bit of thinking about possible causes, I went into the bios and told it never to adjust the CPU’s speed unless I was running on batteries; the problem was solved. (There must be something buggy going on with speedstep, though – the fans don’t come on all that often, even when I’m running at full speed all the time, so why was it so persistently slow?) In retrospect, what probably happened was that I upgraded the bios at the time I upgraded the OS (because early bios versions on this computer had a bug that caused time problems), and my old bios settings must have been lost. So hurray for iTunes – without that speed rating, I’m not sure if I ever would have gotten around to looking at the bios, and my web browsing (and blogging!) would still be horribly slow.
  • The iPod has this feature where it remembers what songs I’ve listened to, and how often. It can use this to do things like give you a random playlist with your favorites more heavily weighted; nice idea. The thing is, though, it seems to periodically forget that I’ve listened to music: when I sync it with my computer, it forgets stuff that I listened to since my last sync but more than a few days ago. Very strange – you’d think this sort of information would be stored on the hard drive and never lost.
  • It also frequently forgets what I’m in the middle of listening to, if I’ve stopped it in the middle of an album. A bit of experimentation suggests that maybe it remembers better if I pause it and let it go to sleep by itself, but it forgets if I hold down the pause button to put it to sleep more forcefully. But why should it forget in either situation? My car’s CD player can remember where I was last listening to a CD, and it doesn’t have a hard drive to store that information. So why can’t the iPod do just as good a job?
  • I still have yet to stump the CD database that iTunes uses.
  • I’m really glad I got the iPod. Jogging is a lot more fun, and I really do like listening to music. I’m finally buying CDs again: I accumulated hundreds when I was an undergrad, but had bought almost none in the intervening decade, and that’s a shame.

tokyo godfathers; movies

February 26th, 2005

We just watched Tokyo Godfathers; very good. About three homeless people who find an abandoned baby, and try to track down its mother and learn why she abandoned it; good characters, good plot, good visuals, pleasantly bizarre.

Hmm: that wasn’t much of a discussion of the movie, was it? The problem is, I’m really not very good at talking about movies. I could give a plot description, but I’m not sure what the point of doing so would be. I’d rather have something a bit more insightful to say, or at least something a bit more analytical. I don’t claim to be god’s gift to video game criticism, but at least I can blather along about the things for paragraphs; not so with movies. I’ve seen a reasonable number of movies (perhaps not so many in recent years, but then again the movies I’ve seen I’ve seen over and over again, which should mean something); I guess the point is that I spend more time thinking about the design of video games as I play them. And, for that matter, I spend lots of time reading video game web sites, so I’m much more exposed to video game criticism than movie criticism. (What are good movie web sites? Also, what are good music web sites?) So I should think more as I watch movies, and not be afraid to write about them, I guess; with practice, I’ll have more to say.

Fortunately, I should have more movie-watching time soon. For years, we’d basically only been able to watch movies that Miranda could watch. But once she started school, we moved her bed time up (or really, gave her a bed time different from ours at all), giving us time when we could watch TV by ourselves. Unfortunately, at about the same time, we bought our disco duro (Spanish for “hard drive”: our digital video recorder), and we kind of overdosed on Iron Chef and Good Eats. But recently we’ve moved her bed time still earlier, and a significant portion of the Good Eats episodes are ones we’ve seen recently, so we’re plowing through our backlog of recordings.

(The thing I miss most about Boston: the Brattle. Also, why, in my first paragraph, did I not mention that it was either Japanese or animated? I guess I didn’t want to overemphasize either of those facts, given the brevity of the paragraph: I wasn’t up for a comparison of it with anime, or for that matter non-Japanese animation (The Triplets of Belleville; I guess I didn’t talk about that when I first watched it? Maybe I wasn’t blogging yet).)

refactoring twists and turns

February 25th, 2005

(Warning: really boring post follows. This is what’s been on my mind today, but Jordan will probably wish that I would go back to talking about Java.)

There’s this big monster class that I’ve been dealing with at work almost ever since I got there. About a month or two into my job, I had to try to write unit tests for it, and failed miserably. (Because, after all, it’s a big monster class, exactly the sort of class for which the notion of “unit test” is ridiculous.) I did a bit of refactoring at the time, but for various reasons (I wasn’t very good at refactoring, and I wasn’t in charge of the code), it thoroughly repulsed my efforts at civilizing it. I did get a bee in my bonnet that State might be a useful design pattern to use, though.

A year later, I’m now in charge of the code, and it’s been causing me a fair amount of pain (in the form of seg faults, and sleepless nights about the thought of having to add new functionality, as I’ll have to do over the next few months). It actually was a pair of messy classes; I spent January matching wits with the first one. It certainly fought back – near the beginning, for example, I thought I had a nice bit that made sense conceptually to extract as a separate class, and I found a couple of methods that looked like a perfect entry point into that section of code. But after spending a few days trying to get that to work, it just got worse and worse: those two functions called functions that called functions that referred to data that I thought was outside of the class in question, and it was really hard to tease the code apart. So I had to throw away those days of work, and try again. (The second time was the charm, though.) And, actually, while that code is hugely better now than it was, I still wasn’t able to properly tame and test some of the core algorithms…

So this month I’ve been dealing with the other one of these messy classes. At the start, finding refactorings to do was like shooting fish in a barrel – anybody who can’t find methods to extract from 100-line functions isn’t looking very hard. (Let’s start by having every method fit on a single screen…) So far, I’ve extracted three nice little classes from it: they’re quite coherent, much easier to understand, much better tested, and I of course found several bugs in the process. One of the extracted classes, in particular, was an absolute joy to refactor: I knew that I had extracted a coherent chunk of data and methods, but I was having the hardest time figuring out what it was actually doing. This made adding unit tests a surprising pain; I ended up giving up on deciding what the class should do, and just mechanically writing tests that pinned down the class’s behavior, without worrying about how to interpret that behavior. (Well, almost pinned it down – fortunately, I kept around an end-to-end test as a backup, which saved me at one point in my later refactoring.) And when I got to the refactoring proper, I decided to do it strictly by the book, making really small, mindless little changes, and consciously trying to avoid looking too far ahead, never making a jump when I could find baby steps that would get me there instead. And it was beautiful: the badness just melted away, the code almost transformed itself, and at the end of the day, everything made sense, and was clear to anybody reading it. I’m completely sold: tiny refactorings for me from now on.
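
(Those mechanical tests have a shape worth showing: you don’t assert what the class should do, you run the code, write down what it actually does, and assert exactly that, so any behavioral change during refactoring sets off an alarm. A minimal JUnit sketch, with the class name, methods, and numbers all invented:)

    import junit.framework.TestCase;

    public class CopyTrackerPinningTest extends TestCase {
        public void testPinsCurrentBehavior() {
            CopyTracker tracker = new CopyTracker();
            tracker.noteCopied(0, 100);
            tracker.noteCopied(50, 150);

            // These expected values came from running the code and
            // recording its output, not from any specification of what
            // the class *should* do.
            assertEquals(1, tracker.regionCount());
            assertEquals(150, tracker.bytesCopied());
        }
    }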

But a week or so ago, I ran out of obvious classes to extract, so I decided to do the State extraction that I’d had on my brain for the last year and a half. I did it (by tiny steps), but while I liked the code more that way, I was pretty sure that most of my coworkers, when reading the code, would find it less clear. So I didn’t check it in: I wanted to spend more time moving functionality into the new State objects, hoping that other refactorings would become obvious to me as I did so, so that by the time I checked it in the code would be clearly improved.

A couple of days later (I started a week ago, but most of this past week I was at a conference) I’ve got a lot more functionality moved into the State objects, but it’s still not a clear improvement. As I hoped, though, I am starting to see other refactorings that make sense, that really do improve the code.

The thing is, though, these other refactorings don’t depend in any way on my State extraction: in fact, if I hadn’t had this State bee in my bonnet, I would probably have seen them a week ago. Look at all these methods that take 6 arguments – maybe I could get rid of some of them? Look at these three member variables whose names all start with “uncopied”, with a big comment before them explaining how they’re used together – maybe I could extract those all into a class? And, as I do the extraction, I find what must be a bug in the code, but I’m so deep into refactoring upon refactoring that I have a hard time stepping back, figuring out what’s going on, and writing a test to expose the bug.
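
(For concreteness, here’s the shape of that last extraction, sketched in Java – the real code is C++, and all the names here are invented:)

    // Before: three fields rattling around the monster class, held
    // together only by a naming convention and a big comment.
    //   long uncopiedStart;     // offset of the first byte not yet copied
    //   long uncopiedLength;    // how many bytes remain uncopied
    //   boolean uncopiedValid;  // whether the two fields above mean anything

    // After: one coherent object with a name of its own, testable in
    // isolation; the big comment becomes the class's documentation.
    class UncopiedRegion {
        private long start;
        private long length;
        private boolean valid;

        void mark(long start, long length) {
            this.start = start;
            this.length = length;
            this.valid = true;
        }

        void clear() { valid = false; }

        boolean isValid() { return valid; }
        long start() { return start; }
        long length() { return length; }
    }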

So I’ll be throwing away the last three days of programming: starting over, doing the obvious refactorings that I came across today, and probably creating a big parser wrapper class whose State I will only extract after I’ve gotten it into a coherent whole. Sigh.

Which, actually, isn’t so bad. At least I can say that I’ve learned something about programming over the last couple of years: I’ve learned that I should be nervous if I want to go for three days without checking in my code, and that it’s better to throw away that work and start over from scratch than keep on forging into the muck. And I’m sure that I’ll hit the ground running next week: having struggled with the code today, I’ll be able to do the first few refactorings in a flash, and it will only take me a day and a half to recreate the work that I’ve done in the last three days. (But there will probably be another week of work inserted into the middle of that day and a half of recap, as I do more refactoring to get it into shape before doing the State extraction.)

The next post will be a non-programming one, I promise…

a few last Java comments

February 24th, 2005

A few random thoughts about Java (to get out of the way so I can stop boring Jordan):

  • I’m not thrilled with the whole checked/unchecked exception thing. Checked exceptions, to be honest, seem like kind of a pain to me: they make your code more verbose, but I don’t yet have reason to believe that they catch much in the way of problems. I suppose they make more sense in Java than they would in C++: in C++, destructors make it easier to handle exceptions safely, and templates would interact really badly with checked exceptions. So maybe as I gain more experience with exceptions in Java, I’ll come to appreciate them more. Or maybe not. I’m kind of tempted to make all the exceptions I define be unchecked, but it’s probably better to stick with more idiomatic techniques, even if they are a bit of a pain. (See the sketch after this list for the distinction in miniature.)
  • I like the HTML documentation that comes along with the language: it’s great to be able to just look up all the standard classes and their methods whenever I have a question.
  • I’m not convinced that package visibility is a great idea, though I will reluctantly accept that some form of encapsulation breaking is necessary at times, and package visibility is probably about as good an idea as friendship. What is definitely screwed up, though, is making it the default, instead of requiring people to specify it explicitly with a keyword. Also, protected shouldn’t imply package: the two concepts are simply orthogonal.
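
(The checked/unchecked distinction in miniature – the exception names and the class are invented for illustration:)

    // Checked: extends Exception, so the compiler forces every caller
    // to catch it or declare it.
    class ConfigMissingException extends Exception {
        ConfigMissingException(String message) { super(message); }
    }

    // Unchecked: extends RuntimeException, so it propagates silently
    // until some caller decides to care.
    class ConfigCorruptException extends RuntimeException {
        ConfigCorruptException(String message) { super(message); }
    }

    class ConfigLoader {
        // The checked exception must appear in the signature...
        void load(String path) throws ConfigMissingException {
            if (path == null) {
                throw new ConfigMissingException("no path given");
            }
            // ...but nothing here warns callers about this one.
            if (path.length() == 0) {
                throw new ConfigCorruptException("empty path");
            }
        }
    }

The compiler hounds every caller of load about the first exception and says nothing at all about the second; whether that hounding catches real bugs is exactly what I’m unsure about.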

Anyways, that’s the end of my Java notes; I should get back to actually programming in Java, so I’ll have more things to talk about. I’m also tempted to start using a modern IDE: I’ve been doing a lot of refactoring recently at work, and I feel uncultured never having used an automated refactoring tool. So maybe I’ll give Eclipse a try. The main thing I’m worried about is that an IDE might make me use the mouse more than my hands like; I think Eclipse has pretty good keyboard shortcuts, though.

I was at a Sun conference this week, so now I feel guilty about wanting to use Eclipse instead of NetBeans. But it really did sound like NetBeans’ refactoring support is pretty meager – in particular, it doesn’t even support Extract Method – so Eclipse it probably is. Interesting conference; I spent a lot of time talking and learning about agile methods (well, about extreme programming, really). I should post on that some time.

old-time religion

February 23rd, 2005

There was a good article in The Nation recently about religion and our founding fathers. Normally, I don’t pay much attention to that sort of thing – I’m aware that present-day religious fanatics would like to paint our country as inherently steeped in religion, and that many of the founding fathers could be better described as deists – but I see that I underestimated our founding fathers. In fact, as the aforementioned article tells us, the Treaty of Tripoli, signed in 1797, says

As the Government of the United States…is not in any sense founded on the Christian religion–as it has in itself no character of enmity against the laws, religion, or tranquillity of Musselmen–and as the said States never have entered into any war or act of hostility against any Mehomitan nation, it is declared by the parties that no pretext arising from religious opinions shall ever produce an interruption of the harmony existing between the two countries.

And this was no minority opinion, either: the Senate approved the treaty unanimously, only the third time in the Senate’s history that it did so. (Out of 339 total votes at the time.) How things change.

I actually recently realized that Buddhism has somehow become the only religion that my brain is willing to treat seriously. (Unless you count Taoism as a religion; I certainly have enough translations of the Tao Te Ching around the house…) I’m not sure how long that has been the case, but I was reading a (quite good) book of medieval Japanese stories whose introduction spent some amount of time explaining Buddhism in Japan, and it all seemed perfectly natural to me. Not that I’m about to become a Buddhist myself (though I would like to spend some time studying meditation at a Zen center), but my brain thinks it quite normal to read about Buddhists, whereas that part of my brain is incapable of treating people who believe in, say, monotheistic religions in the same way. Odd…

bruno latour

February 20th, 2005

I’ve been a big fan of Bruno Latour for a long time now. He started off working in sociology of science, with some very insightful observations of scientists in action, of how science is actually produced and how scientific facts become accepted. Which is not quite the way it is presented by many scientists (or epistemologists); a decade ago, Latour made a foray into philosophy with his book We Have Never Been Modern, to talk about this disconnect. For me, that book was both fascinating and frustrating: Latour did a good job at pointing out some of the absurdities that you see in a certain form of mysticism of Science that was depressingly prevalent around the Sokal affair, and he clearly had some good ideas about a better way to approach epistemology, but those ideas hadn’t yet gelled into a coherent picture.

With his latest book, Politics of Nature, however, he’s finally put it all together. His basic complaint is that mythologists of Science buy into a version of the allegory of the Cave, with an eternal world (Nature, in the case of Science) out there that humans can’t perceive directly. But scientists are supposed to have a unique ability to transcend the social world, producing eternal truths about Nature that nobody is permitted to question, that have no analogue anywhere in the social world. And there’s no clear explanation of how, if Nature and the world of the Cave (the social world) are so fundamentally different, scientists manage to have a unique insight into Nature, giving them special access to eternal truths that the rest of us can’t manage.

Latour’s explanation is that the (quite valuable!) work that actual scientists do is rather more prosaic and complicated than this mythical picture. Scientists don’t go from direct perception of nature to direct perception of unquestionable truths (and, to be sure, nobody claims that they do this: some form of scientific method is always presented as an intermediate step). Latour proposes analyzing this process through the following four steps:

  1. The requirement of perplexity. An investigation must consider the evidence that appears, no matter how surprising or unpleasant it might seem.
  2. The requirement of consultation. Propositions can’t be considered in isolation: they have to be examined by an appropriately constituted jury.
  3. The requirement of hierarchy. You can’t just accept a proposition as true by itself: it has to fit together with other propositions that you wish to consider true.
  4. The requirement of institution. Once a proposition has been appropriately placed in a hierarchy, it should be accepted, rather than constantly challenged.

The requirements of perplexity and institution are most similar to the naive version of nature and scientific laws: the requirement of perplexity is where we see nature making its presence known (if you jump out of a third-story window, you will fall down, whether you like it or not), and the requirement of institution is where scientific laws get accepted as truth. The journey from the first requirement to the fourth, however, is a quite interesting one.

The second requirement, in particular, is more subtle than it might seem. Latour’s juries aren’t made entirely (or even largely) out of humans: this isn’t some sort of simple peer review, where scientists read each other’s papers and decide whether or not they agree with the results. Instead, members of the jury can be nonhumans (and in particular can be propositions, a term which Latour uses in a quite general sense): so some of the most important members of the jury that is consulted with respect to a candidate proposition consist of measurements that are made that support (or don’t!) the proposition in question. In fact, arguably the greatest strength of science, and the reason why scientists are much better at coming up with durable propositions than the rest of us, is the remarkable ability that scientists have to assemble a huge jury of nonhumans to pass judgment on a candidate proposition.

Another point of Latour’s is that this process isn’t a one-time thing: when one round of perplexity/consultation/hierarchy/institution has concluded, it opens itself to another round of the same process. So, for example, when a scientific theory becomes accepted, it might cause scientists to be perplexed by facts that they hadn’t noticed before. And the newly instituted facts might enable scientists to build new measuring devices, increasing the size of the juries that they can bring to bear in later consultations. So hierarchies of scientific theories grow and grow through repetitions of this cycle. Also, while propositions that are instituted at the end of one cycle usually remain instituted at the end of the next cycle, that’s not always the case: they may end up being refined, having their scope limited, or even being outright rejected at the end of subsequent iterations.

There’s nothing surprising about this analysis, of course; it’s a reasonably level-headed description of how scientists work, with a somewhat idiosyncratic choice of language and focus. (He uses many words in a somewhat specialized fashion; fortunately, he flags these words with asterisks in the text, and provides a glossary at the back, so it’s usually clear enough what he means.) Its real strength, however, is that there’s nothing about this four-stage analysis that is at all specific to science. In fact, he shows how various professions (politicians, economists, moralists, etc.) can fit their work quite naturally into this framework. So it gets us nicely past any sort of mysticism of Nature and Science: if you want to focus on facts that manage to perplex in a certain especially potent fashion, or on facts that are instituted in a particularly durable way, you’re welcome to do so (and it may even be useful to do so), but there’s no particular benefit in turning that into a transcendent distinction.

I’ve actually started looking at all sorts of aspects of my life through this lens. For example, the problems that I have with the school closure process can largely be traced to problems with the board and administration’s perplexity and consultation: they’re picking and choosing between facts, ignoring the ones they don’t like instead of being properly perplexed by them, and they’re not constituting appropriate juries (both human and otherwise) to review their propositions. Another example is in software development: the software development strategies that I prefer these days can be viewed as focusing on increasing the size of various relevant juries (providing automated tests to verify that your code does what it’s supposed to do, bringing in more voices to help you decide which features to add to your software, which not to add, and how best to implement them) and on increasing the rate at which the entire process cycles through, allowing you to institute propositions (i.e. release software) much more quickly, building up a large hierarchy of propositions (working, stable software fulfilling a well-chosen set of feature requests, doing so in a maintainable and extensible fashion).

school closure: not done yet

February 18th, 2005

I was sure that the school board was going to make a final vote on the school closure issue last Wednesday. The school board, however, has managed to surprise me at every other meeting on the issue; I don’t know why I expected anything different this time.

They did vote to close a school. Which, I think, was a mistake: it looks like there’s enough money in the budget to keep a school open next year, and I think there are good reasons to do so. They did not, however, vote on which school to close. They had set their sights on Slater, my daughter’s school; last night, however, they decided to explore the idea of closing another school, Castro (which I, not entirely coincidentally, used to live half a block away from). It’s a school with a huge proportion of English-language learners, in a neighborhood where families frequently move in and move out (as we did); unsurprisingly, the school doesn’t have test scores as high as others in Mountain View, and the board has been trying for years to figure out how best to help the students there. And it looks like some board members are ready to throw up their hands and send the Castro neighborhood kids to schools with more native English speakers.

So closing Castro is now a possibility, but no decision has been made. To their credit, the board is behaving well with regards to this new twist: they’re putting the final decision off for a month to give the Castro community time to respond, and they’re holding a couple of community meetings at Castro. So that’s all to the good. I’m mad at them for closing any school at all, though. And if they can hold community meetings at Castro, why can’t they hold community meetings at Slater as well? (The demographics of the neighborhoods aren’t all that different, after all.) I honestly don’t know which school would be less harmful to close: I’d prefer that they close Castro instead of Slater, but obviously I’m biased. And I don’t really know what’s going on with this plan: maybe it’s (intentionally or not) more of a ploy to act like they were listening to the issues that Slater parents raised, but handled in such a way as to guarantee a community outcry in support of Castro, causing them to end up closing Slater like they wanted. And, at this point, the last thing I’m going to do is try to predict what the board will do a month from now: I’m completely incapable of predicting what they’ll do a week from now, let alone a month from now.

There are some other twists and turns, the most bizarre of which is that the “close Castro” proposal was actually put forward by some parents in the Spanish/English Dual Immersion program, which is currently located at Castro! Which might sound like a good argument for closing Castro – if current families there want to close it, then why keep it open? – except that I don’t think that the families proposing the plan actually live in the Castro neighborhood, so they wouldn’t be losing their neighborhood school. (Though I don’t live in the Slater neighborhood, and I, like all other families I’ve heard of with kids attending Slater, certainly don’t want Slater to close.)

My apologies if this drama is a bit boring to those of you who don’t live around here. (As opposed to my other blog entries on other topics, which are of course fascinating to all!) You’ll have to bear with me for another month or so, I’m afraid…

mary poppins

February 14th, 2005

Liesl’s dad gave Miranda a copy of Mary Poppins for Christmas. And it’s great! I was starting to suspect that I might be a Julie Andrews fan, but now I’m sure of it (and should really go out and watch other movies of hers). I had no idea how good Dick van Dyke was, though, and for that matter I’d never considered the possibility that he might have been young once.

The other big surprise is the songs. I was, of course, aware of some of them: “A Spoonful of Sugar,” “Supercalifragilisticexpialidocious”, “Let’s Go Fly a Kite”, “Chim Chim Cher-ee”. (I’m honestly not sure if I would have identified the source of the last two, though.) But they’re singing all the time in the movie: looking through the list of scenes, there are 13 songs in the movie! It’s a long movie for a kids’ movie, but that’s still one every 10 minutes or so. And the songs are all at least pleasant; there’s a reason why I was aware of the ones I mentioned above, but the others are far from shabby, and Miranda and I quite enjoy dancing around to “Step in Time”.

And it has a good story, good politics, and I really like the way the Dick van Dyke character (and his mix of jobs) is presented. Good thing I have a daughter with a grandfather with good taste in movies!

(Incidentally, Miranda has recently been asking to watch a DVD of The Marriage of Figaro. I really do think she could have a good career in musical theater ahead of her, if she so chooses…)

metroid prime 2: echoes

February 13th, 2005

Today’s video game is Metroid Prime 2: Echoes. The first Metroid Prime game was my favorite Gamecube game, and was a real eye-opener for me. It was my introduction to the Metroid series, so I’d never seen its particular brand of exploration before. Games in the series take place in one big world; it’s divided up into different regions with various themes (hot, water, etc.), but there’s no other distinction between areas (no buildings, dungeons, etc.). And you can always go back to areas you’ve seen before, and in fact frequently do. The reason for this is the way they open up areas: you can’t go everywhere from the beginning. Instead, you periodically (usually after (mini-)boss battles) earn upgrades allowing you to move in different ways, or giving you weapons that let you destroy barriers that you couldn’t destroy before.

So every time you get an upgrade, new areas open up for you. Usually there’s one primary largish area that opens up to you, giving some direction in the game. (This was much more the case in Metroid Prime 2 than in earlier Metroid games.) But there are always lots of things that you can do in old areas, too, typically allowing you to get hidden items (ammo/health expansions) that you couldn’t get before.

There’s a lot of shooting in games in the series, but it’s handled exceptionally well. Your primary weapon has an unlimited supply of ammo, so while getting ammo expansions is important, you’re almost never seriously lacking in ammo as long as you’ve done at least a cursory job of finding the expansions. (The one exception is boss battles: these are where having a lot of ammo can really help.) And ammo refills are plentiful. Similarly, your health gets refilled every time you save, and health refills are plentiful, so outside of boss battles, health isn’t a serious issue. Ammo and health aren’t irrelevant by any means: typically, the enemies in the most-recently-opened area are tough enough to keep you on your toes, preventing you from spending all your time looking around. But if you return to an area after initially opening it up, you’ve usually progressed to a point where the enemies aren’t much of a challenge, allowing you to spend your time exploring and figuring out what new things you can do that you couldn’t do the last time you were in the area. (For that matter, if you’re just travelling through an area that you’ve been to before in order to get somewhere, you can usually ignore the enemies, because they won’t do enough damage to really matter.)

That’s the way all the Metroid games work; I’ve now played the 2D ones on the GBA (though I have yet to play the apparently-excellent Super Metroid), and the 3D ones on the Gamecube, and they all have the same excellent balance of exploration and fighting. In particular, the series made the transition to 3D extremely well, and I like some of the features that they’ve added (like the scan visor, which allows you to get information about your environment and brings more of a story into the game). The boss fights are, in general, quite well designed, challenging but not tedious. (Most of the time, even if you die, you can figure out more about the enemy’s vulnerabilities, letting you do better the next time.)

So: what about the latest game in the series? It’s great, it really is. I thoroughly enjoyed playing it, I found myself happily going back through areas trying to find all the secrets, I liked almost all of the boss battles (and rarely looked up help for them online). The one exception to the latter is the final boss: it really annoys me when games not only give you a final boss that’s quite difficult (which, by itself, can be reasonable enough: you get better at fighting bosses as the game goes on), but then follow it up with a second final boss without giving you a chance to save in between, so you have to spend twenty minutes beating the first boss again each time you need another crack at the second boss. As a result, I haven’t actually finished the game: I’m sure I could, but I also suspect it would take me maybe three hours, much of which would be rehashing the same battles over and over again, and frankly I have much better uses for my time. (Like, say, playing the latest GBA Zelda game or playing Jak 3.)

On the other hand, it didn’t bring anything new to the series. This particular game’s “feature” is that there’s a dark world and a light world (and no surprises as to which one the bad guys are from), so you get to travel through (most of) the map in two versions. It’s not a particularly original idea (we saw this in Zelda a decade ago), and the execution is pretty unexceptional. They wanted to make the dark world bad, so it hurts your health to be there, except that then it would be impossible to explore in the dark world, so they gave you light bubbles in the dark world that restore your health, with the end result that the dark world is actually easier on your health than the light world! (So, when fighting most dark world bosses, your first tack is to figure out how to avoid enough damage that the light bubbles balance out the damage; once you’ve done that, you know you can fight the boss for as long as it takes for you to figure out how to beat it.)

All in all, I feel about this game the same way as I felt about Banjo-Kazooie: it’s a great game that does everything its excellent genre predecessor did well (Banjo-Kazooie’s predecessor being Super Mario 64). But if I see another game like this, I will lose patience fairly quickly (as happened with Banjo-Tooie). Unfortunately, in the case of the Metroid series, I don’t have any constructive criticism here: I can’t quite figure out a way for the series to progress that wouldn’t turn it into something significantly different. We can imagine, say, taking its putative “bounty hunter” theme and turning it into a multi-planet adventure game, or something, but that would be a completely different game with the same name. I guess I can imagine setting Metroid in a city instead of a cave: that could let them preserve the same “go anywhere you’ve been while opening up more and more of the game” feel, while opening up more room for interaction. And who knows: the fine people at Nintendo have successfully reinvented their various series often enough that they may well come up with a surprising, amazingly successful way to invigorate this one. On the other hand, recently they’ve been rehashing their series more often than not; I can’t say that I’m too hopeful for the future of the series. We’ll see; it’s got a solid enough foundation that I’ll put up with a few more rehashes, I suppose. And those of you who have never played any games in the series should run to your local video game emporium: it’s a great series and, unlike most other great series, it hasn’t spawned a flock of other games where you would have seen the same ideas.

Update: when skimming an earlier post, I see that I forgot to mention Metroid Prime 2’s amazingly bad menu system. Menus are presented as a 3D shape: a dot in the center, lines coming out of it, and labeled balls attached to the lines; the ball closest to the “front” is the currently selected item. The thing is, when I first saw this, my brain didn’t have enough information to parse it as a 3D object: it just saw a collection of longer and shorter lines, whose lengths and positions would change in a fairly random fashion in response to joystick movements. I got used to it after a little while, but game designers should know better: this menu has absolutely no benefit over a traditional 2D system, it’s not “cool”, it’s just gratuitously confusing to newcomers.