
morality

November 30th, 2004

George Lakoff had an interesting article in The Nation recently, called “Our Moral Values”, where he analyses progressives versus conservatives in terms of the morality expressed by a nurturing family versus the morality expressed by a family with a strict father, and gives some tactical suggestions based on that. Pretty sensible; I should really read more of his stuff, and I know some of my friends have been into his linguistics books. I wonder how Lakoff’s split compares with Jane Jacobs’ division of morality in Systems of Survival? I should reread that book. (Looks like she has a new book out, too.)

It also reminded me of another Nation article from last year, “A Nation of Victims”, by Renana Brooks. She talks about how Bush uses dependency-creating language, characteristic of the linguistic tricks abusers use to get their way. The strict father/husband gone horribly wrong, basically.

dvd/hdd player

November 28th, 2004

We bought a new DVD player (and recorder, not that I really care) last month, with a built-in hard drive. Noteworthy aspects:

  • In Spanish, it’s called a “Grabador de DVD con disco duro”. This amuses me, for no particular reason.
  • It is nice having a DVR, though we watch little enough TV that we’re not getting so much benefit yet out of the “pause live TV” aspects. We’ll probably get more use out of it next baseball season.
  • It doesn’t have all the fancy Tivo stuff, but it also doesn’t require a monthly subscription. Which is definitely a good tradeoff, as far as I’m concerned – I’m morally opposed to having to pay a subscription fee to use my electronics. It does grab a program guide off the cable signal; it doesn’t do as much with it as I think it should, but it’s still pretty useful.
  • One of the reasons why the program guide is particularly useful for us is that the only stuff we normally record is Food Network programs (Iron Chef, Good Eats; speaking of which, Gear for Your Kitchen is pretty good); we can’t tell it to record all episodes of those shows, but we can tell it to record at certain times if those shows are being broadcast but not otherwise, and it’s easy to browse all food-related programs.
  • Having a large hard drive (ours is 120GB) is surprisingly useful.
  • It’s kind of ironic that it can’t record digital TV, given that it has an MPEG decoder (and encoder) built in. You’d think that somebody would make a DVR that can also decode HD broadcast signals, given the plethora of HD monitors.
  • It crashes occasionally. Which isn’t something that I want in my consumer electronics. But it hasn’t caused any real problems yet.
  • The DVD player remembers where we were last watching basically every DVD that we’ve ever put into it. Which was a little freaky at first, but is pretty useful once we’re used to it – e.g. we can switch between Miranda’s and our DVD’s.
  • It has a slow response time to button presses, and a really slow response time to being turned on or off.

the singing detective

November 27th, 2004

Now that Miranda’s bedtime has moved up (since she no longer takes naps at daycare), we’ve finally been able to watch movies not suitable for 5-year-olds. We usually can’t finish a whole movie in a single night, and most evenings we watch various Food Network programs that we’ve recorded instead of movies, but at least we finally get to watch movies some of the time.

The most striking one we’ve seen recently is The Singing Detective. Which isn’t actually a movie: it’s a BBC TV series, and is quite unlike anything else I’ve seen. Skin disease, alternation between fantasy and reality, hallucinations, music (frequently at inappropriate times, though not so inappropriate as in some movies). For the first two or three (out of six) episodes, I didn’t really know what to think, but it all comes together quite nicely at the end.

So why do DVD’s feel compelled to include extra after extra? Books never include extras; CD’s include a little booklet, but it’s hardly the same thing. Personally, I basically never look at the extras on DVD’s – why would I want to do that instead of, say, watching the movie? (Or watching a different movie.) I guess it matters to people who are big fans of the movie in question, though, and it’s a relatively low-cost addition (at least compared to the cost of making the movie itself…)

kent beck

November 20th, 2004

I just finished reading Kent Beck’s Smalltalk Best Practice Patterns. Not because I’m about to start programming in Smalltalk – it would be an interesting language to experiment with, but I’m way too busy for that right now – but because I really wish I could program like Kent Beck.

This book had a couple of striking examples along those lines. One way that he suggests using the book (not the only way, just one way) is this: every time you are about to do something, you should look in the book for a relevant pattern (unless you know it already), and use it. Which might sound ridiculous if your only exposure to patterns is the Gang of Four book, but Beck presents many many more patterns, going down to a much lower level: he has patterns (or “patterns”, since I’m not sure everybody would agree that’s an appropriate label for them) covering such matters as variable naming and indentation. (He doesn’t insist that everybody agree with him on how to indent; he does think, and I agree, that it’s useful for a team to be consistent on such trivial issues.)

The point here is that he’s very successfully woven his patterns into a pattern language covering a large amount of ground: if you have a basic idea of what to do in the large, then there’s a pattern that’s applicable, and that pattern in turn calls on smaller patterns for its implementation, which call on still smaller patterns. The theory (which I believe) is that, once you have a pattern language internalized, you can produce quite good code without having to think about it too much most of the time, allowing you to devote your brain cells to decisions where more thinking is necessary. You can see an example of this with larger scale patterns in his and Ralph Johnson’s article “Patterns Generate Architecture” in Kent Beck’s Guide to Better Smalltalk.

The other part in the book that made me say “man, I wish I could program like this” was the following bit from Method Comment:

Someone recently asked me point blank, “What percentage of your methods have comments?” I answered, “Between 0 and 1 percent.” Oh the uproar! As a sanity check, I asked a developer at one of my clients (where I had taught Smalltalk based on an earlier version of these patterns) what percentage of the methods of their 200 class system had comments. His answer, “between 0 and 1 percent.” “Has that ever been a problem?” “No, never.”

I just wish my methods were so transparent… I remember reading the refactoring book and coming across the place where Fowler says that comments are a sign of bad code. My first reaction was “that’s ridiculous!”. And, of course, if code is unclear, comments are a good thing; but what’s even better is to take the comments as a sign that the code needs to be refactored to make it clear, at which point the comments are probably no longer necessary.
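To make that concrete (and since C++ is the language I actually spend my days in), here’s an entirely made-up sketch: instead of leaving a comment explaining a chunk of code, extract the chunk into a method whose name says the same thing, and the comment has nothing left to do.

    #include <cstddef>

    // Hypothetical example: the class, the names, and the discount rule are
    // all invented, purely to illustrate replacing a comment with a method.
    class Order {
    public:
        Order(std::size_t itemCount, bool preferredCustomer, double total)
            : itemCount_(itemCount), preferredCustomer_(preferredCustomer), total_(total) {}

        // Before the refactoring, finalize() contained the discount logic inline,
        // with a comment ("apply the bulk discount if the customer qualifies")
        // explaining it. After Extract Method, the method name carries that
        // information and the comment has nothing left to say.
        void finalize() {
            applyBulkDiscountIfEligible();
        }

        double total() const { return total_; }

    private:
        void applyBulkDiscountIfEligible() {
            if (itemCount_ >= 10 && preferredCustomer_) {
                total_ *= 0.9;
            }
        }

        std::size_t itemCount_;
        bool preferredCustomer_;
        double total_;
    };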

One other predictor of bad code in Beck’s book that I hadn’t seen before was to check rates of change: don’t have a method or a class or whatever with some stuff that changes all the time and other stuff that never changes. A few days after reading that, I had a chance to apply it: I’m trying to refactor a ridiculously large class (something like 25 member variables, a constructor with about 10 arguments, etc.), and I noticed that some variables (corresponding to MPEG headers, basically) were basically set once and never changed. So I pulled some of them out into a class (a coworker wisely suggested that I not pull out all of them at once: pull out the elementary stream headers into one class, and pull out the transport stream headers into another class later). Sure enough, once I’d done that, I noticed that several methods in the original class only used those member variables and could easily be moved to the new class, and those were the only methods that accessed most of the member variables. By the time I was done I’d removed five or six member variables and two constructor parameters from the original class, had a delightfully small, self-contained, and testable new class, and was much happier with life. Of course, I could have been led to this refactoring through other routes (e.g. noticing that the variables in question were used together in certain methods), but the “rates of change” heuristic worked just fine.
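A sketch of that extraction, with invented names and far fewer members than the real class, looks something like this:

    // A hypothetical sketch of the "rates of change" extraction described above;
    // the field and class names are invented, and the real class had many more
    // members than this.

    // These values are parsed once from the elementary stream headers and then
    // never change, which is what suggested splitting them out.
    class ElementaryStreamInfo {
    public:
        ElementaryStreamInfo(unsigned horizontalSize, unsigned verticalSize,
                             unsigned frameRateCode)
            : horizontalSize_(horizontalSize),
              verticalSize_(verticalSize),
              frameRateCode_(frameRateCode) {}

        // Methods that only touched these members migrated here naturally.
        bool isHighDefinition() const { return verticalSize_ >= 720; }

        unsigned horizontalSize() const { return horizontalSize_; }
        unsigned verticalSize() const { return verticalSize_; }
        unsigned frameRateCode() const { return frameRateCode_; }

    private:
        unsigned horizontalSize_;
        unsigned verticalSize_;
        unsigned frameRateCode_;
    };

    // The original class now holds one ElementaryStreamInfo instead of a pile of
    // members that never changed, alongside the state that actually varies.
    class StreamProcessor {
    public:
        explicit StreamProcessor(const ElementaryStreamInfo& info)
            : info_(info), picturesDecoded_(0) {}

        void onPictureDecoded() { ++picturesDecoded_; }  // frequently-changing state

    private:
        ElementaryStreamInfo info_;
        unsigned long picturesDecoded_;
    };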

cities and overworlds

November 19th, 2004

I’m (well, we’re, but more about that some other time) in the middle of Paper Mario 2 right now, and it’s setting off such a cascade of reactions, I figured I’d better start posting about it now instead of waiting until I’m done with the game. It’s not that the game is so stunningly excellent – I quite like it, but no more so than several other games – but rather that it’s interestingly and productively different from other RPG’s out there.

I had been lumping this game together with its N64 predecessor Paper Mario (for obvious reasons) and with a GBA game from a year or so ago, Mario and Luigi: Superstar Saga. And while its gameplay has only minor differences from the original Paper Mario, it’s actually quite different from Superstar Saga, and, frankly, a good deal better. (Don’t get me wrong, I enjoyed Superstar Saga.) The difference that is relevant to this entry is one that I’ve also noticed when comparing handheld Zelda games with their N64/Gamecube compatriots.

Superstar Saga and the 2D Zelda games divide the non-dungeon parts of the world into a rectangular grid. Some of the grid contains the various cities and towns; the rest of the grid is the overworld. This overworld typically has walls of trees, water, and other barriers limiting your navigation through the world, and making it somewhat tricky to avoid wandering monsters. Sometimes, the overworld can have interesting enough puzzles; most of the time, it’s just boring.

In the 3D Zelda games and in the Paper Mario games (which have flat sprites but are more 3D in other ways), the overworld has quite a different effect. The 3D Zelda games may have just as much, or even more (I really have no idea) overworld as their 2D counterparts, but it’s much less oppressive. (Well, at least on the N64: Wind Waker, the Gamecube Zelda game, had way too much overworld to slog through.) You don’t run into artificial barriers every couple of seconds, there aren’t grid transitions constantly popping up, it’s much easier to avoid wandering monsters if you’re in a peaceful mood, and you have a nice, fast mode of transportation to let you go through the game. The Paper Mario 2 overworld behaves much more like a 2D overworld in terms of dividing the world up into a grid and having artificial barriers everywhere, but there’s an incredibly small amount of overworld, so it turns into a pleasant diversion instead of a chore.

Also, while I in general prefer cities to overworlds, it’s also the case that, for reasons that aren’t entirely clear to me, I much prefer the cities in the 3D games to the cities in the 2D games. There just seem to be more interesting people to talk to, more nooks and crannies to explore, more favors to do for people. This last factor should not be minimized: in particular, one reason why Majora’s Mask was so wonderful was the little notebook of things to do other than make progress in the main plot. It gave you things to do while wandering around, it recognized (and bounded) those things (the presence of the notebook makes a difference, too), and some of those tasks fit into story arcs of their own (I probably got as much satisfaction out of setting up the wedding in that game as in actually beating the main boss).

Which made me wonder: wouldn’t I prefer an adventure or RPG without any overworld at all, where the whole game was one big city? And, of course, I have played such a game: Shenmue. Which rocked. Ever since its sequel came out for the Xbox, I’ve wanted to get the console; but buying a whole console just for one game seemed a bit much. So I kept on waiting for another must-have game to appear on it. I think, however, that the time is fast approaching when I will give in and admit that yes, I do want to play Shenmue II enough to justify buying a console just for it. Don’t get me wrong – I’m sure I’ll buy other games for it once I have the console – there just aren’t any other games that will leave a void in my soul if I never play them.

I also wonder if the Grand Theft Auto series has the all-city feel that I’m looking for. I’ve stayed away from the series largely because I’m not sure I’d want to explain them to Miranda, but I do feel uncultured for not having played them. Of course, they’re not adventure / RPGs to begin with, but they might have an appropriate mix of exploration, task completion (like a modern platformer but more coherent), plot, and what not to make me feel happy.

In this discussion, I have of course ignored the fact that RPGs have not only cities and towns but also dungeons. To that, I will say that, while I do enjoy a well-constructed dungeon, the 4 dungeons that recent Zelda games have had, instead of the traditional 8 (or whatever the number is), are, I think, about the right number from my point of view. Though the dungeons in The Ocarina of Time really were a lot of fun…

virtual functions and access control

November 15th, 2004

I was just reading Exceptional C++ Style, by Herb Sutter, and one of the recommendations (Item 18) threw me for a bit of a loop. That item talks about access control for virtual functions. (We’ll ignore destructors, since that’s a special case.)

My habit is to provide public virtual functions if I want all of the corresponding public functionality to be polymorphic; if I want to provide polymorphic behavior on a finer grain, I provide protected virtual functions. I never provide private virtual functions: I’m aware that, in many of the situations where I use protected virtual functions, I could use private ones instead, but I’ve never particularly seen fit to do so.

Sutter disagrees with me on two counts. The first is the point that I just brought up: “protected” means that subclasses can call the function, so if you have a virtual function on a class A that you only want to be called by A, you should really make it private instead of protected. The fact that it’s virtual is enough of a hint that subclasses of A can/should override it; there’s no need to mark it protected in addition. This makes sense to me.

The second point of disagreement is that Sutter doesn’t believe in public virtual functions at all: according to him, your public functions should all be non-virtual, though they can invoke non-public (probably private, by the above) virtual functions to carry out the actual work. In the simplest case, the public function can be a one-liner to forward to a virtual function that carries out the actual work.
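In code, the recommendation looks something like the following toy example (mine, not one from the book):

    #include <cctype>
    #include <string>

    // A toy example of the non-virtual public interface Sutter recommends; the
    // class and its names are invented purely for illustration.
    class Formatter {
    public:
        virtual ~Formatter() {}

        // Public and non-virtual: the interface that callers see.
        std::string format(const std::string& text) const {
            return doFormat(text);  // one-liner forwarding to the customization point
        }

    private:
        // Private and virtual: the customization point that subclasses override.
        virtual std::string doFormat(const std::string& text) const = 0;
    };

    class UppercaseFormatter : public Formatter {
    private:
        virtual std::string doFormat(const std::string& text) const {
            std::string result = text;
            for (std::string::size_type i = 0; i < result.size(); ++i) {
                result[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(result[i])));
            }
            return result;
        }
    };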

The philosophical point here is that public functions provide the interface that the class provides to its users, while virtual functions provide the “customization interface” that the class provides to its inheritors. These are two different things; they should be separated accordingly.

That’s somewhat plausible, but I’m not sure I completely buy it. In particular, if the class in question is solely an abstract interface, I’m not sure that it makes any sense much of the time to distinguish between these two aspects: the public interface exists only to provide customizable behavior, so why not acknowledge that?

There are, of course, some situations where you want to do this sort of trick. In particular, it can be the case that all implementations of the public functionality will normally want to carry out more or less the same tasks in the same order, with only the details varying. In that case, providing a public non-virtual function that calls a sequence of non-public virtual functions is very useful. (This is the “Template Method” design pattern.)

The book also gives further examples where this might be useful – for example, if you decide that you always want to perform some action before and/or after calling the core of the implementation (e.g. instrumenting it, checking pre-/postconditions), then it would be useful to have your function already broken up into a public non-virtual part and a non-public virtual part: you would only have to change the non-virtual part.
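Here’s a made-up sketch of that payoff: because every caller goes through the non-virtual function, the precondition check and the timing code below can be added in one place, after the fact, without touching any subclass.

    #include <ctime>
    #include <iostream>

    // A hypothetical sketch; all names here are invented for illustration.
    class Request {
    public:
        bool isWellFormed() const { return true; }  // stand-in for a real check
    };

    class RequestHandler {
    public:
        virtual ~RequestHandler() {}

        // Public and non-virtual: wraps the virtual core with a check and timing,
        // both of which could be added later without touching any subclass.
        bool process(const Request& request) {
            if (!request.isWellFormed()) {
                return false;  // precondition check, added in one place
            }
            const std::clock_t start = std::clock();
            const bool ok = execute(request);  // the non-public virtual core
            std::cout << "process() took " << (std::clock() - start)
                      << " clock ticks\n";
            return ok;
        }

    private:
        virtual bool execute(const Request& request) = 0;
    };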

There’s some truth to that. On the other hand, in my limited experience, that’s not something I want to do all that frequently; and, in situations where I want to do such a change, it’s not too tricky a refactoring to turn a public virtual function into a public non-virtual function plus a non-public virtual function. (It’s not completely trivial, but it’s not that hard, either.) Furthermore, it’s not like starting with a public non-virtual interface is a panacea: in particular, if you want to change from an implementation with a single virtual function into an implementation with multiple virtual functions (creating a Template Method), then the fact that you started from a non-virtual public interface won’t help you at all.

The place where I’d seen this before is the IOStreams library. I suppose it makes sense there – the authors of the standard want to impose as few constraints as possible on implementors, so this fits into that vein: e.g. it makes it possible for people to ship versions of the library that instrument various calls. And, in general, the less control you have of your subclasses, the better job you have to do of guessing where to put your virtual functions, because it may not be possible to carry out arbitrary refactorings; this is a technique that can help. (Though, as I said above, I’m not sure how much it helps, given the non-Template Method to Template Method conversion example that I mentioned.) But, in general, I’m not convinced.

Still, as Sutter says, it’s so easy to do things the way he recommends that you might as well always do it, given that there can be benefits at times. There’s certainly something to that – I would never dream of having non-private member variables, after all, so even in situations where I really am keeping around data that I allow the user to read and write, I will do that via public member functions. (But I don’t want to stretch that analogy too far – I’m rarely in a situation where a class provides simple functions to read/write a member. Or rather, I’m in that situation more often than I’d like at work, but that’s because we do things wrong at work, not because it’s a good idea.)

For the time being, I’m keeping an open mind on the issue. Even if I were convinced by Sutter’s arguments, I doubt I’d adopt his recommendation immediately, because that would require changing our coding conventions at work, and I don’t think this is an important enough issue to require a change of existing conventions.

On completely different topics:

  • I was playing Paper Mario 2 (about which I will write multiple posts later) over the weekend, when Miranda started humming the main Katamari Damashii theme, for no particular reason. So I spent the rest of the weekend and much of today singing/whistling/humming it myself.
  • I don’t know what triggered it, but I’ve suddenly started getting a new piece of blog spam about every 5 minutes. Sigh. Fortunately, they’re getting intercepted and dumped into the moderation queue, but it’s still a pain.

followups

November 11th, 2004

We bought one of the tables today – a lovely dark purple rosewood one from a Chinese furniture store. So now we’ll actually be able to have more than two people over for a meal.

And the piano was looked after today. It really does look like the piano guy from the store was incompetent – the keys were sticking for two reasons, one of which was a normal occurrence during the breaking-in period but one of which was due to shoddy prep work. And it looks like he screwed up fixing the way one of the notes clicks, too. (Though I won’t count that one as fixed until a couple of days go by, since I’ve seen it “fixed” twice so far only to come back the next day.) Hopefully the guy today really did know what he was talking about; I’m optimistic.

prisons

November 11th, 2004

I just finished Are Prisons Obsolete?, by Angela Davis, and it reminded me how screwed up we are about prisons. I read an article (in The Progressive?) a few years ago which said that the US locked up between 5 and 15 times as many people per capita as countries in Western Europe (admittedly, for most countries, the ratio was closer to 5x than 15x); the book says that the US has 5% of the world’s population but 20% of the world’s prisoners, which pretty much agrees with that figure. Locking up somebody is a really awful thing to do to them, and we’re supposed to go by the principle of innocent until proven guilty beyond a reasonable doubt, yet by the standards of civilized countries, only 20% of the people in our prisons actually deserve to be there.

And California is, of course, a leader in this. Apparently two thirds of our prisons were built in the 80’s and 90’s; we passed an incredibly draconian three strikes law a few years ago; and Gray Davis was not only bending over to do whatever the prison industry wanted, he did whatever he could to lock up people and throw away the key, universally overturning parole board recommendations to let prisoners out. At least Arnold is sometimes letting people out of prison when the parole board recommends it.

And it’s not just the numbers of people we lock up: we do everything we can to dehumanize prisoners once they’re in jail. (And then act shocked, shocked when something like Abu Ghraib comes up.) Huge overuse of isolation, guards treating prisoners like dirt, guards allowing prisoners to abuse other prisoners, setting up the worst sort of Lord of the Flies scenario.

And there’s no reason for any of this. It doesn’t make any sense at all morally: you should never treat people badly just because you have the power, or just for revenge. It doesn’t make any sense pragmatically: the only possible moral justification for prisons is to reduce crime, but if you want to reduce recidivism, doesn’t it make a lot more sense to model good behavior, to improve socialization, to improve education levels?

And then there’s prison labor. I really believe that most people in this country don’t want to be gratuitously mean; but there are so many ways that people can make a lot of money off of prisons, whether in constructing them, running them, or getting slave labor out of them, which skews the debate horribly. Not to mention second-order sources of profit: we want to control the governments of third-world nations and sell lots of weapons, so we invent a war on drugs, which we have to justify by locking up drug users and drug dealers in huge numbers.

Sigh. I’m sure, though, that the Christian fundamentalists in power will improve the situation soon – they so clearly take turning the other cheek as words to live by.

gay divorced tables

November 6th, 2004

We went to see The Gay Divorcee on Wednesday. (The Stanford Theatre is showing a bunch of Fred Astaire / Ginger Rogers movies over the next couple of months.) I’d forgotten that old movies are in a 4:3 aspect ratio, or something close to that – not nearly as horizontal as modern movies. A couple of nice songs and some nice dancing, but I was hoping for more. And the Continental sequence really was over the top – not as bad as, say, a dream sequence from a Gene Kelly movie, but still…

We’re going table-shopping this weekend – now that there are three of us and we have friends with kids, our little round table just isn’t quite up to the chore. We looked at various options today, and found four reasonable options, but all had their flaws – not having a leaf, being more expensive than we wanted, needing a bit more care than is ideal, or in a style that didn’t thrill one of us. I’m sure we’ll buy one of them, but I’m still not sure which. I am glad that we finally got around to going to my namesake store, Carlton Arts and Design in downtown Mountain View – I’ve walked past it dozens of times without going in, not being very interested in the stuff in front (large porcelain vases), but there’s some pretty neat stuff in there. Miranda loved all the chests with their little drawers, there was a very simple vase that I liked a lot, if I were a sake drinker at home I would certainly buy one of their sake sets, and there were some quite nice tables. On the expensive side, but they were better constructed than the others we were considering, so we may well end up getting one of them.

election night

November 2nd, 2004

It’s election night. And what a depressing campaign it has been. I voted for the Green candidate for president (David Cobb), but if California had been close, I would have voted for Kerry: I don’t like him at all, but Bush’s team is evil.

No matter what happens, I’m going to feel guilty: I’ve been almost completely politically inactive since moving to California. Even if Kerry wins, political discourse continues to move to the right. Democrats keep on positioning themselves as slightly better than Republicans; I can’t stand that, but I also haven’t been doing anything to change that.

Which raises the question: what should I be doing? I’m completely out of touch with the local political scene, so figuring out what’s going on here is the obvious first step. Maybe there are local issues that I’ll get motivated by; it would be nice if the local work were to fit into larger strategic themes, though.

I’m not sure exactly what strategic themes I think will be most effective, though. One thing that intrigued me about the election this year was California’s Proposition 62, which was trying to recast our primary systems: all candidates from all parties would be on the primary ballot together, and the top two vote-getters would go on to the main election, even if they were both from the same party. It sounds like a good idea to me: I can’t say I like political parties to begin with, at least with the winner-take-all system we have in this country, and I’m sure that, in a lot of races in California, we’d end up with a more liberal Democrat running against a more conservative Democrat, which I’d much rather see than a Democrat against a Republican. Having said that, I don’t think the proposition has much of a chance – it’s not in the news much, and propositions like this don’t seem to win without making a bit of a splash in the process.

What happened to the full public financing movement? It looked like it was going strong 6 years ago, but I haven’t heard much from it recently. Sounded like a good idea. (Don’t get me wrong, I’m happy to support other things than changing election machinery, but it does seem a good way to get bang for your buck.)

Of course, politics isn’t the only thing that I wish I were finding time for…

refactoring milky video games

October 31st, 2004

My house has been struck by plague. On Tuesday, Liesl stayed home; on Wednesday, Liesl and Miranda stayed home; on Thursday, Liesl, Miranda and I all stayed home; on Friday, there wasn’t anybody else to pass the cold to (the dogs don’t get colds from us, fortunately, and they stay home all the time anyways), but we weren’t getting any better, either. Fortunately, Miranda was more or less on the mend by Saturday, and I think Liesl and I will be able to go to work on Monday. We’ll see.

Does having a cold make you more sensitive to the smell and taste of milk? Because either I’m having incredibly bad luck, or every store in Mountain View is full of spoiled milk, or something. At first I thought something was wrong with the temperature control in our fridge, and that may actually be the case, but milk is now smelling strange to me as soon as I bring it home. And it’s not just me – Miranda has been complaining about the taste, too. Maybe it’s a recent switch from 2% to 1% milk – does milk with a lower fat content go bad faster?

One of the benefits of staying home is that I’ve had lots of time to play games. In my last entry, I talked about how wonderful Katamari Damashii is. And I just want to emphasize that further; I’ve been playing through the game again, and it (and its music) really is fabulous. In fact, I was going to say that it’s the second best video game I’ve played this generation, after Metroid Prime, but looking at my shelves, that’s an exaggeration. It probably is the best video game I’ve played in the last two years, but all that says is how front-loaded this generation is. Hmm: Are generations always front-loaded? I guess I would believe that as a general rule: in the first two years, you see new gameplay ideas that wouldn’t have been possible in previous generations of hardware, while the three years after that give you games that are largely refinements of ideas that you’ve seen before. So early in this generation we got Metroid Prime, Animal Crossing, the Resident Evil remake, ICO, Golden Sun, Advance Wars, and the beautiful short life of the Dreamcast (Shenmue, Jet Grind Radio, Space Channel 5, Soul Calibur). While now the best games, while often quite good (Prince of Persia, SSX 3), usually don’t excite me as much.

I’m finally getting to the end of my first big refactoring project at work. Two or three months ago, I inherited some badly written, completely untested code, so my top priority was to clean it up, so I’d be able to fix bugs in it and adapt it for new situations. It’s been quite an educational experience. One thing that I learned from my last refactoring project is that, if you start your refactoring at the top, then it will probably be very helpful in understanding the structure of the code, but it won’t make it any easier to test – you’ll still be left with big classes, albeit better-structured ones, and writing unit tests for big classes sucks. So I was happy a month and a half ago when I found a couple of nice little classes that I could split off from this code and write unit tests for. And, while doing that, I noticed some bugs in the code, bugs I never would have found without cleaning it up like that. This week I finally got to a situation where the main class involved in this program is small enough for me to test.

Except that the tests were too simple. Which points at a way in which I was misunderstanding the program. This program doesn’t actually do anything itself – it just takes user requests and passes them off to other programs to carry out the actual work. And there are only a couple of important user requests, so it turns out that there are two unit tests at the core of this program. But, while it’s a small program, it’s not that small. And, when I look at the code, I see that the real story is the error cases. If you’re dealing with a program that just talks to lots of other programs, then the question isn’t what happens if everything goes right: the question is what happens if you get queries or responses in an unexpected order, what happens if one of the other programs goes down, what happens if this program goes down and has to try to recover its state.
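Just to illustrate the shape of the tests I have in mind, here’s a purely hypothetical sketch (nothing like the real program’s interfaces) of a coordinator exercised against a fake peer:

    #include <cassert>
    #include <deque>
    #include <string>

    // Purely hypothetical: a coordinator that forwards user requests to another
    // program, tested against a fake peer so that responses can arrive in an
    // unexpected order. None of these names correspond to the real program.
    struct FakePeer {
        std::deque<std::string> sent;
        void send(const std::string& message) { sent.push_back(message); }
    };

    class Coordinator {
    public:
        explicit Coordinator(FakePeer& peer) : peer_(peer), awaitingReply_(false) {}

        void handleUserRequest(const std::string& request) {
            peer_.send(request);
            awaitingReply_ = true;
        }

        // Returns false (and ignores the message) if no reply was expected.
        bool handlePeerReply(const std::string& /*reply*/) {
            if (!awaitingReply_) {
                return false;
            }
            awaitingReply_ = false;
            return true;
        }

    private:
        FakePeer& peer_;
        bool awaitingReply_;
    };

    int main() {
        FakePeer peer;
        Coordinator coordinator(peer);

        // The "everything goes right" test is almost trivial...
        coordinator.handleUserRequest("start");
        assert(peer.sent.size() == 1);
        assert(coordinator.handlePeerReply("ok"));

        // ...so the interesting tests are the error cases, such as a reply
        // arriving when none was expected.
        assert(!coordinator.handlePeerReply("stale reply"));
        return 0;
    }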

So I’ll be spending the next two weeks refactoring code with that aspect of it in mind (I’ve already done most of the relevant refactoring, of course) and bringing it under test. And then I’ll move on to something else – there’s another program that I’m in charge of that is starting to cause problems and need refactoring, or maybe I’ll have to refit this program with different external interfaces. (Which will be a lot easier now: that’s the whole point of refactoring, after all, and developments in our business plan make me think that I’m getting this done just in time.)

I really love the way that code development via refactoring works. I didn’t try to understand the whole program at once and come up with a better way to do things. Instead, I started by looking for places in the code that I knew I didn’t like the look of (or the smell of, as the refactoring book puts it), and improving them. Every few days, I’d get sick of fixing some particular thing over and over (or, worse yet, not fixing it), and at that point I’d have enough of an understanding of the problem that I could usually come up with a general solution. Usually the solutions (whether specific or general) weren’t perfect; I’d add a FIXME comment, and a couple of weeks later the surrounding code would have changed enough that the FIXME comment could be dealt with much more easily. For a long time, I had a hard time telling how much progress I was making (and, in particular, when I’d be done), but all of a sudden this week I looked up and realized that the bad smells had greatly receded: they were still there in the fringes of the program, but a strong core had appeared, one which will serve us well going forward.

katamari damashii

October 24th, 2004

I take back everything good I said about Manny’s fielding. Anyways:

Today’s game is Katamari Damashii. (Or Katamari Damacy, as the US version is spelled, though they pronounce it the same way.) It has one, very simple, idea: you roll a ball around, and items that are small enough stick to the ball. As more and more items stick to the ball, it gets bigger, enabling you to pick up more items.

It’s impressive how much mileage it gets out of this idea. Early levels are pretty straightforward: you start as a 5cm ball, say, that can pick up thumbtacks but still gets chased by mice; over the course of the few minutes that the level gives you, you get to where you can pick up those mice and other larger items (books, fruit). And the level is divided up into areas, some of which have no barrier between them, but some of which have a sign that you can only pick up if you’re as large as the height marked on the sign, so the level effectively expands as you get larger and larger.

In later levels, you’re given more time and usually start somewhat larger; instead of picking up thumbtacks and worrying about mice, you start, say, by picking up bananas and worrying about people. And then picking up people, and eventually picking up trees, cars, … By the end, the scope of the game is truly remarkable: the last level starts you off at one meter, and when I finished I was more than half a kilometer in size. You interact with a portion of a level in a completely different way when you’re 1 meter tall versus 3 meters tall (hmm, can I pick up cars now?) versus 10 meters tall (hey, those elephants look promising, and I can get small buildings now) versus 30 meters tall (skyscrapers, giant squid) versus 100 meters tall (oh, they hid some even more giant squid over here; and I can pick up small islands now) versus 300 meters tall (clouds beware). I’m curious how they implemented this, actually – I tend to assume that, when you’re 300 meters tall, they no longer bother modeling bananas, while when you’re 1 meter tall, they don’t bother modeling far-off areas that you won’t be able to get to until you’re lots larger. But such transitions, if they’re there, are completely seamless.

The music is fabulous, too. For a while, I wondered “do I just like any sort of pop music as long as it’s in Japanese?”, but the truth is that the music is great, and I loved the song in English as much as the ones in Japanese. The control scheme works pretty well – they used a two-joystick scheme instead of a single joystick + buttons scheme, and while either way would probably be equally expressive, it is nice to see a game that uses dual joysticks effectively.

It is, of course, a short game – even the best one-trick ponies can only be dragged out so long, after all. (Though I am actually planning to go back and replay some of the stages; they’re fun, and a look at gamefaqs.com suggests that eventually I’ll be able to unlock time-limit-free versions of stages.) But it is priced accordingly: it’s the best 20 bucks I’ve spent on a video game in quite some time.

infinite justice

October 22nd, 2004

I just read Arundhati Roy’s article “The algebra of infinite justice” in her very good and very depressing Power Politics. It was written on September 29, 2001; I’d forgotten that “Operation Infinite Justice” was once the code name for our war on Afghanistan. Anyways, I was struck by the following quote in the article:

The trouble is that once America goes off to war, it can’t very well return without having fought one. If it doesn’t find its enemy, for the sake of the enraged folks back home, it will have to manufacture one. Once war begins, it will develop a momentum, a logic and a justification of its own, and we’ll lose sight of why it’s being fought in the first place.

My first reaction was: right on, look at the mess we’ve gotten into in Iraq. The thing is, though, the truth is even worse than this quote envisions. Yes, the war in Afghanistan didn’t end up finding our enemy; yes, the war in Iraq was manufactured. But the war in Iraq wasn’t manufactured in order to appease “folks back home” demanding a war: instead, Bush had his sights set on Iraq before September 11. There never was a connection between the two: September 11 simply gave Bush an excuse to fight the war he really wanted.

Sigh.

blog spam

October 21st, 2004

I learned about blog spam today: a spammer tried to add comments to most of the posts here. The bodies of the comments were random quotes; the goal is to get people to click on the spammer’s name, which linked to a web site. Actually, that may not be the main goal: apparently blog spammers mainly use this as a way to try to increase their rank in search engines.

Fortunately, it turns out that WordPress has a way to say “flag a comment as needing approval if it contains one of these words”. So I just entered the web site in that list, and now the spam gets blocked. Yay. But it will be a pain if this sort of thing keeps on happening.

playoffs, round two

October 19th, 2004

Imagine: last Friday, I thought the playoffs were getting boring. 2-0 leads in both series, no reason to think that either underdog had much of a chance. When driving to the restaurant on Saturday, the Red Sox had taken a lead, so I was starting to perk up, but by the time the meal was over, they’d given up 13 runs, with more to come. (The dinner was at the excellent Shiki Sushi, which we went to for the first time. I’m not sure the sushi is quite as good as that at Sushitomi, but that’s a very high standard, and the non-sushi parts of the menu are much more varied. With monthly specials that, this month, included three matsutake mushroom specials and a monkfish liver special. Very friendly staff, nice ambience. It won’t replace Sushitomi as our Japanese restaurant of choice, but if they were both equally far away from our house, we’d probably go to both equally often.)

And then Sunday looked like more of the same – the Red Sox took a lead, the Yankees took the lead back, and that was that. Except it wasn’t that – the Red Sox tied it against Rivera in the ninth, and won in the twelfth. But I still wasn’t convinced; at work on Monday, I saw (well, “saw” – I just was watching icons on a Java applet) the Red Sox take the lead, I saw the Red Sox lose the lead, I thought it was probably over. But then, just as I was leaving work, the Red Sox tied it up; by this time, I was starting to learn my lesson, and was looking forward to listening to them win on my ride home.

The game did not, however, finish on my ride home, or even two hours after my ride home. I won’t go through the blow-by-blow, except to say that Tim Wakefield is one of my favorite Red Sox. The thing is, though, that apparently wasn’t even the best game played yesterday – by all accounts, the Astros / Cardinals game, which basically started and finished while the Red Sox and Yankees were in extra innings, was an absolute classic. Great pitching, of course, great defense (I had no idea Carlos Beltran was such a good fielder), and as dramatic a finish as one could hope for.

And, as I type this, the Red Sox just forced game 7. I never ever expected Curt Schilling to appear again in the postseason; and what an appearance! Nice to see the umps get calls right, too – I actually kind of expected them to get the home run call right, but I’m really impressed that they got the other play right, where Alex Rodriguez knocked the ball out of Arroyo’s hand. I can’t wait until tomorrow…

Side note: I’m really glad baseball switched to an unbalanced schedule last year. The AL West ended in as exciting a manner as possible, with Texas, Anaheim, and Oakland slugging it out (though I would have preferred a different outcome, of course), and, as I learned on the radio today, the Red Sox and Yankees’ game today was their 51st meeting in the last two years. (Splitting the previous fifty meetings equally.) That’s the way it should be.

mplayer

October 15th, 2004

One of the reasons I upgraded my computer a month or two ago was to make it a little easier to install new software. Like, for example, mplayer. This is a video player for Linux, and it’s great! I actually learned about it first at work: I needed to look at some movies, and one of my coworkers recommended it to me. I didn’t feel like compiling it myself for Red Hat 8, so I found a spare Windows machine, assuming that one of the standard video players there would handle the format that I needed. Neither Windows Media Player nor the Quicktime player could, though; I found a free player that could play some of them, but not all of them.

But mplayer can. Well, mostly – I sometimes run into a bug (which one can work around) involving, as far as I can tell, movie files that are more than a few gigabytes in size. But that’s not a big deal, and I’m really happy with it on my home computer: I can finally watch Quicktime movies again. Back when video game web sites were new, they posted movies encoded with Quicktime 2; that was an open format, so there were Linux players for it. Quicktime 3, however, used the proprietary Sorenson codec, so for years I couldn’t download videos of games in action. But Quicktime 4 uses MPEG 4, and mplayer handles it just fine. So I can, say, watch a video review of Donkey Konga and decide that, as much as I want to support video games with conga drum controllers, I really don’t think I’d enjoy it too much. (The list of songs really doesn’t appeal to me.) Or if I could only look at screenshots of Okami, I might be curious about the game, but it looks amazing in videos. (Hmm: I wonder if the “okami = wolf / large god” wordplay informs Princess Mononoke as well?)

pikmin 2

October 12th, 2004

I intend to post thoughts here on video games that I finish playing. These aren’t intended to be reviews or explanations of the game, and in particular may not make much sense to readers who haven’t played other games discussed in them.

Today’s game is Pikmin 2. A sequel to Pikmin, obviously, which is a sort of real-time-strategy game with puzzle and adventure elements. RTS is a genre that I have played almost not at all, largely because it’s an almost exclusively computer-only genre that didn’t appear until I switched over to console video games. Which is a shame, if for no other reason than that it’s a hole in my education. And I suspect that I would enjoy the genre – I like non-real-time strategy games, after all. Then again, I like to spend time thinking about what I’m doing during strategy games, so maybe I would find RTS’s more frustrating than enjoyable. Also, I can’t use a mouse on a computer (because it kills my hands), and I suspect the genre wouldn’t work well with a touchpad. If I didn’t have a laptop at home, I would consider switching over to a trackball; that would probably work well as an RTS input device, and other people I know with RSI problems say that trackballs work okay.

Anyways, back to the game. It’s very similar to Pikmin, with only minor gameplay tweaks. The main changes are:

  • There’s no 30-day time limit.
  • There are two new colors of pikmin: white and purple. (Along with another kind of pikmin-like creature, the confused bulborb, that you sometimes control in caves.)
  • Much of your time is spent in the aforementioned caves: there’s about as much to do above ground as in the previous game (maybe a bit less to do, actually), but the gameplay is extended by storing lots of the treasure in caves.
  • There are two characters to control instead of one.

There are some minor changes, too: yellow pikmin no longer carry bombs but are, instead, resistant to electricity; the treasures held by boss monsters at the bottoms of caves usually provide slight upgrades to your leaders; there are a couple of juices you can use to either power up your pikmin or freeze monsters.

So: is it an improvement? Getting rid of the 30-day limit is, on the whole, an improvement, but only a slight one. When playing through the first game, I made it about a third of the way through, decided that I wasn’t making progress fast enough, and started over, knowing the game better and not saving at the end of a day unless I’d made enough progress. So I had to play several days a couple of times, but on the whole it wasn’t particularly tedious.

The thing is, while getting rid of that resource limitation, they added a new resource limitation to this game. Lots of the game takes place in caves; while in a cave, you don’t get new pikmin (at least through normal mechanisms), so, assuming you don’t want to play through a cave more than once, you have to restart a floor if you’ve lost too many pikmin. And, to make matters worse, there’s no convenient mechanism to restart a floor (or to not save your progress when going down a floor if you’re not sure if you’ve lost too many pikmin): you have to exit the cave, be sure to select that you don’t want to save, then end the day, again being sure to select that you don’t want to save, and only then can you reload. Which is just tedious, and it’s silly for game designers to get stuff like this wrong: I don’t mind gameplay elements that require you to play them over and over – indeed, it would be a little unsatisfying if, say, boss battles weren’t hard enough to make you go through them more than once before succeeding – but if you’re going to do that, don’t design the user interface to make restarting gratuitously difficult.

Another gripe that I have about the caves is that they tilt the balance of the game more towards fighting than in the original. I’m not sure that this game has fewer treasures that require clever thinking before you can recover them, but it’s certainly the case that it has many more treasures that only require fighting to recover them. Along these lines, it’s a pity that using both your leaders isn’t required for more of the puzzles in the game; sometimes, it’s used well, but much of the time the second leader is irrelevant to the action. The extra flavors of pikmin are occasionally cleverly used in puzzles, but honestly it feels like they were added mostly because a sequel has to have something “new” about it. (And to provide another somewhat gratuitous resource limitation, since it’s much harder to get new purple and white pikmin than the other colors.)

Don’t get me wrong – it’s a solid game. There are too many bosses to fight, but usually I enjoyed the boss battles. Another little thing the game did well is to provide multiple satisfactory ending points: you can finish the main goal of the game after playing only about half of it, so if you want to stop there, you’ll still feel like you’ve accomplished something. It looks like, in fact, there are two goals after that, to find Louie again and to recover all the treasures (and discover all the monsters, if you wish). I say “looks like”, however, because I decided to play through the remainder of the game in such a way as to leave Louie to the end; I have one and a half caves left, but unfortunately the cave I’m fighting through now is one that exists only to stretch out the length of gameplay. Which is, actually, fine with me – if I were on a limited income, I would appreciate mechanisms for stretching out the gameplay – but, as it is, I’m taking a break for now to play other games. (I might get back to this game; if so, though, I’ll probably exit the cave I’m in halfway through, and try to do the other cave with Louie at the bottom instead.) And Miranda likes it; she’s played through the first few days, though it gets a bit too tricky for her once she can’t avoid combat any more. The first game was solidly designed, enough so to easily support a sequel.

Minor infelicity: bud pikmin really are pointless – with normal gameplay, you’ll never create them after your first few days in the game. They’re not hurting anything, so it’s fine that they’re still there in the sequel, but they were a mistake, albeit a slight one.

mpeg: interlacing

October 9th, 2004

Here’s another fun (?) aspect of MPEG. As mentioned last post, TV (at least in the US) isn’t really 30 frames per second: it’s 60 half-frames per second, where half of the time it shows the odd-numbered lines and half of the time it shows the even-numbered lines. (This is called interlacing.) Since MPEG both wants to play on TVs and wants to be able to encode TV programs, it of course has methods for encoding interlaced content. There are two basic ways to do this: in one, called field pictures, each picture only includes half the data. In the other, called frame pictures, each picture includes complete data, but you can indicate that you want the two halves of the picture displayed at different times, and (if so) which half to display first. (Or you can indicate that you don’t want the two halves displayed at different times; a good TV will support this.) In my experience, frame pictures are used much more frequently than field pictures; I’m not sure why (and, for that matter, I also don’t know if my experience is representative).

But that’s not all there is to the story. Recall that movie content is 24 frames per second. So what’s the best way to play this on a TV? If you think of a TV as being 30 frames per second, then you should probably just display every fourth frame twice. But if you think of a TV as being 60 half-frames per second, then it gets more interesting: you want to divide 24 full frames into 60 half-frames. So, on average, you want to spread each full frame of the movie over 2.5 half-frames on your TV. Now, of course, you can’t actually do that, but maybe you can have the movie frames alternate between taking 3 half-frames and 2 half-frames.

And MPEG has a bit just to support this: repeat_first_field. If this bit is set on a (full) frame, then the first field of the frame gets played twice. So frames are encoded like this:

  1. repeat_first_field set, top_field_first set: display this movie frame as Top, Bottom, Top.
  2. repeat_first_field not set, top_field_first not set: display this movie frame as Bottom, Top.
  3. repeat_first_field set, top_field_first not set: display this movie frame as Bottom, Top, Bottom.
  4. repeat_first_field not set, top_field_first set: display this movie frame as Top, Bottom.
  5. Start over.

This is known as 3-2 pulldown. DVD players and TVs will advertise themselves as having special circuitry to detect this case, so they can display it as smoothly as possible.
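As a toy sketch of how a player might turn those flags into displayed fields (my own illustration, not anything from the standard), the cadence from the list above comes out like this:

    #include <iostream>
    #include <vector>

    // Toy illustration of expanding coded frames into displayed fields using the
    // two flags discussed above; a real decoder is considerably more involved.
    enum Field { TOP, BOTTOM };

    struct CodedFrame {
        bool topFieldFirst;
        bool repeatFirstField;
    };

    std::vector<Field> fieldsFor(const CodedFrame& frame) {
        const Field first = frame.topFieldFirst ? TOP : BOTTOM;
        const Field second = frame.topFieldFirst ? BOTTOM : TOP;
        std::vector<Field> fields;
        fields.push_back(first);
        fields.push_back(second);
        if (frame.repeatFirstField) {
            fields.push_back(first);  // the extra half-frame
        }
        return fields;
    }

    int main() {
        // The 3-2 pulldown cadence: 3, 2, 3, 2 fields, turning four 24fps film
        // frames into ten 60Hz fields.
        CodedFrame cadence[4] = {
            {true, true},    // Top, Bottom, Top
            {false, false},  // Bottom, Top
            {false, true},   // Bottom, Top, Bottom
            {true, false},   // Top, Bottom
        };
        for (int i = 0; i < 4; ++i) {
            std::vector<Field> fields = fieldsFor(cadence[i]);
            for (std::vector<Field>::size_type j = 0; j < fields.size(); ++j) {
                std::cout << (fields[j] == TOP ? "T " : "B ");
            }
            std::cout << "\n";
        }
        return 0;
    }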

Of course, it’s still more complicated than this – you can control interlacing both via the progressive_frame bit (in one header) and via the progressive_sequence bit (in another header), and the latter changes the meanings of various other bits. Lots to learn about.

mpeg: frame rates

October 5th, 2004

I’ve been reading through the MPEG 2 standard recently, specifically the video parts. It’s pretty interesting, in a weird way; makes me want to go out and write an MPEG decoder, just to experience that particular sort of complexity, except that doing so would take years with no real reward at the end. I’m used to standards that are either much more straightforward to understand or, if they aren’t (C++), are complicated because they want to expand the realm of what’s theoretically possible, pushing that expansion into areas that aren’t well understood.

MPEG is different. MPEG just wants to be able to paint pixels on the screen, and have those pixels change over time. It is, of course, trivial to come up with a format that allows you to describe this. But you need the resulting files to be small enough to fit on disks using current manufacturing technology, or to be streamed over the air using currently available transmission technologies. You need them to be able to be decompressed quickly using current hardware (which can, admittedly, be custom-made for the purpose). You need them to play on current TV’s, not just current computer monitors. And you need them to be good at compressing certain specific kinds of content, like movies and TV shows. I am not used to formats in which all these challenges come together this way.

Take frame rate, for example. If I were designing a format, I’d just say “make it an integer, the number of frames per second”, and if pressed, my first reaction would be “computers usually use 32 bits these days, so let’s go with that”. But of course we don’t need movies that contain 4 billion frames per second, so I would admit that only 16 bits, or even a lowly byte would probably be adequate.

But, actually, MPEG uses (if I’m remembering correctly) 4 bits for this. They don’t do this by limiting the frame rate to 16 frames per second – instead, they only allow a few specific frame rates, so 0001 stands for a certain frame rate, 0010 for another, etc. MPEG cares about TV, so one of the allowed frame rates is 30 frames per second. Except that TV actually shows 60 half-frames per second (because of interlacing), so MPEG also allows 60 frames per second. And MPEG cares about encoding movies, which are at 24 frames per second, so that’s another value. And it’s a world-wide standard, while the 60 half-frames per second is an NTSC thing, so it allows 50 and 25 frames per second for PAL content.

But that’s not the whole story. In fact, TV isn’t 30 (or 60) frames per second – it’s 29.97, more or less. The story here is that TV used to be black and white, of course, so only brightness (“luminance” is the term used) was encoded. Then, to add color, they stole a little bit of bandwidth by (among other things, I think) squeezing the frame rate ever so slightly, to 29.97 (or 30,000 / 1001, I think) frames per second. (This also means that TVs expect much less color data than luminance data, which also leaves relics in the MPEG standard, but never mind that.) So MPEG can be at exactly 24, 30, or 60 frames per second, but it can also be at 23.976, 29.97, or 59.94 frames per second. But for 25 or 50 frames per second, only the exact values are allowed, because the historical reasons only apply to NTSC, not PAL.
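For concreteness, here’s the frame_rate_code table as I remember it, in code form; double-check the standard before trusting the exact values:

    #include <stdexcept>

    // The MPEG-2 frame_rate_code table as I remember it; worth double-checking
    // against the standard before relying on the exact values.
    struct FrameRate {
        unsigned numerator;
        unsigned denominator;
    };

    FrameRate frameRateFor(unsigned frameRateCode) {
        FrameRate rate = {0, 1};
        switch (frameRateCode) {
            case 1: rate.numerator = 24000; rate.denominator = 1001; break;  // ~23.976 (film on NTSC)
            case 2: rate.numerator = 24;    break;                           // film
            case 3: rate.numerator = 25;    break;                           // PAL
            case 4: rate.numerator = 30000; rate.denominator = 1001; break;  // ~29.97 (NTSC)
            case 5: rate.numerator = 30;    break;
            case 6: rate.numerator = 50;    break;                           // PAL field rate
            case 7: rate.numerator = 60000; rate.denominator = 1001; break;  // ~59.94 (NTSC field rate)
            case 8: rate.numerator = 60;    break;
            default: throw std::invalid_argument("reserved or forbidden frame_rate_code");
        }
        return rate;
    }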

(Side note: Manny! Good start to the playoffs.)

no playoffs

October 2nd, 2004

The A’s neglected to make the playoffs this year. They just had to win the series this weekend with the Angels; they’ve already lost the first two games.

I never seriously expected this to happen, though admittedly the A’s haven’t been overwhelming this year. Their bullpen started the season badly (I always expect Billy Beane to steal cheap bullpen help, but he didn’t manage that this off-season), and while it stabilized, it never was dominant, and had some noticeable failures at the end of the season. (Today, for example.) The offense has gotten surprisingly decent contributions from many people; it wasn’t stellar, but it was at least respectable, which was better than I’d expected.

The starting pitching was the big surprise this year. At the beginning of the season, Zito was awful and Harden was bad, but actually that didn’t surprise me too much; Hudson and Mulder were excellent, and Redman was decent. And Zito and Harden picked it up nicely towards the second half of the season, especially Harden.

But Hudson was somewhat off towards the end of the season, and Mulder was downright awful. The A’s don’t admit it, but I have to assume Mulder was hurt. With that, their starting pitching was mediocre, and they didn’t have any real strengths (except maybe their defense), and it’s hard to make the playoffs that way.

Wait till next year, I guess. Probably several hitters will regress; Crosby and Chavez could both easily get better, though, and Harden and (maybe) Blanton could contribute to the starting rotation. Right now I’m not too optimistic that the A’s will get much better next year, but I don’t see why they should be much worse. And maybe Billy Beane will do something clever this off-season.

Go Red Sox, I guess. And maybe the Giants will make the playoffs and I can root for them, too.