bully

January 8th, 2007

Bully is the latest free-roaming game from Rockstar, the makers of the Grand Theft Auto series.

A quite pleasant experience. Lower-key in many ways compared to the only exemplar of the latter that I’ve had the pleasure of playing. A smaller environment, not as long a game, much less over the top in terms of violence. Nary a car-jacking to be found; you might be able to steal a bike, but you have your own parked in garages, and running and skateboarding are both quite workable forms of transportation. Not by any means a paean to sweetness and light – no guns, but a fair amount of punching, and while your character isn’t as ruthless in his search for power as the GTA:SA protagonist, he doesn’t exactly shy away from the idea, either.

All of which has its plusses and minuses. I think I spent almost half a year going through GTA:SA; it was worth it, but that’s not the sort of time commitment I’d want to make very often. The map is a nice balance between having quite a lot of twists and turns to explore but still being entirely manageable if you have to get from one side to another. (No need here for the three cities division of the GTA:SA map.) Not quite as many gameplay options as GTA:SA; then again, some of the latter’s options (flight training) sucked. And being gun-free is more or less strictly a plus for me; I’ve been known to quite enjoy games that are full of shooting, but I’m quite happy to have my distance weapon usage be relatively rare and largely limited to the slingshot. (Though others make appearances – the potato gun, bottle rockets, water balloons.) Individual missions are, on the average, shorter and less challenging.

Having said that, it’s a much less rich game than GTA:SA, and the highs are definitely not as high. The plot is interesting enough but with less depth; there isn’t much in the way of interweaving story lines; the mission tree doesn’t have as many parallel branches; there’s nothing in the game that compares with, say, GTA:SA’s radio stations.

What else? Classes are a nice addition to the gameplay mechanic: pleasant challenges (at least once I figured out that, in shop class, it’s important to start doing what they say as quickly as possible) that are quite different from the regular missions, a nice leveling-up mechanism, and if you’re not in the mood, you should be able to play hooky without getting caught. And I ran out of classes at about the time that I got tired of them. I do wish that I’d taken part in more of the other non-mission challenges – the races are a bit boring (in particular, the bike races are way rubber-banded, to the extent that nothing but the last 30 seconds or so matters) – but many of the “do random things for strangers” tasks are pleasantly fun. (I’d avoided them since the early school tasks like that seemed both boring and (at times) gratuitously mean.) Nice to try to fill up the yearbook with pictures. (But annoying since you don’t have a way to reliably learn other students’ names – who is this Justin person that I’m looking for?) Compared to GTA:SA, there aren’t particularly many important third-person characters, and they’re not drawn very richly; the flip side is that there’s a manageable number of fellow students (50 or so) whom you frequently encounter even while wandering through town, so there are always familiar yet distinctive faces around, as opposed to a mass of anonymous yet repetitive strangers.

Definitely a good choice; I’m quite pleased with it. I haven’t played enough free-roaming games to get tired of the genre (they’re expensive to develop, I suppose there aren’t that many out there), but even if I had, I suspect that I would like Bully. It may not be hugely ambitious, but it’s not excessively imitative, either: it has its style, it has its design choices, they work well.

curious about queueing theory

January 5th, 2007

Now that I’m seeing queues everywhere, I’m getting curious about both the underlying math and the underlying pragmatics. Take a highway, for example: say you want to get the most use out of one. What does that mean? I guess it means maximizing total throughput, or more specifically the car miles driven on the road in some time period. Take the limit as time goes to zero, and the instantaneous version is the sum of the speeds of the cars. Or: the number of cars times the average speed of the cars.

Question 1: Have I missed anything yet? I’m probably near the edge of missing something by doing a measurement at one instant in time: apparently a lot of the fun has to do with the distribution of entry times into queues. Let’s put that in the back of our head for now and continue.

So we’re worried about the number of cars times the average speed of the cars. The second factor sounds easy to deal with: floor it! What about the first factor? We have a fixed amount of road space (another simplification, but one I’m happy with for now); and the number of cars is the road space divided by the distance between (fronts of) cars. (We could separate “distance between fronts of cars” into “length of cars” plus “following distance”; for now, let’s not worry about the length of the cars themselves.) So now we want to floor it while tailgating! (In our Mini Coopers, if we’re worrying about lengths of cars.)

Which I would, actually, rather not do. Nor would all of my fellow drivers, though some don’t seem to mind. In general, the faster we go, the larger a following distance we like to maintain. So the two components are fighting against each other. (Good thing, otherwise this problem would be pretty boring.)

Question 2: What’s the relationship between driving speed and following distance for your average driver?

If your average driver’s brain is concerned with being able to react to events in a fixed amount of time, then following distance would vary linearly with speed. (So the throughput wouldn’t vary with the average speed, once you get dense enough!) If your average driver’s brain is concerned with being able to come to a complete stop, then the following distance actually varies quadratically with speed, so the slower the highway, the higher throughput. (I don’t really believe that, though, and here we do start having to worry about the lengths of the cars themselves.) In either case, I’m sure there are boundary effects. And brains are complicated things and developed in an environment where people normally travel at single-digit miles per hour, so probably neither model is particularly accurate.
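
To make the two models concrete, here’s a toy Ruby calculation of throughput under each following-distance rule; the reaction time, braking rate, and car length are numbers I made up purely for illustration.

    # Toy model: cars per hour past a point, given a speed and a rule for
    # following distance. All of the constants here are invented.
    CAR_LENGTH    = 15.0   # feet
    REACTION_TIME = 1.5    # seconds, for the fixed-reaction-time model
    BRAKING       = 20.0   # ft/s^2, for the complete-stop model

    def throughput(speed_mph, &following_distance)
      speed_fps = speed_mph * 5280.0 / 3600.0
      headway = CAR_LENGTH + following_distance.call(speed_fps)  # feet per car
      (speed_fps / headway) * 3600.0                             # cars per hour
    end

    linear    = lambda { |v| REACTION_TIME * v }       # distance grows linearly with speed
    quadratic = lambda { |v| v * v / (2.0 * BRAKING) } # distance needed to come to a complete stop

    [20, 40, 60, 80].each do |mph|
      printf("%2d mph: %5.0f cars/hr (linear), %5.0f cars/hr (quadratic)\n",
             mph, throughput(mph, &linear), throughput(mph, &quadratic))
    end

With those made-up constants, the linear model flattens out as speed rises (the car length stops mattering), while the quadratic model peaks at a fairly low speed and then falls off, which matches the hand-waving above.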

Now I really do want to start worrying about variations in the instantaneous behavior: now that my commute has me driving home on 101 at 5:45 pm, I can assure you that the speed of traffic varies from second to second. (Those annoying other people who also want to get onto the highway have something to do with this.) This raises so many questions that I don’t even know where to start:

Question 3: Where should I start when thinking about differences in traffic speed?

Let’s try: what affects variations in traffic speed? When traffic speed varies a lot, is there some sort of pattern in the chaos that ensues, or is every traffic mess different? What effects do variations in traffic speed have on throughput? In particular, what can we do to maximize throughput? What happens when we get close to maximum throughput?

And now we turn to my own behavior. I confess that there are times when I don’t maximize my speed, even when I could do so safely and legally. For example, if I see a slowdown ahead of me, I tend to take my foot off the gas and coast, rather than, say, first maintaining speed until I get nervous and then braking more sharply.

Question 4: Compared to a leadfoot, am I hurting, helping, or neither?

Not clear. I’m not affecting my average velocity: I’m decelerating more gradually than our hypothetical leadfoot, but you get the same total deceleration in either situation. Having said that, my position on the road is never ahead of where the leadfoot would be, so maybe I’m delaying not only myself but an entire column of cars behind me. That would be unfortunate.

On the other hand, while I don’t know what the causes and effects of variation in velocity are, I have a hard time believing that yo-yoing speeds really maximize throughput, or even don’t have a negative effect. So maybe, by providing some modest dampening effect on the system, I’m actually helping throughput? It would be nice, but I wish I could point to a concrete mechanism here, could present a model where my behavior helps instead of hurts.

Lots of questions I don’t understand.

Question 5: Any good books on the subject?

Question 6: Is this traffic situation a good analogy for any aspect of software development, or is the behavior of queues that I run into at work different from the behavior of queues that I run into on the way home from work? (Encounter on my way home, I should say – I try hard to not actually run into the queues on the road, because I’m quite sure that exchanging insurance information would not help throughput in any way.)

wii update

January 1st, 2007

I seem to have been letting Wii experiences build up; time for a dump. I’ll probably forget some things, but hopefully I’ve remembered most of the things I want to say.

  • I’m not planning to go out and buy more minigame compilations, but I’m definitely glad I got Rayman. The minigames vary in quality, but are almost all fun. Some are pleasantly wacky (I like the rabbit boy choir where you have to figure out which one is singing out of tune at any given point), and the on-rails shooters that are the end stage of most levels are really good. I’m not going to turn into a devotee of the genre, but the Wii controller works exceptionally well with it: just point and shoot. It’ll be interesting to see how well FPS’s (and variants) work on the machine; looking forward to Metroid next year. Another pleasant control experience there was reloading by shaking the Nunchuk; I’d been expecting to control games by pointing, by tilting, by swinging, but the idea that you could trigger an action by simply giving the controller a gentle shake had never crossed my mind, and it works very well.

    So much for the games from the point of view of gameplay; what really makes the collection, though, is the presentation. Hard to say what’s best: their varied use of plungers? The deftness with which they wield a feather duster? Their screams and excellent dance moves (too many examples to post here)? I didn’t watch trailers for it before buying the game, but now I’m hooked on them.

  • We finally got more controllers, which Miranda, my father, and I put to good use with Wii Sports. Bowling was pleasant enough; a bit weird to be bowling without a weight at the end of your arm, but we liked it. Miranda got frustrated by golf, but I expect to go back and play it on my own. (I wish it were easier to tap the ball, though: gentle swings often don’t register at all.)

    And then we got to tennis; my oh my. There were three of us playing, so it set us up as doubles with player 1 controlling both characters on one side. Player 1 was Miranda, using the Zippy mii that she’d created. And that was really silly, really fun, I was grinning the whole time watching us swing wildly at the shots, dive across the screen, and basically act ridiculous. (We, fortunately, managed to avoid hitting each other with the controllers; we did hit furniture a few times, but not too hard.) A good time was had by all.

  • They’ve updated the system with a web browser and a weather channel. I’ve played with the former a bit; works pretty well for a web browser on a game console, but I’m not about to use it with any sort of regularity. The weather channel has some definite flaws: the current temperature is several hours old, there aren’t as many cities listed as I’d like, and the pictures aren’t so good. Having said that, I’ve wasted more time than I’d care to admit going to the globe view, spinning it around, looking at the weather in various places (Antarctica! But Anchorage is rather colder, at least this week: one day with a predicted high of -19F and low of -43F. Ouch.) And it’s not just me: Ravi and Alice were over a couple of days ago, too, and they were amused by the weather channel as well. I like the physics model that spinning the globe uses, too.
  • Other games I’ve played: Super Monkey Ball sucks: an amazing number of small flaws, awful awful music, and neither using tilt sensitivity instead of a joystick nor adding jumps is an improvement. Zelda is good but not wonderful.

ruby notes 5: sql libraries

January 1st, 2007

One of the things I need to do in Ruby is read and update data stored in an existing SQL database. Not wanting to reinvent the wheel, I thought I’d look at existing libraries that provide this functionality. The pickaxe book didn’t give anything useful, but I saved some posts in a newsgroup thread on the topic a couple of months ago, and looked through them.

The libraries that the thread referenced largely fell into two categories. Some of them, like Ruby MySQL, provide quite low-level access to a database. There are classes to wrap some basic constructs (result sets in particular), but you’re pretty much grubbing with the SQL code and data directly.

Other libraries were polar opposites. The most extreme there was Active Record, from Ruby on Rails. In that library, not only is everything abstracted away into Ruby objects, but the Ruby objects themselves actually drive the database structure.

Active Record sounds great to me: everything is defined in one place, and what could be nicer than having refactorings in or changes to your code propagate themselves to your database layer? It is famously known as “opinionated programming”, meaning that its author has ideas about how to structure your database, and if you don’t agree with those ideas, the library isn’t for you. I have no problem with that: I approve of prioritizing clean code over unneeded generality, and I have no reason to believe that I would disagree with the author’s opinions, were I better informed.

Having said that, I decided not to go with Active Record for now. For one thing, I have an existing database schema that I don’t want to modify right now. I don’t know for sure if Active Record would get along with it well, but I suspect that it wouldn’t, and I’m afraid that it would be easy to screw up my database (which has other users that I want to preserve for now) by going with Active Record. Also, I’m doing this for fun and for didactic reasons; reusing others’ code isn’t essential for either of those, and it wasn’t clear to me that going with Active Record would further those goals in this particular case. (I am keeping my ears out for new programming projects that would give me an excuse to use Rails, because I do want to learn about that at some point.)

I think that the level of abstraction that I really want is something more along the lines of Java’s JDBC. To be sure, there were many things about JDBC that annoyed me, but at least it was database-independent and had a few useful abstractions built on top of the bottom level. (Using updatable result sets to add new rows and modify existing rows, for example – something that, as far as I can tell, I would have to do with hand-crafted SQL code if I used the Ruby MySQL libraries.) I wouldn’t mind something a little more abstract than that, but I still have more to learn about SQL, and I’m in the mood to not have too much going on behind my back, so this level of abstraction seems appropriate.

Maybe there’s a Ruby library out there that can do what I want at that level of abstraction, but I didn’t see one. And, the more I thought about it, the more it seemed like fun to build a level of abstraction like that (narrowly tailored to my needs, of course) on top of a low level library. It seemed like the sort of project that might help me get my hands dirty with aspects of Ruby that I wouldn’t learn about by, say, implementing algorithms, and getting my hands dirty with SQL occasionally wouldn’t hurt, either. So I think I’m going with rolling my own.

One thing that bothers me, though, is being tied to MySQL by my choice of low-level library. That just seems wrong from a philosophical point of view; from a practical point of view, it could interfere with testing. One problem I’ve had when writing code interfacing with SQL is that it largely restricts me to acceptance tests instead of unit tests for certain kinds of code, because running tests takes so long. (Seconds instead of milliseconds.) The obvious solution to that would be to connect to a faster database (hopefully a purely in-memory one) for purposes of unit testing. So it would help a lot if there were some intermediate layer that gave me database-independence. I guess I should look into ODBC to provide that intermediary? And maybe SQLite can be the fast database that I need? I’ll do some research.

In the meantime, I’ve started writing some client code for my hypothetical SQL abstraction layer, to give me an idea about what sort of interface I’d like.
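
The flavor I’m aiming for is something like the following; every class and method name here is hypothetical, just me doodling an interface I’d like to program against, not anything that exists.

    # Hypothetical client code: none of these classes or methods exist yet.
    db = Dbcdb::Database.connect(:adapter => :mysql, :database => "dbcdb")

    db.table("books") do |books|
      # Iterate over matching rows as hashes, without hand-writing SELECTs.
      books.each(:author => "Geoff Ryman") do |row|
        puts "#{row[:title]} (#{row[:year]})"
      end

      # JDBC-style updatable-result-set operations, but more Ruby-flavored.
      books.update(:id => 42) { |row| row[:year] = 1990 }
      books.insert(:title => "The Child Garden", :author => "Geoff Ryman", :year => 1990)
    end

    db.close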

ruby notes 4a: overloading constructors

December 31st, 2006

An addendum to my note from earlier today: the constructor problem is more annoying than I’d thought, because (as I just discovered!) the static constructor method is harder to carry out than I’d realized. After all, the static constructor method still has to create an object of the type in question, which means that it has to (indirectly) call that object’s initialize method. (Well, it probably doesn’t have to, this being Ruby, but that’s another matter.) Which, in turn, means that the initialize method has to allow you to set up the state in a sufficiently general way to make that easy for your static method. And that means that it probably has to expose your object’s internal state pretty directly. But that may not be the way that you want other users to construct your objects, putting you in a bit of a bind.
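
A minimal sketch of the bind, with an invented class: the factory method has to funnel its special state through initialize, so initialize grows a back door that ordinary callers were never meant to use.

    # Illustrative only; the class and its internals are made up.
    class WrappedDate
      # The "nice" way to construct one: from a written representation.
      # The Hash branch exists only so that the factory below can work.
      def initialize(arg)
        case arg
        when String then @state = parse(arg)
        when Hash   then @state = arg[:special]
        else raise ArgumentError, "can't construct a date from #{arg.inspect}"
        end
      end

      # Class-level factory for a special value; it still has to go through
      # initialize, hence the Hash back door above.
      def self.unknown
        new(:special => :unknown)
      end

      private

      def parse(text)
        text  # stand-in for real parsing
      end
    end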

I’m pretty sure I don’t yet understand all the options and tradeoffs here. I’ll have to do a bit more reading and thinking, I guess. (And then probably ask on the newsgroup.)

ruby notes 4: overloading

December 31st, 2006

I miss function overloading. I can see why they left it out of Ruby: overloading based on static types is, of course, right out, which only leaves us with overloading based on the number of arguments. And even that has a bit of a staticish feel to it, and, what with Ruby’s nice varargs handling (nothing revolutionary, obviously, any Lisp programmer will be familiar with it), isn’t necessary. Besides, you can just call your different versions of the functions by different names, can’t you?

Well, you can, but that doesn’t necessarily make your code any clearer. Which is, for me, a largely theoretical objection right now; a more practical one at the moment is: no, you can’t always. I wrote my first two functions that overloaded themselves based on the number of arguments yesterday (using varargs plus a conditional on the size of the extra arguments array). One of those functions was initialize, which is magic. I won’t argue in favor of having zillions of constructors, but I don’t think it’s unusual to want to have a couple of different constructors for a class. There are, of course, ways around this – static factory functions are another obvious choice, and are quite appropriate in many circumstances – but I sometimes like just defining a few constructors.
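
Here’s the shape of what I did, on an invented class: varargs plus a conditional on the number of extra arguments.

    class Interval
      def initialize(*args)
        case args.size
        when 1 then @low, @high = 0, args[0]        # Interval.new(10)
        when 2 then @low, @high = args[0], args[1]  # Interval.new(3, 10)
        else raise ArgumentError, "expected 1 or 2 arguments, got #{args.size}"
        end
      end
    end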

And the other function that I overloaded based on the number of arguments was the array operator, []. (Plus the array assignment operator, []=.) Again, I could simply have renamed one of my overloads – I wouldn’t blame my readers for wondering what I have in mind that I might want to sometimes use one index to reference and sometimes two – but I don’t think that doing so would have made uses any clearer.
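
Same trick for the index operators, again on an invented class: a grid you can index either by a single flat index or by a pair.

    class Grid
      def initialize(width, height)
        @width = width
        @cells = Array.new(width * height)
      end

      def [](*args)
        args.size == 1 ? @cells[args[0]] : @cells[args[1] * @width + args[0]]
      end

      def []=(*args)
        value = args.pop                 # the assigned value is always the last argument
        index = args.size == 1 ? args[0] : args[1] * @width + args[0]
        @cells[index] = value
      end
    end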

Which brings me to the other use of the term overloading, namely operator overloading. So far, I really like what Ruby does here. It doesn’t go as far as C++ does: it doesn’t let you overload absolutely everything. || is one example, but a more interesting one is !=. After all, a != b should always be the same thing as !(a == b), so why not simply enforce that? They don’t enforce all such restrictions – I believe that you can overload < and > separately, for example (though, if you’re defining those, you almost always want to just define <=> and include Comparable) – but they made a sensible choice as to which restrictions to enforce.
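
The <=> plus Comparable idiom, for the record, on a throwaway class of my own invention:

    class Version
      include Comparable   # <, <=, ==, >=, >, and between? all come from <=>
      attr_reader :major, :minor

      def initialize(major, minor)
        @major, @minor = major, minor
      end

      def <=>(other)
        [major, minor] <=> [other.major, other.minor]
      end
    end

    Version.new(2, 7) < Version.new(2, 10)   # => true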

Even more fun is the way they handle assignment: you can overload a.foo = bar by defining a foo= operator on a’s class. This is great – at one stroke, it means that you no longer have to feel guilty about exposing member variables. To be sure, the language doesn’t let you expose them directly – they’re always private – but, if you have a member variable @foo that you want to expose, you can do attr_accessor :foo. (You can also use attr_reader or attr_writer if you want to expose it read- or write-only.) This defines foo and foo= methods that get and set your member variable; if you later decide that you want non-trivial read and write methods (perhaps removing the actual member variable entirely), you can replace that with appropriate definitions, and your users will be none the wiser. All of a sudden, a certain ease of implementation versus cleanliness / ease of extension tradeoff vanishes: you type as little as you need to at the time, and don’t have to worry about that posing problems down the road.
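
A sketch of that tradeoff vanishing, with a made-up class: version one just exposes the variable, version two swaps in a non-trivial setter, and callers never notice.

    # Version one: a line of code, and @name is readable and writable.
    class Person
      attr_accessor :name
    end

    # Version two, later: same interface, real logic behind it. (Reopening the
    # class here just stands in for editing the original definition.)
    class Person
      def name
        @name
      end

      def name=(value)
        @name = value.strip.capitalize
      end
    end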

Off to write some more code…

ruby notes 3

December 30th, 2006

[I suspect I’ll be writing a fair amount about Ruby, and am too lazy to come up with clever names. And I don’t want to rename old posts, so I’m retroactively declaring this to be Ruby notes 1 and this to be Ruby notes 2.]

I just learned about creating arrays of strings using %w{}. Handy, that. I mean, it’s only a few keystrokes, but why not save keystrokes?
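
For anyone who hasn’t run into it:

    days = %w{monday tuesday wednesday}   # same as ["monday", "tuesday", "wednesday"]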

I also got around to running my tests with warnings turned on. The warnings suggested that I add parentheses in one place, which I was happy to do. The other thing the warnings suggested, though, annoyed me. As I mentioned yesterday, I wrote a mixin module which cached a value in an instance variable. When I turned on warnings, though, it complained about my referencing an uninitialized instance variable. I assumed (and still assume) that testing an uninitialized variable against nil is safe, but I like having my code warning-free. So what am I to do?

What I ended up doing is not caching that value: it was an untested optimization to begin with, and once I took the time to benchmark it (and a few variants which the interpreter was happier with), it proved to be useless. (At least it wasn’t a pessimization…) So now my code is a bit cleaner, and I was wrong to stick in that optimization.

Having said that, there are other situations in which such an optimization would be the correct thing to do. And this does point at something I’m not thrilled with about Ruby: inheritance and initializing state. For one thing, the whole inheritance picture is a bit muddled: mixin modules have their uses, but I’m not at all convinced that plain old inheritance isn’t a better idea. (Having said that, I also suspect that mixin modules enable useful hacks that can’t be done with some variety of multiple inheritance; I’m just not sure what those hacks are.) Setting that aside, in either case the superclass (or module) can’t set up its state appropriately. In the module case, it’s hard to set up the state at all, as I’ve learned; in the superclass case, constructors (a.k.a. initialize) don’t automatically chain. And while I’m willing to trade off destructors for garbage collection, not having proper constructors bugs me.

Oh well; something would be wrong with me if I couldn’t quickly find something to be annoyed by in any programming language.

Another thought that my first optimization suggests: I could probably write a generic attr_cached function that would turn any function that always returns the same value (for a given instance) into one which caches that value after the first time. That would be fun. Now I’m sad that I don’t have a reason to do that. :-( I suppose I don’t really need a reason, though, do I?
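
Something like this ought to work, though I haven’t tried it against real code; the alias-and-redefine approach and the @attr_cache name are just one way to do it.

    module AttrCached
      # Wrap an existing zero-argument method so that it only runs once per
      # instance; later calls return the remembered value.
      def attr_cached(name)
        uncached = "uncached_#{name}"
        alias_method uncached, name
        define_method(name) do
          @attr_cache = {} unless defined?(@attr_cache)   # keeps -w quiet, too
          @attr_cache.fetch(name) { @attr_cache[name] = send(uncached) }
        end
      end
    end

    class Report
      extend AttrCached

      def summary
        sleep(1)          # stand-in for something expensive
        "42 lines"
      end
      attr_cached :summary   # now the sleep only happens on the first call
    end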

reflection

December 29th, 2006

I was going to write about Ruby and SQL, but I’m having fun doing other Ruby-related stuff this afternoon, so I’ll write about that instead.

I was writing this unit test, for a class DeveloperWriter. And I got tired of typing DeveloperWriter.new("arg") all the time. (Actually, I got tired of typing new DeveloperWriter("arg") and then being reminded that that isn’t valid Ruby, but never mind that.) So I added a create function to my test class which calls the appropriate new for me: I save fifteen keystrokes each time, and after a few object creations, I’m ahead.

And then I wrote tests for another class, and did the same thing. The third time, this got sort of boring. Hey, maybe I can use reflection to do this for me in a generic fashion? I’m using consistent names for my tests, I should be able to get my hands on the class under test?

So I needed some sort of testcase helper class. Or helper module? Let’s go with class: it can inherit from Test::Unit::TestCase, getting rid of that duplication as well. Alas, a bit of experimentation showed that that doesn’t work, or at least not easily: if your actual tests inherit from some intermediate class which inherits from TestCase, then the magic test runner stuff doesn’t work. I took a look through the source code, but that didn’t enlighten me: I couldn’t figure out how the magic stuff works at all, and to the extent that I can figure things out, it doesn’t seem like there’s an obvious workaround. So I’ll save that for a later possible improvement.

Mixin module it is, then. I hope I’ll figure out soon when you put utility functionality in a class (to be inherited from) and when you put it in a module (to be mixed in). The next step: can I get my hands on the Class object representing the class under test? A bit of playing around with irb (yay interactive interpreters, it’s been a while since I used one for anything other than Emacs Lisp) suggests that I should be able to find the name of the class under test by doing self.class.name.sub(/^Dbcdb::Test/, ""), and get the class under test itself by doing ::Dbcdb.const_get(self.class.name.sub(/^Dbcdb::Test/, "")). (The regexp usage seems a bit overkill, but it actually paid for itself pretty quickly by catching some errors that would have been more confusing if I hadn’t had a nil floating around. Still, I’m surprised there isn’t another way to remove an initial substring; I’m probably missing something.)

By then, I was getting into the spirit of things: new isn’t anything magic, it’s just a method on the corresponding Class object. And I’ve just found that object, so I can write my generic create function. And hey, it works!
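
Putting the pieces together, the helper ends up looking roughly like this; the module name, test class, and assertion are stand-ins (and the class under test is assumed to already exist), but the const_get dance is the one described above.

    require 'test/unit'

    module Dbcdb
      module TestHelper
        def class_under_test
          ::Dbcdb.const_get(self.class.name.sub(/^Dbcdb::Test/, ""))
        end

        def create(*args)
          class_under_test.new(*args)
        end
      end

      class TestDeveloperWriter < Test::Unit::TestCase
        include TestHelper

        def test_creation
          assert_not_nil(create("arg"))   # same as DeveloperWriter.new("arg")
        end
      end
    end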

In a fit of doubtless premature optimization, I decided it seems a bit silly to look up the class object over and over again: can’t I stash that in a member variable? It’s apparently considered a bit gauche for modules to have member variables, but it is possible. But where do I initialize it? An initialize method doesn’t seem to do the right thing. I could just initialize it in the body definition, but when I tried that, the wrong class got looked up. I ended up with an accessor function that caches the value; not my favorite style, but it works. Again, maybe there’s something I’m missing.

And, now that I’ve got this helper module, I can throw in other stuff, too. For example, my tests also each have an assert_bad function (speaking of bad, that’s a lousy name, I should rename it) that tries to construct an object and tests that an ArgumentError is thrown. (It’s one line long, as opposed to its five-line Java variant.) So now I can pull that up to my helper module, too. And I have more such ideas in mind.
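
Inside the same helper module sketched above, the pulled-up version can be as short as this:

    def assert_bad(*args)
      assert_raise(ArgumentError) { create(*args) }
    end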

That was fun, and my tests are more expressive: the test code is almost all assertions, instead of generic helper functions.

Not much else to report yet today. I did get to use regular expressions one other time, replacing 26 lines of Java code with 3 lines of Ruby. (Hmm, maybe I’ll eliminate one of those lines by doing a parallel assignment.) Poking around, there seems to be a java.util.regex package that could have helped slim down the Java code, but it still wouldn’t have been as nice. And Java just doesn’t encourage you to use regexps with the same abandon. (Not that this is unique to Ruby by any means – it’s just a scripting language thing.)

And I can feel the static typing habits just drain away…

tv news

December 28th, 2006

I spent much of the Christmas break in a house where news was frequently being watched / listened to on TV. (CNN mostly.) Not a pleasant experience; why I’m supposed to care about teachers having sex with students, or people who apparently falsely confessed months ago to murdering glamorous children, is beyond me. And people have the gall to complain about video games?

first ruby experiments

December 27th, 2006

I wrote my first Ruby code yesterday. It was a port of a date wrapper that I wrote in Java for dbcdb: its only job is to convert to/from written representations, and to have some special dates representing “I read this once, but I don’t remember exactly when” and “I’m in the middle of reading this now”.

It took me an embarrassing amount of time (10-15 minutes?) to get the first do-nothing unit test working, but after a bit of flipping back and forth in what I’m apparently supposed to refer to as the pickaxe book, the test worked fine. From then on it was smooth sailing.

Initial reactions:

  • Unlike Java, there was a builtin date class that did almost everything I wanted to, so no need to hand-write parsing code. (And, as a bonus, I get a rather more flexible parser.)
  • The builtin date class’s output functionality didn’t quite do what I wanted, so I took that as an excuse to write that functionality by hand. I think I wrote it four separate ways, as I found different formatted output mechanisms; for now I’m doing "#{::Date::MONTHNAMES[month]} #{day}, #{year}", but I may well change that again. (Hmm, one bit of that is calling out for an Extract Method, isn’t it? Let me make that change right now – see the sketch after this list…)
  • It was very nice to have the source code for builtin classes lying around, both to find what method to call on Date to do what I want and to give examples of how to write something. (E.g. formatted output.)
  • Mixins are cool: I just had to add four lines of code to get all sort of comparison functionality.
  • The builtin unit test framework is pretty cool, too: I like how it, by default, runs tests for you when you load a test’s source code, and how you can aggregate your test classes into a suite by just requiring one test class after another.
  • Having private mean “private to this instance” as opposed to “private to this class” is an interesting design choice. So far, the only ramification is that I’ve had to make one method protected; that seems like a reasonable tradeoff.
  • My Ruby date wrapper is a third the length of my Java date wrapper. (The Java version has one or two minor pieces of functionality that the Ruby version doesn’t have, admittedly.) Having said that, I’m not sure this is a very good comparison, because it says more about the vagaries of the respective languages’ builtin date classes than anything else. And I’m not using much of the real power of Ruby – no blocks yet, for example. Still, shorter is better.
  • What is perhaps more interesting is that every method in the Ruby version is either one line long or consists of a case statement where every branch is one line long. (Should I Replace Conditional with Polymorphism? Not yet in this case, I think, but it’s a thought.)
  • I’m still trying to get a feel for where it’s most stylish to include (or not include) parentheses. For now, I’m not including very many.
  • The choice of M-backspace in Emacs ruby mode to do something other than delete the previous word is, shall we say, idiosyncratic. In general, ruby mode isn’t that great, but I can live with it.
  • The documentation that I’ve found can most charitably be described as haphazard. I managed to get the gem stuff set up in my local directory, and the gems documentation was reasonably helpful in that regard. Then I wanted to install the mysql gem. It gave me a choice of five versions; the latest one (2.7.1) mentioned Windows, which seemed strange, given that it knew that I was running under Linux. So I skipped that one, and went to the previous version. (2.7 – but the project’s web page says it’s up to 2.7.4?) It wanted to compile some C code (pretty cool, nice to have that automated), but couldn’t find the appropriate libraries. It did give me an error message listing possible configuration options I might want to give, one of which looked right, but didn’t tell me how to pass that option to the install script! (Easy enough if I were doing it by hand, but I wanted to do it within the gem framework.) And the gem documentation didn’t tell me how to do that, either. Eventually, a random web page turned up the trick, but the experience didn’t impress me too much.
  • My favorite documentation, um, quirk, though, is the Og rubydoc page saying “You can find a nice tutorial at www.rubygarden.com“. I’m sure I can, but how about a direct link to help me along? (Searching on rubygarden for ‘og tutorial’ didn’t help, though ‘og sql tutorial’ did the trick.) It then continues “Be warned that this tutorial describes an earlier version of Og. A LOT of new features have been added in the meantime.” Yes, well, then maybe you guys could help out a bit? The rubydoc also had a lovely list of features, with no guidance as to how we might use them: I might want something that “Can ‘reverse engineer’ legacy database schemase[sic]” but I’m not in the mood to troll through random crap to try to figure out how.
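
Here’s the Extract Method change mentioned in the list above, in a stripped-down stand-in for the real wrapper class (the class and its constructor here are simplified for the sketch):

    require 'date'

    class DateWrapper
      attr_reader :year, :month, :day

      def initialize(year, month, day)
        @year, @month, @day = year, month, day
      end

      def month_name
        ::Date::MONTHNAMES[month]
      end

      def to_s
        "#{month_name} #{day}, #{year}"
      end
    end

    puts DateWrapper.new(2006, 12, 27)   # December 27, 2006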

In general, I’m happy, but I’m hardly using the power of the language yet. The community around the language feels a bit raw, for better or for worse – little documentation, many abandoned libraries, no clear winners in many important spaces. That’s okay, though; given what I’ve seen of source code, I should be able to figure stuff out myself, and I’m happy to contribute documentation improvements, too.

I have more thoughts on Ruby and SQL, but this post is already long enough, so I’ll start a separate post for that.

podcast queue management

December 18th, 2006

Sorry for the lack of posts. I might have a post stuck in me, or I might just be getting lazy, or might not be thinking enough; hard to say. Maybe I’ll get unstuck over the holidays. Anyways, I present another banal application of lean to everyday life:

Using my mad queue-management skillz, I’ve finally gotten caught up on my podcast listening: for every podcast that I regularly listen to, I now have either no episodes queued up or less than one week’s worth, and have been in that blessed state for about a month by now. It doesn’t hurt that, since changing to the Menlo Park office, my new commute is a bit longer, especially going home – grr 101 grr – but I was heading in that direction even before the office move. In fact, some days, I don’t have any podcasts stored up to listen to, which leaves me at a bit of a loss. Especially with the holidays coming up, when some podcasters have the nerve to take a bit of time off.

Which means that I should find more podcasts to listen to, right? No! (This is how you can tell that I have mad queue-management skillz.) You see, I know about the virtues of maintaining a bit of slack in the schedule. I’ve reached a level where I can predictably listen to all of my podcasts almost every week – some episodes might stick around for two weeks, if a bunch of podcasts that publish on irregular averaging-to-about-once-a-month schedules all happen to arrive at about the same time, but it doesn’t get any worse than that. But I’m not too far away from my listening capacity; and, once I edge up to that capacity, my response time will go through the roof. I’m at that state on my magazine subscriptions, and it isn’t pretty – who knows when I’ll get around to reading those saved up NYRSF issues? And bye bye Granta subscription. So I don’t want that to happen with podcasts, and for all I know the next podcast could turn out to be one too many.

To be completely honest, that wouldn’t be the end of the world – I could occasionally delete an episode without listening to it, if my queue got bad. But that’s just not the way my psychology works: I’m a completist at heart. Also, I like the podcasts I’m listening to now, and I don’t want to delete any episodes of them. And they’re a nice mix: a little more than half music, of various sorts, but also several interesting non-music ones. At the very least, now that I’ve gotten my queue well under control, it’s time to re-evaluate the situation, and see if adding more podcasts is the best thing to do with the gaps that are opening up in my listening schedule.

And I’m pretty sure it’s not. If we think of this in terms of competing queues, then I’ve gotten the highest-priority queue in this area under control. But doing so makes me aware of two other queues that I can now consider dealing with in the same area. Namely:

  • Listening to music that isn’t from podcasts. The Naxos classical music podcast is an incredibly effective advertising tool: something like a third of the episodes make me want to go out and buy the album in question. I’ve gotten other interesting music suggestions from other podcasts, too, and I wouldn’t mind going back and listening to some of my CD collection again.
  • JapanesePod101. You see, I lied above when I said that I’d caught up on all of my podcasts: I’ve caught up on all but one, but on that one I’m 9 months behind. Which wouldn’t be too bad if they published once a month, but since they publish every day, I have my work cut out for me. I am consistently managing to not fall further behind on it now, but I would like to eat into the backlog. And it’s a really good podcast, so worth spending some effort on. I am a bit worried about burning out, but I have enough experience with (effectively) force-feeding myself knowledge in the past that I think I’ll be able to catch the warning signs before things get too bad.

So: one queue under control, two other queues revealed. All good fun, no? In fact, I think I’ll attack one of them right now by ordering a CD to listen to. (Update: no, I’ll break my lean vow and order two CDs. But one’s an EP, and they’re not available from Amazon, so it’s a bit easier for me to batch the order.)

If only queues at work were getting under control so well. Actually, several are, but there are two that are bedeviling me right now. I hope that we’ll make some progress on those during January…

fellow diners

December 16th, 2006

We had dinner at the excellent Sushi Tomi tonight. Two of the tatami tables were taken over by a birthday party, hosted by a young Caucasian girl, with a Bob the Builder theme. Multiculturalism at its best.

ipod, incremental (un)improvements

December 10th, 2006

I lost my iPod nano a couple of months ago; oops. Which gave me a chance to see some of how the iPod has evolved: I had to use my old iPod for a little while so I got reminded what the previous generation was like, and then I got to experience the newer model nano.

Some of the improvements between generations are obvious: the nano (either model) has flash memory, easily fits in your pocket, has a much better screen. I’d kind of forgotten how huge my first iPod was, which is ironic given that I was impressed by how small it was when I got it. I’d also forgotten about other changes. My favorite improvement is that, when listing the podcasts, it puts a blue mark next to the episodes you haven’t listened to. So I don’t have to remember which episodes of JapanesePod101 I’ve listened to and which I haven’t. I’m sure the feature was easy to implement, but I really like it.

And then there’s the old nano versus the new one. The new one has a sturdier case, which feels better. I bought an iPod shield for it, but it seemed sturdy enough that the shield wasn’t necessary, and I thought I wouldn’t like the feel of the machine as much with the shield covering it. I was annoyed that, to get the 8GB model, I had to choose black (blue or pink seemed nicer to me), but I liked the storage more than the fashion statement. And the storage really does help – 4GB was fine, but with 8GB I don’t have to think twice before throwing music on it. (But I do have more than 8GB of music and podcasts stored in iTunes, so I’m sure I’ll be happy to upgrade again in the future.)

And another tiny improvement. For the nano, they moved the headphone connector to the bottom, next to the dock connector – presumably there wasn’t room at the top after they moved the screen up. But the headphone and dock connector were right next to each other. So, when I was getting out of the car while shopping, I had the headphones plugged in (because I’d been listening to the iPod in a store) and the dock connector plugged in (with a radio transmitter on the other end, so I could listen in the car), and I had to unplug the headphones before my finger could squeeze the dock connector to unplug it.

Not a big deal, to be sure. But in the new model, they moved the dock connector to the side, so there’s room between the two, and I can unplug them independently. A little thing, but I’m glad they got that little thing right in the new model.

Having said that, my recent experience has made me a bit down on Apple. I’ve already complained about the latest iTunes version. The nano (at least the newest one, I can’t remember the old one) doesn’t work as well when scrolling through a long podcast description, for some reason. Worst of all, though, my new iPod was buggy. When first listening to it after syncing, it froze about half the time. Resetting cures the problem (toggle the hold switch back and forth, then hold down menu + center for a few seconds), after which it’s fine until the next time I sync, but it’s still broken. And there’s some haze under the screen – the case may be sturdier, but the screen still seems to have an issue.

So I went to the “Genius Bar” and got it replaced. You apparently have to make an appointment (which you can do from home); a bit annoying, but I’m willing to accept a bit of production leveling if it gives me a more predictable response time. I had to wait for a good 25 minutes or so, though, so I’m not sure it’s working in that regard. (While waiting, I got to watch the screen above the bar cycle through its spiel several times; you know, Apple, if you have to put information about how to reset an iPod on that screen, then maybe that’s a sign that the software is unacceptably buggy.)

Once I got to see somebody, he quickly acknowledged that I had a problem, and deserved a replacement; good for them. He then disappeared into the back room for 15 minutes or so; eventually, he emerged, saying that they didn’t have any replacement units, so they’d order one and I’d get a call in about three days. It actually took more like a week, but I now have my replacement (again, having to wait both before and during my appointment).

I have to think there’s a fair amount of waste in the process. It should have taken about 5 minutes to order the replacement, not 20: a couple of minutes to figure out that I needed a replacement, a minute to look on the computer to see if they had inventory, and then a couple of minutes to jot down my phone number and order the replacement. Instead, the genius wasted 15 minutes looking for one. Just what was he doing during that 15 minutes? Even if they don’t have electronic inventory control, you’d think that there would be one place to look for replacements. So I’m pretty sure that at least 13 of those 15 minutes were pure waste; if they’d get rid of that, their genius would have been about 4 times as efficient. Similarly, it shouldn’t have taken them as long to retrieve my replacement unit.

But I have a replacement now; I hope it works. I haven’t given it a try yet because I decided to put on the shield (in hopes that that would prevent the screen problem from recurring); the shield wants to set for a day before use. I’m still not sure the shield was the right idea; I don’t like the feel as much, and there are bubbles under it. The instructions say the bubbles will work out over the next couple of days; if not, I might be tempted to ditch the shield. I do wish I’d gone for the screen-only version of the shield instead of the full-body version: I see no reason for the latter in the latest generation nano.

the child garden

December 4th, 2006

Why hadn’t I heard of The Child Garden, by Geoff Ryman, until now? Not that it’s transcendently wonderful (I thought that it was quite good indeed, though), but it was published in 1990, and seems like the sort of book I should have heard of over the last 16 years. I guess I need better sources, or to start paying attention more, or something.

The first subject classification in the Library of Congress info for the next book on my list is “Abnormalities, Human—Poetry”.

a week of wii

December 2nd, 2006

Random Wii thoughts one week later:

  • I would have expected to feel stupid saying “Wii” all the time, but I actually don’t mind. I guess Nintendo wasn’t completely insane when they decided on that name.
  • Zelda is very good so far, except for the early fishing sequence. (I’m seven hours in, I think I’m a little more than halfway to the second dungeon. Of which there are apparently something like ten?) Darker than previous games; there were bits that I was happy Miranda wasn’t watching. Tougher than other recent games in the series – even in the first dungeon I was quite happy to have Liesl giving suggestions for how to proceed when we got stuck. But the puzzles have been fair. I’m still not sure it lives up to Okami‘s standards, but I’m cautiously optimistic that the series’s downward slide has been halted.
  • I wish I hadn’t gotten Super Monkey Ball. There probably is some solid gameplay there, but there are some basic playability issues. My experience with it: Miranda tried the single player, got a bit frustrated, and gave up after three levels. No problem there, maybe she’ll get back to it, maybe not. So then I went to play the single player game: it wouldn’t let me start from the beginning, but made me start on level four! Or rather, it would only let me play the earlier levels in practice mode; I tried that on the first one, and couldn’t figure out how to stop practicing the stupid level. (Eventually I found that in the manual, but I shouldn’t have to look there for something so basic.) After resetting (yay home button), I started from the fourth level on non-practice mode, and made it through the first group of eight levels. At which point I was confronted with some credits, which had an acceptable enough way to amuse yourself by collecting bananas, but I wasn’t in the mood for that. So I tried pressing all the buttons, but I couldn’t find a way to skip it. (Maybe that’s in the manual too; I haven’t looked, and I shouldn’t have to.) I’m pretty sure I spent more time just watching the credits than playing through the levels that preceded them. Also the name entry screen only allows 6 letters (what is the point of economizing on that?), so you can’t, say, enter “Miranda”. And the method of entering your name is really annoying, worse than either the Gamecube method or the Wii screen keyboard method. The core gameplay is probably just fine, so I’ll probably get back to it, though I’m not convinced that adding jumps is an improvement. (Actually, I’m not even convinced that using the wiimote instead of a joystick is an improvement.) But there’s so much basic playability stuff they got wrong… I got it mostly for the minigames, but my inability to get a system and controllers before having friends over for Thanksgiving put a kibosh on that, and things I’ve heard since make me think that it doesn’t live up to its predecessors on that score.
  • Speaking of minigames, I’m enjoying Rayman. Good presentation, most of the games are decent. I’m not thrilled with the ones that require you to shake one or both controllers up and down very quickly. (Doing that for a minute solid on the carrot juice level was not fun at all.) I did not realize what fine dancers rabbits are.
  • Still only one controller; Nintendo says that they are “preparing my order” for more, but who knows when I’ll actually get them. They claim that they’ve shipped my component cables, but UPS doesn’t acknowledge the tracking number; no big deal, games look fine to me. (I hear that component cables are more important on non-CRT displays.)
  • Miranda made a cute mii of Zippy today. Or as close as you can get to Zippy, given that it only lets you make humans, not dogs. I need to invite people over and force them to make miis. (Once I’ve got more controllers…)
  • WiFi is nice, but would it have killed them to include an ethernet jack as well? (I need to figure out why my WiFi is so flaky.) I see no evidence so far that the console is really online while it’s off/sleeping – no automatic downloading of system updates, no overnight store updates.
  • Zelda has a nice sensor bar calibration mechanism – why don’t they have that on the main system menu?
  • The controller might actually be a little worse in terms of RSI than a standard controller, but not a big deal either way. At least for my right hand; it’s kind of nice having my left hand be positioned however I want.
  • Using a shake of the left hand to accomplish something (e.g. the circle sword attack in Zelda) works really well; I hadn’t expected that.
  • I like the daily play record. (What, I spent 2:54 playing Zelda this afternoon? It felt like minutes…)

exploratory testing

December 1st, 2006

The Poppendiecks’ latest book gives an interesting analysis of types of testing. (Taken originally from Brian Marick’s blog.) They propose that you divide testing up in two different ways: on the one hand, you can classify tests as either intended to support programming or to critique the product. On the other hand, you can classify tests as either intended to be business facing or technology facing.

This gives you four quadrants. Tests that are technology facing and supporting programming are unit tests. As my loyal readers know, those are the best thing ever, so I won’t go into details.

Tests that are business facing and supporting programming are acceptance tests, or story tests. It took me a little while longer to appreciate these – I looked at tests initially largely through defect prevention goggles, and surely there couldn’t be any bugs left after my unit tests? Well, actually, it turns out that there could be: there are (many) fewer than if I hadn’t been doing pervasive unit testing, but “many fewer than a lot” is still some, not none. Some of those defects are due to legacy code issues, but by no means all. And it’s not like I have a magic wand to get rid of legacy code, anyways.

In both cases, tests have more virtues than just preventing defects. They establish a contract, for one thing. In the unit test case, it might be a contract between programmers, or it just might be a contract between a single programmer’s fingers and the part of the programmer’s brain that cares about things working properly, but it’s a contract either way. In the story test case, it’s ideally a contract between programmers and business types; I still haven’t reached that world (it’s probably the area at work where we’re least agile), alas, but at the least it’s a contract between code and an imagined outsider. And they promote communication (between programmers, between programmers and business, between a programmer and the same programmer years or months or weeks later). And they promote design. In both cases, they’re automated, to make it as easy as possible for the programmer to run as many tests as possible.

Which is all great: better code, fewer defects, shorter debugging cycles, on and on. With all of that goodness, what more could you want?

Quite a bit, it turns out. There are people who say that it’s okay to have a testing department going through manual tests of your product: programmers have a conflict of interest which prevents them from seriously scrutinizing their code, so the only remedy is to have an army of testers to click through your interface to make sure it all works. Those people are wrong on a bunch of levels: for one thing, clicking through interfaces takes forever; for another thing, the programmers are the only people who know the corner cases; for a third thing, programmers aren’t so irresponsible as this suggests; for a fourth thing, the ways having a fast, comprehensive test suite improves your programming are so varied and positive that you’d be crazy to give it up for a slow external test cycle. It is true that having extra eyes doesn’t hurt; that’s why we would like to bring in business types to help with the acceptance tests, that’s why we pair program and have collective code ownership. Surely all that is good enough?

Well, no: even with a good set of acceptance tests, you’ll still find problems the first time you plop your product in front of a user poking around. A lot of that (at least in my case) can be chalked up to inadequate acceptance testing and inadequate business involvement in test design; still, if you’re like me, it takes a while to learn how to do good acceptance tests, and you’re probably dealing with legacy code which didn’t have proper acceptance tests to start with, and you need some way to learn where your acceptance testing skills need improvements. Playing with the product is a great way to do that.

Which brings us to the business facing / critique product quadrant: exploratory testing. (And usability testing.) People just poking around with your product, seeing what it does, pushing areas that might be limits. Not following a script: if you can script a test, you should work hard to automate it, to help support programming. (And if you find a defect during your exploratory testing, please do automate what you just did, so programmers can learn!) Just trying to look at the product with users’ eyes, seeing how it feels.

Like the earlier categories of tests, exploratory tests have virtues beyond finding defects. Even if you aren’t inserting defects into your code, you may have specification errors: your design may not work as well as you’d hoped when confronted with users. Or, for that matter, you may be playing around with various designs, trying to decide which is best. Or you may just need to communicate to somebody else in a visceral way what your product really does.

At work, we’d been slacking off on exploratory testing until recently: we were very engineering-focused, and the few people on the business side were too busy selling our product to have much time to play around with it. We’re doing better now (learning from our experiences), but we still have a ways to go.

So now I’m happy with three of the quadrants, though I still have a lot to learn. Which strongly suggests that my next revelation will be on the virtue of the fourth quadrant: property testing, from a technology facing point of view. These are performance testing, security testing, combinatorial error testing. Actually, maybe I got that revelation a year or so ago: we’d been doing performance testing for a while, which was all well and good and helped us catch a few performance regressions. But what was really eye-opening was when we started inserting random errors (deterministically, starting from a seed which changed every night but which allowed us to rerun the tests if problems arose) into the input of one component of the product, which did a lovely job of uncovering defects. Again, if we’d been doing better in our unit testing, we wouldn’t have inserted the defects in the first place, but we’re not perfect, and we need ways to learn how to improve our testing skills.
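
Our harness isn’t written in Ruby and I won’t pretend this is it, but the seeded-corruption idea looks roughly like this; the file names and the FUZZ_SEED variable are invented for the sketch.

    # Pick a seed: reuse one to reproduce a failure, otherwise derive it from the date.
    seed = (ENV["FUZZ_SEED"] || Time.now.strftime("%Y%m%d")).to_i
    srand(seed)
    puts "fuzzing with seed #{seed}"

    # Flip a small fraction of the input's bytes to random values.
    def corrupt(bytes, error_rate)
      bytes.map { |b| rand < error_rate ? rand(256) : b }
    end

    clean = File.open("input.dat", "rb") { |f| f.read }.unpack("C*")
    File.open("input.fuzzed.dat", "wb") do |f|
      f.write(corrupt(clean, 0.001).pack("C*"))
    end
    # Then feed input.fuzzed.dat to the component under test and check that it
    # fails gracefully instead of crashing or corrupting anything downstream.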

We still have room for improvement on this front, though. We should write random error tests for more components. Our load tests take too long to calibrate, so we haven’t always kept them up to date as we use faster hardware.

A useful analysis; I wish I’d seen it a couple of years ago. (But, if I had, I probably wouldn’t have been able to appreciate it.) I like how it divides up the virtues of a traditional testing group: some of those virtues can better be gotten in other ways, indeed maybe all of them can. But the virtues are real and varied, so there are several kinds of blind spots you should work to avoid.

bonny doon

November 30th, 2006

If you do a Google search for “bonny doon”, the top three entries are currently:

I do not currently have a need for “Exceptional Alpacas for Stud”, but it’s good to know there’s a local source, should such an eventuality arise.

wii!

November 26th, 2006

I didn’t manage to get a Wii on launch day, but I was more successful today. I got to a Best Buy at around 7:10, a mere 2 hours and 50 minutes before it opened; that was good enough to get me the 19th of 26 Wiis that they were handing out. Fortunately, I had several hours of podcasts stored up, so the wait was perfectly pleasant; not the sort of thing I’d want to do every day, or even every year, but I’m not complaining.

About that, at least. I am complaining that Nintendo didn’t ship any extra controllers along with the consoles. I mean, if they’re trying to get families playing together, they should have included a second controller in the box, but not even having extra controllers available for purchase is just ridiculous. So the Wii Sports pack-in is doing us no good whatsoever. And I think they made some basic, stupid mistakes with Super Monkey Ball. Zelda is pretty good so far, except for a fishing sequence which is exceedingly annoying. (Liesl is getting frustrated with it as I write this.) We also got Rayman Raving Rabbids, which I haven’t gotten around to trying.

Liesl, Miranda, and I all made Miis; yay. They didn’t have any beards that were sufficiently majestic (the ponytail options were fairly lacking as well, for that matter), but I’m happy enough with my mii. Too bad they didn’t integrate it more thoroughly into the game play – why am I not asked to pick a mii every time I play a game? (And then given the option to use the Mii’s personal info instead of having to type my name in over and over.)

It’s taking a little while to get used to the controller, but I like it fine so far. The DS keyboard interface has spoiled me, of course, but I can deal with that. I can understand why, at a couple of points, I had to enter a password in plain text, but it was still disconcerting.

Still too early to have much of a feel for the system.

happy thanksgiving

November 24th, 2006

I hope that those of you who celebrate Thanksgiving had a nice one. We did; a congenial bunch of guests, a meal headed by Cambodian chicken curry. Though there were other nice bits on the menu – in particular, Liesl made a very pleasant beef soup, also from The Elephant Walk Cookbook, and we made a very good (and easy!) chocolate cake from Bittersweet, which I continue to recommend highly. Zippy got bits from the soup as it was being prepared, and spent the entire meal asleep with a happily bulging stomach.

And I played several games of go today at KGS. I’d only played one other game in the last two or so years (other than the recent games against Miranda), and I hadn’t played online in more than a decade. But I had several quite pleasant games, people were very nice, I didn’t mind the online aspect as much as I’d feared (though I would hope that I wouldn’t have lost one of the games in such a boneheaded fashion on a real board – who knows), and I now have an official rating there. Of 6k, while information elsewhere suggests that, based on my AGA rating of 1k, my KGS rating should be about 4k. So, with luck, I should be able to bump it up a couple of stones.

My joseki knowledge has largely flown out the window. I should probably remedy that, but so far it doesn’t seem like a big deal – my opponents’ joseki have also been a bit off, and they probably wouldn’t know how to punish my mistakes even if they did have joseki memorized. I’ve been surprised at how well I’ve been doing in the openings of games: that’s my traditional weakness, and even though I’ve probably been sandbagging a little, I wouldn’t have expected to come out of the openings more or less even at best.

We are, alas, moving to the main Menlo Park office – no more horses and beautiful scenery around. But maybe I’ll be able to occasionally get games against real players over my lunch break. I should see if there’s some Sun Menlo Park social mailing list where I can ask about that.

wordpress 2.0.5

November 24th, 2006

I just upgraded to WordPress 2.0.5; let me know if there are any problems.