
learning japanese: a month and a half in

August 15th, 2007

I’m on the fourth chapter of my Japanese textbook now, enough for a new set of difficulties to surface. All of which ring vague bells from a decade ago; I’m trying to do things right this time, which means that I need better strategies for facing these difficulties than I had last time.

One problem: when I claim I know a vocabulary word, when I move it from the “review regularly” stack of flash cards to the “mastered” stack of flash cards, I want that to mean that I really do know the word! But, for an uncomfortable number of flash cards, what is really going on is that I can reliably, upon seeing the front of the flash card, recite what is on the back of the card. Which isn’t the same thing.

Some aspects of that problem show up no matter what language you’re learning. For example, I usually only do my cards in one direction, so I regularly drill going from megane to “glasses” but not in the other direction. Also, there are grammatical issues: to really know a verb, you should be able to conjugate it at will, and recognize it in any of its forms.

Those particular problems aren’t that big a deal for me yet. I haven’t learned too much grammar, and I’m doing a pretty good job so far in being able to go from English to Japanese even though I’m drilling Japanese to English.

What is a big deal is the presence of kanji. This increases complexity in a few different ways. For one thing, I have to go between three forms of the word (kanji, pronunciation, and English) instead of just two forms (Japanese and English). And, of course, a single kanji character can have multiple readings, which may or may not be signalled by adding some kana at the end. (After some experimentation, I’ve decided to exile all the extra kana to the back of the card, instead of leaving it on front.)

That’s the obvious problem, but there’s also a more subtle one. When I see a vocabulary card, I see something I wrote by hand, taken from a limited number of other vocabulary cards that I’ve written. So when I see, say, the kanji for bijutsukan, what I really see is a card with three kanji characters on the front, where in this case I happen to have written the kanji characters a little smaller than would be ideal, and a little bit off center. And, honestly, that alone is almost enough to allow me to uniquely identify the vocabulary card from among my current set, especially if one of the radicals in one of the kanji seems familiar for some reason.

But, of course, that doesn’t mean that I know the word at all: if I saw those same three characters in a Japanese book, I would have almost zero chance of recognizing them as bijutsukan, and for that matter I’d be equally likely to mistakenly think that some other sequence of three characters might represent bijutsukan. I now appreciate what kids learning to read and write English are going through when they see a sequence of letters and guess that it’s some other word that happens to start with the same letter or two and is more or less the same length: they don’t have any deeper grasp of the phonetics of written English than I do of the radicals that make up a kanji character, and in both cases we quickly get overwhelmed by the task of really understanding how a word is written.

So what do I do about this? Part of my solution is to simplify the problem. I can adopt a classic agile planning technique: recognize that there isn’t a strong correlation between the difficulty of a task and its business value, and that, when choosing between two tasks of equal business value, you’ll get the quickest bang for the buck by doing the easier one first. What that translates to in this case is that, all things being equal, I should try to memorize words made up of as few kanji characters as possible. So one is best, two might be okay, especially if I’ve seen one of them before, three is unlikely to be a good idea. And not all kanji characters are created equal: given a choice, I should choose characters made up of as few radicals as possible, to increase the chance that I’ll be able to really know the whole character. (As opposed to, say, having the left side of the character trigger a memory in me.)

That alone isn’t good enough, though: it doesn’t leave me with a strategy for dealing with important but more complicated characters/words, and doesn’t directly address the complexity of what it means to learn a character. To really learn a character, I should be able to write it out myself, and be able to reliably tell it apart from similar-looking characters, characters with, say, the same radical on the left and on the upper-right but a different one on the lower right.

The answer to both of these aspects of knowledge is, for me, the same: I need to learn to love radicals. Once I really know the radicals, I won’t have to, say, recognize and reproduce the thirteen strokes making up a complicated character, I’ll just have to recognize and reproduce the three radicals making it up. That’s not a simple problem, given that there are about 200 radicals to grapple with, but it’s at least a tractable problem. Especially since the radicals in a character aren’t chosen arbitrarily: radicals have meanings on their own, so you can frequently build up the meaning of a larger character out of the meanings of its radicals, and radicals can at times lend their pronunciation to the pronunciation of the entire character. So there’s real structure to work with here; as I buff up my radical credentials, it should become easier and easier for me to learn more and more complex characters.

And, fortunately, I’ve recently acquired an excellent book on the subject. It does a great job of showing how the characters evolved (and is historically accurate, as far as I can tell), and of gradually introducing radicals and showing how they add meaning in more and more contexts. So I’m gradually adding characters from that book into my stack of cards to memorize, even if I haven’t run into those characters in my textbook, and trying to remember the evolution of those characters in the bargain. Should make learning characters more fun, and easier.

That’s the main problem; there are a couple of other problems that I’m running into as well, though. One is that there are too many new words in each chapter for me to be able to memorize. I was worried about this three weeks ago: it seemed like my stack of unmemorized cards was getting longer and longer. Since then, I’ve been doing a pretty good job of moving cards into the memorized stack, but I don’t want to ignore the problem. (Especially since I’m now adding vocabulary cards from a source other than my textbook!)

Part of the solution is to simply not memorize every new word in each chapter. Each chapter introduces maybe 80-100 new words; I’m pretty sure that I can get away with only learning 40 or 50 of them right then. So I’m picking the ones that seem particularly likely to be important, or particularly likely to be easy to learn, and I don’t sweat the other ones for now. And if, in subsequent chapters, I keep on encountering a word that I didn’t memorize when it first showed up, then I can always learn the word later. It’s not completely clear that this is a scalable strategy – maybe, once I get to chapter 15, I’ll have to memorize 5 new words from each of the previous 15 chapters along with an extra 50 words from that chapter, which would suck – but I think it’s worth giving a try.

The second part of the solution is basic queue management: the problem here is an unbounded queue. And if you don’t want to have an unbounded queue, then put a cap on it! So I could adopt a rule that I can never have more than, say, an inch of unmemorized vocab cards in the box. Once I reach an inch, I have to do something else until the stack goes down: some combination of memorizing a smaller proportion of words in each chapter, taking longer to go through each chapter, and learning to be more effective at memorizing words. I don’t have an exam schedule or anything that I’m working towards: I want to do this right, and to do this right I need to balance my capacities, my time, and the number of words that I’m attempting, instead of letting artificial pressures skew my attempts at the cost of a loss of effectiveness.
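Put in code terms, the cap rule is just a bounded buffer. A toy sketch (my own names, nothing I’m actually running):

```ruby
# A toy model of the card-cap rule: the unmemorized stack refuses
# new cards once it hits its limit.
class CardBox
  def initialize(cap)
    @cap = cap
    @unmemorized = []
  end

  # Returns false instead of letting the queue grow without bound.
  def add(card)
    return false if @unmemorized.size >= @cap
    @unmemorized << card
    true
  end

  # Studying a card frees up room for a new one.
  def memorize!
    @unmemorized.shift
  end
end

box = CardBox.new(2)
box.add("megane")     # => true
box.add("bijutsukan") # => true
box.add("kanji")      # => false: at the cap, go study first
box.memorize!
box.add("kanji")      # => true again
```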

So far, all the problems I’ve talked about have been about memorizing words, but it’s also starting to get a little harder to put everything in the chapter together. In the fourth chapter, for the first time, I had a bit of trouble doing all the exercises in the chapter the first time through, because of a combination of not having all the grammatical details, the usage details, and the words at my fingertips. I think that, for now, the best approach is to acknowledge that this is a potential issue, and be alert for warning signs. So I’m planning to go through the exercises in this chapter until I can do them all easily; if that means it takes three weeks to get through the chapter instead of two, that’s fine.

I imagine that further non-vocabulary issues will crop up as I go along: needing to memorize conjugations, for example. It’s been a while (almost 15 years! Ouch) since I’ve had to deal with that sort of thing, but I was once adequate at memorizing grammar, so I assume I’ll be able to do it again, and I don’t think Japanese holds any particular horrors in that area. And further holistic issues will appear: getting practice in reading actual books (and finding a suitable gradual series of books to practice that), practicing spoken Japanese. I imagine that, once those become urgent problems, outside guidance will be essential; fortunately, outside guidance shouldn’t be hard to find around here.

Fun stuff.

i guess that’s why they’re there

August 14th, 2007

A few months ago, I lost the foam covers on my earphones (standard iPod earbuds). I didn’t worry about it too much at the time; they sound fine without them.

A month ago, the left ear in one set of earphones died. I didn’t notice exactly when it happened; I chalked it up to kinking the wires, or something, and didn’t worry too much: I had extras.

While I was jogging today, the left ear in another set died; I don’t think I was doing anything in particular at the time, other than sweating.

So is the point of those foam covers to absorb sweat, so it doesn’t get into the earphones proper and screw things up? Or is my recent experience just a coincidence, and there’s another reason?

joshua bell in a subway station

August 12th, 2007

Several months ago, the Washington Post wrote an article about Joshua Bell performing in a D.C. subway station. Almost nobody noticed him; he made some money (probably a good amount for a subway musician), but certainly didn’t attract any crowds or anything.

My first reaction was: I hope that I would recognize the quality of the performance, and even stop and listen for a while. And I’m enough of a snob that I still hope that I would recognize the quality of the performance! And on the surface of it, it does seem odd that people are willing to pay a hundred bucks to hear him perform in a concert hall, but walk right past him in a subway station. The more I think about it, though, the less sure I am that I wouldn’t walk past him, too.

The first answer to why some people walked past him while others paid lots of money to listen to him is, of course, that it’s not the same people making those choices. (And, in fact, they catch one person on camera who did see him perform in a concert hall recently, and who did stop to listen in the subway station.) There’s certainly something to that.

But I’m not happy with that answer. Yes, people don’t always notice beauty even if it’s sitting right in front of them. But if we take Joshua Bell as the exemplar of beauty, well, recordings of a wide range of his performance are a short Amazon search away; just how different is not buying one of those CDs from walking past him in a subway station?

Sure, it’s a few clicks and 15 bucks different, but I can scrape up 15 bucks without too much trouble these days and I’ve already done the clicks. They’re recorded performances instead of live ones; live performances are special, no question, but a recording studio has certain acoustic advantages over a subway station.

Which leads to this answer: access to beauty is, in general, not something in short supply in my life. What is in short supply is time, and a way to choose between the staggering amounts of beauty that are available to me. As ways to choose, stopping to pay attention to beauty that you walk past in a subway station isn’t a bad one. But back when I was a regular denizen of subway stations, my life wasn’t a soulless void that needed to be filled by famous performers: I was talking to friends or reading books in those subway stations, and the only reason I wasn’t listening to music was that I didn’t have as good portable audio options at the time. (Well, that plus I really like reading books.)

Saying that those are bad choices and that I should be listening to Joshua Bell instead is just being an elitist asshole. (To be clear, I’m not accusing the author of the article or Joshua Bell of being an elitist asshole: I have no reason to believe they are espousing that point of view. Though the author’s comment that “I bet Yo Yo Ma himself, if he were in disguise, couldn’t get through to these deadheads” makes me wonder, for a couple of reasons.) And, frankly, while I’m sure he’s a fine performer, I’d far rather have my current collection of CDs than an all-Joshua Bell collection of recordings.

By all means, pay attention when unexpected beauty enters your life, and go out of your way to fill your life with beauty. But beauty comes in countless forms; keep an open mind as to where you might find it, as to where others might find it.

And there’s something to be said for getting to your appointments on time, too…

more shuffle, please

August 12th, 2007

Last weekend, we were driving back from the Exploratorium, and were listening to the iPod in shuffle mode most of the time. As expected, it gave us a delightful selection to listen to: Stan Freberg (“There’ll Never Be Another War”, the Civil War version as opposed to the WWI reprise); a 10-second snippet of Katamari music; the title song from Rhinoceros Tap; two different Jewlia Eisenberg songs (one in her Charming Hostess incarnation, another as Red Pocket); some Herbert Grönemeyer (whom Miranda has turned into a fan of recently); Bernstein’s “The President Jefferson Sunday Luncheon Party March”; a portion of Mathis der Maler; some Andrews Sisters; a bit from Striking 12 that we skipped over since we’d listened to it on the drive up; and a few more pieces that I’ve forgotten. Hard to imagine a better way to spend a car ride.

And then, on Thursday, I was taking Miranda to daycare; what should the iPod decide to give me but “There’ll Never Be Another War”? Hmm, that’s a bit of a coincidence – which version of the song is it? Ah, “brother won’t fight brother”, Civil War again. Still, coincidences happen. After that same snippet of Katamari music, though, I was rather more suspicious, and “Rhinoceros Tap” sealed the deal. Though I did, after dropping Miranda off, fast-forward through fourteen songs and verify that we’d listened to all of them on our recent drive.

There are about 1200 songs in the iPod right now; clearly this is not a coincidence. I’d suspected problems like this in the past, but this was the first time that I’d gathered such compelling evidence. I guess they don’t bother to use a decent algorithm for picking new seeds for their random number generator? Which kind of boggles the mind – the device has a clock in it, so they can just use the current time as a seed! Not necessarily the only thing you’d want to use as a seed – I can imagine the clock dying, in which case you wouldn’t want shuffle to always return the same thing – but whatever they’re doing now sure isn’t good enough.

Maybe they keep a persistent seed which gets reset to zero when you reset the iPod? (Hopefully not when you just sync it, that would be too stupid for words.) And then gets bumped up each time you do some specific action (enter shuffle mode, maybe)? Because I do have to reset my iPod a few times a week; given that I only add or remove (non-podcast) songs once or twice a month, that could be a reason why I’m running into this particular problem.
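That theory is easy to demonstrate in miniature (illustrative Ruby, obviously not Apple’s firmware): reseed a PRNG with the same constant and you replay the same “random” order.

```ruby
# Illustrative only: a PRNG reseeded with the same constant replays
# the exact same shuffle, which is consistent with what I observed.
songs = (1..1200).to_a

srand(0)                  # constant seed, as after a hypothetical reset
first_run = songs.shuffle

srand(0)                  # same seed again
second_run = songs.shuffle

puts first_run == second_run   # prints "true": identical order both times

srand(Time.now.to_i)      # a clock-based seed avoids the replay
```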

Sigh.

mechanical assistance

August 6th, 2007

An interesting analysis of the beneficial effects of Bonds’ armor on his swing. Sounds plausible to me, if not 75-100 home runs plausible; I’d be curious to read further studies on the topic.

And, if it’s true, what’s the proper way to deal with the situation? I guess I’d lean towards allowing body armor for everybody, with some amount of mechanical/weight restrictions.

car models

August 4th, 2007

Another thing I learned on the trip: I can see why Ford retired the Taurus. (Though I guess they’re bringing it back.) I didn’t like ours from the moment I sat down in it, mainly because I felt like my head was banging against the ceiling. I realize that I like to sit farther forward than most people do in cars – otherwise, my arms start feeling like they’re getting RSI twinges – but if my Saturn Ion can give me plenty of headroom, why can’t a larger car? And I far prefer the acceleration and braking in the Ion, though admittedly I thought the Ion’s brakes were overly sensitive when I first started driving the car. Fortunately, I got used to the amount of braking required by the time we started to run into Boston drivers doing stupid stunts. (Otherwise, we would have run into them literally instead of just metaphorically.)

Liesl, however, is getting thoroughly sick of our older Saturn, with good reason: it’s required way too much maintenance. And, even when it works, I don’t enjoy driving it as much as the Ion. So we’ll be buying a new car this year, and one from a manufacturer whose quality we trust, which means Toyota or Honda. Not clear yet which model, though.

One question: do we want a slightly wider car, for those few times when we have five people in the car? Another question: how much do we care about mileage? A third question: is it possible to find a Toyota dealer who isn’t a complete asshole? I think a Corolla would be too small, but a Prius, a Camry, a Civic, or an Accord would all be plausible choices.

Any recommendations?

belches

August 4th, 2007

Miranda is currently under the impression that the name of The Blue Danube Waltz is “the burping song”. She practiced some on vacation; she’s not nearly as good yet at burping melodically on demand as Wakko, but she’s definitely improving.

(Trivia: that’s not actually the regular Wakko voice doing the burps: they are stunt burps provided by Maurice LaMarche, the voice of The Brain.)

boston trip notes

July 29th, 2007

Some random notes from our recent trip to Boston and its environs:

  • T tokens are no more. Which made me a little sad, but I was very happy that, when arriving Tuesday evening for a trip where we’d be leaving the next Tuesday morning and would spend three days outside of Boston, there was a week pass available that was a good value. And I now know that kids under 12 can ride for free, but didn’t know that when buying the passes…
  • I was surprised that we got a good rate at the Park Plaza for a couple of days – is it normally affordable, or did we get lucky with a Tuesday/Wednesday request? Good location (though it took us a little while to find it, because we were confused by the construction at the Arlington T stop), and I could live without free internet access for two days. And an Amino set-top box on the TV – just like being at work!
  • Hampton Inn has decent internet access at no extra charge. Though I was pretty annoyed at the fake nameserver at the Norwood one that sticks in an ad page if an address doesn’t resolve. Especially the one evening when, for whatever reason, a fair number of lookups were timing out, poisoning any future requests to those domains for the next 15 minutes or so. Not good if you’re reading blogs and can’t get to feedburner.com any more…
  • I was impressed that we could go from downtown Boston, to a turnpike entrance three short blocks away, to out of town almost immediately. Especially since it doesn’t feel like there’s a turnpike cutting through downtown Boston, though I realize that I have walked on bridges over it several times.
  • Sturbridge Village turned out to be a really good choice for a place to spend much of a day. Enough stuff to keep us interested, very low key, we got to see 1820’s welding technology in practice, Miranda liked it too.
  • The suburbs that aren’t in the inner ring seem to kind of suck, at least near the arteries. I was not pleased with being stuck traveling at 5 miles an hour on 128 at 5pm, and route 1 in Norwood was not a place where I’d want to spend much time, if largely for aesthetic reasons.
  • Got to see a couple more retirement communities. I’m glad these things are around. (Though I’m sure there are bad ones out there, too.)
  • Didn’t get to see almost any friends or old haunts: we were too busy doing other stuff. Which is fine, actually: almost all of my Boston-area friends have moved away. I wish I’d had another day to just putter around places, but I can live with that.
  • The MGA is still active. Unfortunately, I couldn’t make it on a Tuesday or a Friday, so I didn’t get to see any of my old friends from the club, but you can get together a few people to play go on a Sunday at the Diesel Cafe. Which apparently opened about a year after I left the area; it’s a long narrow space (running all the way through the building from one street to the next), with good food and pleasant decor.
  • That day, about 75 percent of the people in the cafe were using laptops, and about 20 percent of the people were reading the latest Harry Potter. (Which had come out the day before.)
  • I enjoyed meeting blog reader Chris Ball in person (and other MGA members and Chris’s wife Madeleine), and we had a couple of exciting games – we turn out to be quite close in strength, conveniently! And I got to see the OLPC laptop in person, too.
  • Harvard Square is doing okay; a few stores I like closed, one out-of-place building has appeared, but no wholesale destruction. Wordsworth’s has closed (though their children’s book store still exists, didn’t go in to see what it’s like these days); Harvard Book Store is still open. (I also didn’t go into the Coop to see what it’s like these days.) I’d be willing to believe that the square is declining, but I’d also be willing to believe that it’s at a steady state.
  • And Schoenhof’s is still open. I broke my rule and bought several books that I don’t plan to read immediately, that indeed it’s not completely clear that I’ll ever read. But I was just so happy that the store is there! One book on learning kanji that I actually have started, a general Japanese grammar, and small individual books on verbs, particles, and connections (“Making your Japanese Flow”.)
  • Grammar and verbs are pretty basic concepts, but I like the ideas of books on particles and connections. I was going to say that those seemed like “only for Japanese” sorts of things, but of course there’s The Greek Particles.
  • We went to a couple of old favorite restaurants. The food at Chez Henri is still good, but the waitress we had drove me crazy. When I go out to eat, I do so for exactly two reasons: the food and the company of people I’m eating with. The waitress apparently thought that I had several other goals for the evening, prioritizing (among other things) her comedy routine above, say, getting us dessert menus. I am pleased to say, however, that the Elephant Walk still has both excellent food and excellent service. (Though it’s not that much better than the food we make at home from their cookbook.)

Not sure when we’ll visit again, but I’m glad that we’ve managed to make it back every four years or so.

random links: july 28, 2007

July 28th, 2007

xml, html output

July 21st, 2007

My HTML output class is now at what I expect to be a reasonably stable state. It’s not by any means a perfect solution for the world’s HTML needs, but it can generate the output that I want without much excess typing, which is all that matters.

Actually, it divided into two classes this morning. First, XmlOutput:

  class XmlOutput
    def initialize(io)
      @io = io
      @indentation = 0
      @elements = []
    end

    def element(*element_and_attributes)
      if (block_given?)
        open_element(element_and_attributes)
        yield(self)
        close_element
      else
        write_indented_element(element_and_attributes)
      end
    end

    def inline_element(*element_and_attributes)
      "<#{element_and_attributes.join(" ")}>" +
        yield +
        "</#{element_and_attributes[0]}>"
    end

    def line
      if (block_given?)
        indent
        @io.write(yield)
      end

      @io.write("\n")
    end

    # FIXME (2007-07-21, carlton): Can I use define_method to
    # construct a method taking a block?
    def self.define_element(element, *attributes)
      module_eval element_def("element", element, attributes)
    end

    def self.define_inline_element(element, *attributes)
      module_eval element_def("inline_element", element, attributes)
    end

    def self.element_def(method, element, attributes)
      %Q{def #{element}(#{attr_args(attributes)} &block)
           #{method}("#{element}", #{attr_vals(attributes)} &block)
         end}
    end

    # Join explicitly: relying on Array#to_s to concatenate the
    # pieces only works on Ruby 1.8.
    def self.attr_args(attributes)
      attributes.map { |attribute| attribute.to_s + "_arg, " }.join
    end

    def self.attr_vals(attributes)
      attributes.map do |attribute|
        '"' + attribute.to_s + '=\\"#{' + attribute.to_s + '_arg}\\"", '
      end.join
    end

    def write_indented_element(element_and_attributes)
      line { "<#{element_and_attributes.join(" ")} />" }
    end

    def open_element(element_and_attributes)
      line { "<#{element_and_attributes.join(" ")}>" }
      @indentation += 2
      @elements.push(element_and_attributes[0])
    end

    def close_element
      element = @elements.pop
      @indentation -= 2
      line { "</#{element}>" }
    end

    def indent
      @io.write(" " * @indentation)
    end
  end

I’ve given up on the whole public/protected/private distinction, for now: I don’t see much point in it for programming that I’m doing by myself. But I suppose it does have uses when explaining code to others: if you were to use the class directly, then you’d use element, inline_element, and line. The first is for an XML element that you deem important enough to put the opening and closing tags on their own lines (perhaps head and body for HTML); inline_element is for XML elements that you want to stick in the middle of lines (perhaps cite and a for HTML). And line is for text that you’re inserting, either passed as a string or generated via inline_element. They all take blocks, to either fill in the middle of the elements or the lines; two of them do something useful if not given a block, and the third could easily enough be made to if I need that functionality. Oh, and the element functions have a crappy way of specifying attributes.
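To make that concrete, here’s the class in direct use (condensing the instance methods above into one runnable snippet; the ul/li content is made up):

```ruby
require 'stringio'

# Condensed copy of XmlOutput's instance methods from above, just
# enough to show the element / inline_element / line trio in action.
class XmlOutput
  def initialize(io)
    @io = io
    @indentation = 0
    @elements = []
  end

  def element(*element_and_attributes)
    if block_given?
      line { "<#{element_and_attributes.join(" ")}>" }
      @indentation += 2
      @elements.push(element_and_attributes[0])
      yield(self)
      closing = @elements.pop
      @indentation -= 2
      line { "</#{closing}>" }
    else
      line { "<#{element_and_attributes.join(" ")} />" }
    end
  end

  def inline_element(*element_and_attributes)
    "<#{element_and_attributes.join(" ")}>" + yield +
      "</#{element_and_attributes[0]}>"
  end

  def line
    if block_given?
      @io.write(" " * @indentation)
      @io.write(yield)
    end
    @io.write("\n")
  end
end

out = StringIO.new
o = XmlOutput.new(out)
o.element("ul", "class=\"books\"") do
  o.line { o.inline_element("li") { "a book" } }
end
print out.string
# <ul class="books">
#   <li>a book</li>
# </ul>
```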

Which works well enough, but still requires more typing (in my case, manifesting itself as > 80 column lines) than would be ideal. Which is where the class functions define_element and define_inline_element come in. Here’s HtmlOutput:

  class HtmlOutput < XmlOutput
    define_inline_element :a, :href

    define_inline_element :span, :class

    define_inline_element :li
    alias_method :inline_li, :li

    define_inline_element :title

    define_inline_element :h1
    define_inline_element :h2

    define_element :head
    define_element :body

    define_element :div, :id

    define_element :ul, :class
    alias_method :ul_class, :ul
    define_element :ul

    define_element :li

    define_element :link, :rel, :type, :href

    def html(&block)
      line { "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\"" }
      line { "  \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">" }
      element("html", "xmlns=\"http://www.w3.org/1999/xhtml\"",
              "xml:lang=\"en\"", "lang=\"en\"", &block)
    end
  end

This lets me create methods corresponding to the elements that I care about. If those elements take attributes (as in <a href=...>), I pass them as extra arguments (define_inline_element :a, :href), and the generated methods take arguments that are the values for the attributes. So, if I want to generate the following:

  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
  <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
    <head>
      <title>The Title</title>
      <link rel="stylesheet" type="text/css" href="styles.css" />
    </head>

    <body>
      <h1>Main Header</h1>
      <ul>
        <li><a href="http://site/page/">link text</a></li>
      </ul>
    </body>
  </html>

I write:

  o.html do
    o.head do
      o.line { o.title { "The Title" } }
      o.link("stylesheet", "text/css", "styles.css")
    end

    o.line

    o.body do
      o.line { o.h1 { "Main Header" } }
      o.ul do
        o.line do
          o.inline_li do
            o.a("http://site/page/") { "link text" }
          end
        end
      end
    end
  end

Admittedly, this isn’t the eighth wonder of the world or anything, but I do think the interface will work pretty well for the specific uses that I have in mind. Or maybe not – I read the relevant chapter in the Pickaxe book this morning; they describe a library with an interface basically identical to what I ended up with, but then comment that people almost never use it, typically preferring to use some sort of HTML template with embedded Ruby instead. And maybe I’ll switch to a solution like that as I get more used to the area.
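For reference, the embedded-Ruby alternative they describe looks roughly like this with the standard library’s ERB (my toy template, not anything from the book):

```ruby
require 'erb'

# A toy ERB template: logic lives in <% %> tags, output in <%= %>.
template = ERB.new(<<~HTML)
<ul>
<% items.each do |item| %>  <li><%= item %></li>
<% end %></ul>
HTML

items = ["one", "two"]
puts template.result(binding)
# <ul>
#   <li>one</li>
#   <li>two</li>
# </ul>
```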

However that turns out, there are two bits that I want to talk about. One is what I discussed in my previous post, that it was a lot of fun starting with a complex bit of output and refactoring my way into a class that generated it. I won’t yet propose that as the way to go in all situations, and I’m not even sure it actively helped me here: if I’d started out wanting to build up a solution from scratch instead of decompose one out of a monolithic print statement, I don’t see any reason to believe it would have turned out differently or gone any slower. But it was a very pleasant way to develop code, I’m confident it didn’t slow me down at all, and I only spent about 10 minutes of development time wondering what was the best thing to do next.

If nothing else, it will give me further motivation to write my acceptance tests early: currently, I have them in mind from the start of a task, but I don’t usually actually write them until the code that they’re testing is finished. That delay isn’t usually for any good reason, it’s simply because I don’t yet like writing acceptance tests as much as I like doing other things, but if I can start to see real effects out of writing the acceptance tests earlier, I’d probably switch to doing so. (It would help if I started using Fit, too; for now, though, I’m not convinced I’m working in areas where that is an obvious win.)

The second bit I want to emphasize is that I love the way the definition of HtmlOutput looks. This is the second time in this project that I’ve done something like that: there’s a base class that implements class functions designed to let you provide functionality in a subclass without writing explicit method definitions in that subclass! Much more fun than sticking in protected hooks here and there, and when it works the subclass definitions are dramatically shorter (and freer of boilerplate repetition) than they would be if I were, say, programming in Java. As the FIXME comment shows, I’m not entirely comfortable with the implementation in this particular case, and now that I think about it, I’m not entirely comfortable with my implementation in the other case as well, but the fact that I can do it at all pleases me greatly.
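On that FIXME, for what it’s worth: from Ruby 1.9 on, a define_method block can itself declare a &block parameter, so the string eval could in principle go away. A sketch with made-up names, not XmlOutput’s actual internals:

```ruby
# Since Ruby 1.9, define_method blocks can declare &block, so
# block-taking methods can be generated without string eval.
class Greeter
  def self.define_greeting(name)
    define_method(name) do |&block|
      "Hello, #{block.call}!"
    end
  end

  define_greeting :greet
end

puts Greeter.new.greet { "world" }   # prints "Hello, world!"
```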

So: I can generate one particular piece of HTML. Now I just have to have that HTML vary based on the contents of a database. Shouldn’t be too hard; I hope I’ll find a few more ways in which the implementation improves upon its Java counterpart.

generating html output

July 20th, 2007

One decision that I had to make when doing the HTML output part of my book database: should I roll my own HTML generator, or use somebody else’s? I ended up going the ‘roll my own’ route, partly because it sounded like more fun, and partly because it would be easier to get the acceptance tests working.

As written, the acceptance tests do a strict textual comparison, and it seemed unlikely that it would be easy to find another library that would generate code indented exactly the way I want. Admittedly, that’s a sign that the acceptance tests are overly strict, so the right thing to do would be to find a way to relax that validation, and in fact I think I already have code for that around. (I use it when validating that my output is legal XHTML.) But that combined with laziness and a desire for fun was enough to sway me.
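For what it’s worth, one way to relax a strict textual comparison (a hypothetical helper, not the code from the post) is to normalize whitespace before comparing:

```ruby
# Hypothetical helper: compare two chunks of HTML while ignoring
# indentation and blank lines, so an acceptance test doesn't care
# exactly how the generator chooses to indent its output.
def html_equal?(expected, actual)
  normalize = lambda do |text|
    text.lines.map(&:strip).reject(&:empty?)
  end
  normalize.call(expected) == normalize.call(actual)
end

expected = "<ul>\n  <li>a</li>\n</ul>\n"
actual   = "<ul>\n    <li>a</li>\n\n</ul>"
puts html_equal?(expected, actual)  # => true
```

That’s still a line-by-line comparison, just an indentation-blind one; a genuinely structural comparison would mean parsing both sides, which is more machinery than this job needs.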

And it has been fun! I started out with one unit test that checks for the output of a page for an author who hasn’t written any books. The easiest way to get that to work was to hardcode the expected output. So, at this point, my implementation was a function that spit out a really long string.

And then it was time to start refactoring. A single long string is hard to work with, so I broke it up into separate functions for the different parts of the page. Which, actually, I haven’t done much with; it will help me in the future, though. But next came a series of more immediately useful refactorings:

  • I had the output function generate an array of lines instead of a single multiline string.
  • I noticed that the lines in the array almost all started with whitespace, so I added another argument which is an amount of whitespace to add to all the lines in the array.
  • The indentation changes in predictable ways: so, rather than pass in “indent 6” here and “indent 8” later on, I had the HtmlOutput class have a member variable with the current indentation, and provided open and close member functions which added or subtracted 2 from the indentation level.
  • The adding and subtracting happen in conjunction with text, so let’s pass that text in as an argument to open and close.
  • The opening and closing text consists of opening and closing tags: so let’s keep a stack of the elements that are in the current scope, and have close generate the closing tag automatically. (And provide a way to pass in attributes to the opening tag.)
  • Having to explicitly type open/close pairs violates my RAII instincts; in Ruby land, that means that we should just have an element function which generates the tags itself, and which takes a block as an argument to fill in the middle.
  • But what about elements with no body, where the opening tag is also the closing tag? No problem: I just won’t pass in a block there, and element can alter its behavior based on block_given?.
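Putting those steps together, a minimal version of the class might look something like this (my reconstruction; the actual HtmlOutput surely differs in its details):

```ruby
# Minimal reconstruction of the HtmlOutput idea described above:
# an indentation counter, a stack of open elements, and an `element`
# method that emits the closing tag automatically after its block runs.
class HtmlOutput
  def initialize
    @lines = []
    @indent = 0
    @open_elements = []
  end

  attr_reader :lines

  def element(name, attributes = {})
    attrs = attributes.map { |k, v| " #{k}=\"#{v}\"" }.join
    if block_given?
      open("<#{name}#{attrs}>", name)
      yield
      close
    else
      emit("<#{name}#{attrs}/>")  # element with no body: one self-closing tag
    end
  end

  private

  def open(text, name)
    emit(text)
    @open_elements.push(name)
    @indent += 2
  end

  def close
    @indent -= 2
    emit("</#{@open_elements.pop}>")
  end

  def emit(text)
    @lines << (" " * @indent) + text
  end
end

out = HtmlOutput.new
out.element("ul") do
  out.element("li") { out.element("br") }
end
puts out.lines
# => <ul>
#      <li>
#        <br/>
#      </li>
#    </ul>
```

The nice part is that the nesting of the Ruby blocks mirrors the nesting of the generated HTML, and the closing tags and indentation come along for free.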

That’s where I am now. The next step is to handle elements that I put in the middle of a line (<cite>, <a>, etc.); I think I have a scheme that will work for that, but we’ll see where the refactorings lead me.

I’ve never programmed this way before, refactoring a class into existence based solely on a complicated chunk of expected output. I highly recommend the experience; it’s lots of fun, and has a rather unusual flavor. I’m being good and adding unit tests for all the methods I create; the thing is, though, that each method seems to last for about half an hour before it and its unit test get refactored out of existence, replaced by the next refactoring! For the longest time, the HtmlOutput tests consisted of just two tests: one was the result of the previous step of my refactoring, and the other was the next step, which I was in the process of converting the existing AuthorPrinter object (the user of HtmlOutput) over to using. Recently, though, more tests have been coming into existence, which I hope is a sign that I’m settling into a more usable and powerful interface.

My only regret was that I did most of this refactoring without an internet connection, so I couldn’t check all of the intermediate steps into my Subversion repository, or get a good view of the differences between steps. Ah well; maybe I’ll switch over to a more distributed version control system for my next project.

jason kendall

July 16th, 2007

I see we managed to foist Jason Kendall off on the Cubs. About time: not only is he blocking an actual prospect (Kurt Suzuki), his OPS is currently the worst in the majors by a full 40 points. He may have hit twice as many home runs this year as the last two years combined, but when that brings his total of Oakland home runs to 3, it’s not saying much.

I applauded the trade where we acquired him. I was wrong.

ipod, car, shuffle

July 15th, 2007

I’ve been using a radio adapter to play my iPod in my car for the last year. Which works well enough, and is unbelievably better than having to rely on the radio or CDs to listen to music, but has its downsides. There aren’t a lot of holes in the radio spectrum around here; I’ve found one or two that work acceptably on the commute to work, but even so I get more static than I’d like. And if I venture up to, say, San Francisco, the radio holes change, so I have to either give up or try to find a different place in the spectrum to broadcast.

By now, the value of this experiment has clearly proven itself, so I figured that I’d see if I could get my radio modified to have some sort of direct connection. Which turned out to be really easy in this case: the radio is designed to accept an external CD changer, and the mechanism that that uses turns out to be fairly general, so you can plug iPod adapters in there, too. Cost about 60 bucks, which is fine; a lot cheaper than buying a whole new radio just to get an extra jack.

It took two tries to get it right. The first time, they installed a unit that had the iPod controlled through the radio itself; maybe this would have been fine if I’d had a more flexible radio with a better screen, but it would have made the device almost completely useless in my scenario: I’m not sure there was even a way to switch from an episode of one podcast to an episode of a different podcast. Fortunately, I realized the problem before I left the lot; they were very good about straightening things out and installing what I really wanted once we realized we’d miscommunicated.

Actually, I’m not entirely sure if it’s what I really wanted: I still have a proprietary iPod connection coming out of my radio, and I feel a bit guilty about not sticking with open standards. The thing is, though, I’d need a proprietary dock connector somewhere, or else accept an inferior signal out of the earphone jack. And it turns out that what they installed is a two-part system, where there’s something connecting the radio to a pair of standard RCA jacks (or their moral equivalent) and a second gizmo that goes from the RCA jacks to my iPod (plus a power line to charge the iPod, which is nice but no big deal). So the non-proprietary part turns out to be nicely modularized: you can’t tell that from the outside, but there’s a nice standards-compliant bit hidden inside.

While I am talking about iPods: I don’t believe I’ve blogged about the virtue of shuffle play. I hadn’t gotten around to giving that a try for a while: I didn’t think I’d particularly like it more than any other way of listening to the iPod, and I was a bit worried about how it would interact with podcasts and with classical music. But when my Mac went in for repairs earlier this year, I had no way to update my podcasts; the repairs took longer than expected, so I ran out of saved episodes, and decided to give shuffle mode a try instead of listening to albums again. And it’s great!

My fears turned out to be unfounded. Podcasts don’t get thrown into the mix, which is clearly the right thing to do. A nice bit of design: it would have been easy to treat podcasts like any other music, just with a special tag (e.g. something could be “rock”, “classical”, “podcast”, etc.), but in fact they treat podcasts differently. This is one example; syncing rules are another; the “listened to” mark is a third.

And, as far as classical music goes, when I started doing this, the only classical music I had on the iPod was some Schumann lieder, both Glenn Gould recordings of the Goldbergs, and the Glenn Gould recordings of both WTC volumes. And all of that shuffled just as well as pop songs – I kind of wish that a prelude and its corresponding fugue got played together, but it’s not that big a deal. And I’ve even learned something: to my embarrassment, I couldn’t reliably tell whether a piece was a Goldberg variation or one of the preludes, but I’ve gotten much better now at distinguishing the two. I’ve since put more classical music on the iPod; it works fine.

Having said that, I doubt that, say, symphonies would work very well. Certainly the choice of track markers makes a difference: I have a CD of Peter Maxwell Davies’s Eight Songs for a Mad King and Miss Donnithorne’s Maggot, which puts that on two thirty-minute tracks. (As opposed to, say, splitting the first work into eight tracks.) When we ran into one of those on shuffle mode, it rather put a damper on the trip; we ended up hitting the next button to skip to the next piece in the shuffle, and I took them off the iPod when I got home. No big deal, really; if I’d really wanted to have those on my iPod, occasionally hitting ‘next’ wouldn’t have been a serious sacrifice.

On a similar “track placement” note, there are a few talking + singing CDs I own (Flanders and Swann, Arlo Guthrie and Pete Seeger) where each track is “song + subsequent talking” instead of “talking + subsequent song”, even though the talking always relates to the song after it instead of the song before it. Fortunately, I’ve listened to those CDs a zillion times so I know what they’re talking about anyways, and they’re entertaining enough speakers that I don’t mind hearing talking that isn’t connected to a song I’m about to hear.

So my concerns turned out not to be a problem in practice. And the benefits were real:

  • It got me listening to some of my old friends again.

Before this, if I wanted to listen to a piece of music from my library, I had to actively decide to do so, which usually meant actively deciding that I wanted to take the time to listen to a whole album. No problem for long drives; not something that I was finding time to do on my relatively short commute.

  • It’s something that everybody can agree with.

The rest of the family doesn’t want to listen to my podcasts; and if I ask Miranda what she feels like listening to, she’ll normally pick one of a handful of albums, most of which I don’t mind (in fact, Philadelphia Chickens is stunning) but which I also don’t want a steady diet of. But she’s happy to listen to most of my music, even though she doesn’t ask for it herself (perhaps because she doesn’t know what all is on there). Because of shuffle mode, she’s even turned into a bit of a Charlotte Martin fan, and our running into a couple of songs from Striking 12 in close succession got her asking to hear all of that album, which is now sitting in the CD player in her room.

  • It fits into gaps in my commute.

Occasionally, for example, I’ll be finishing up a podcast episode as I get off of the highway. I still have six or seven minutes until I get home, which probably isn’t enough for me to want to start another podcast. But shuffle play fits the gap nicely: I can go to shuffle mode and listen to a couple of songs over the course of the rest of my drive.

  • It’s a non-inventory buffer against variance.

I occasionally run out of podcast episodes to listen to. (Well, other than JapanesePod101 episodes, but I don’t want to overdose on that.) If I were to increase the number of podcasts that I listen to in order to minimize the chance of that happening, however, my queue of unlistened-to episodes would quickly grow out of control. But I couldn’t possibly consider driving or jogging without something to listen to; listening to the radio or manually selecting albums are both possibilities, but shuffle mode works a lot better.

Don’t get me wrong: I still mainly listen to podcasts, and I’m certainly not about to buy a shuffle-only iPod. And I’m not going to wax rhapsodic about insights from unexpected juxtapositions: it’s all music that I like to listen to individually, and am happy enough to listen to in any order, but there’s nothing deeper than that. But shuffle mode is great; if you have an iPod, find yourself in situations where you have 5-30 minutes to listen to it, and haven’t given shuffle mode a try, then I encourage you to do so.

queues, tags, blog posts

July 10th, 2007

As I’ve mentioned before, I read others’ blog posts using Google Reader. It shows me the unread posts in reverse chronological order, I go through them and read them; if I want to keep one around for a while for some reason or other, I hit the ‘s’ key to star it. If I run out of new posts and don’t feel like, say, writing here or going to bed or reading a book or something, I go to the starred posts and give them a look. I read a few and unstar them.

This worked okay for a while; recently, though, I noticed that my list of starred posts was getting longer and longer, and it was no longer clear what good those posts were doing me in general. I didn’t want to get rid of them all, but clearly the system wasn’t working.

But I should have decent queue management skills by now, no? So what can I pull out of my bag to deal with this? The Getting Things Done people talk about categorizing and emptying your inbox instead of just letting it build up; while my real inbox in this situation is unread posts, which I’m good at going through, I’m putting a big uncategorized stack right past that. Which is no good. So let’s see if categorization helps?

In my mail reader, I’d move things into folders; Reader has tags as an equivalent. (Except it’s supposed to be better since you can put multiple tags on an item. Which sounds like a good idea to me; not using that capability yet, but I can imagine it will come in handy.) So I went through the whole pile of starred posts, and tagged them all.

It took a little while to do the initial triage; I’m glad I didn’t put it off for any longer. Actually, it took me two phases: at first, I had some tags in mind (videos to watch, flash games to play, posts I’d left a comment on and wanted to return to the comment thread), which covered many of the posts, but I wasn’t sure of an appropriate tag for other ones. No problem; I just tagged those with ‘to-tag’ and kept on going. By the time I was done with my first pass in the list, I had a pretty good idea of what tags I wanted; a couple of days later, I went through the ‘to-tag’ bucket, tagged all of them with one of my other categories, and deleted that tag. So now all of my starred posts are in one of 12 buckets.

At which point, the utility of this exercise was clear. Some of the buckets really are pits that I’ll never clear out: videos and flash games simply do catch my eye at a higher rate than I’ll be able to go through them, and that’s okay. So if things in those categories moulder a bit, it’s not a big deal; periodically, I’ll delete some of the older ones without watching them/playing them, and I’m fine with that.

Other buckets are much smaller and get cleaned out more regularly. For example, one benefit was identifying that there were several blog posts that had something interesting to say that was a bit too much for me to deal with at 10:30 at night. I tagged those with ‘read’, and now I’ve gone and read all of them, and will be able to keep that bucket low in the future.

There are some other buckets that I also expect to clean out regularly. I keep a bucket of posts that I’ve commented on; those I’ll return to every day or two to see what others have to say, and then delete them when nothing new shows up. And I have a ‘blog’ tag for things that I’m considering mentioning in one of my ‘random links’ posts; now I have an easy way of collecting those, and I imagine I’ll just generate such a post every time that I have five or six items in that bucket.

I’m still not sure of my approach to all the buckets, but that’s okay; I’ll keep on experimenting and figure it out eventually. I’m definitely pleased with the results so far; I should really reread the GTD book and give the system a serious try.

What I am not pleased with is the Reader interface. Don’t get me wrong, it works well enough, and I’m sure I’m missing some ways to use it better, but there seem to me to be some pretty strange decisions here:

  • I try to always empty out my stack of unread items, which means that tagged items that I want to get back to have to be marked as read. (Otherwise, I’d have no way of distinguishing between things I haven’t looked at at all and things that I’ve looked at and am keeping around.) So, in the “all posts” screen, I want to only look at unread items; in a tagged post view, though, I want to look at all items. I have to manually toggle between these two modes, however: it’s not smart enough to either realize that looking at only unread tagged items doesn’t make sense or to simply remember when I want to look at unread items and when I want to look at all.
  • Actually, though, in the tagged items, I don’t want to look at all of them: I just want to look at the starred ones. That way, when I’m done with a saved item, I can type ‘s’ to unstar it and not see it again. This, however, isn’t possible: there’s no way to only look at the starred items with a given tag. What this means in concrete terms is that there’s no simple way to delete a post: you have to type ‘t’ to edit the tags, and hit the delete key a bunch of times to erase the existing tag.
  • Speaking of editing the tags, there’s a bug (either in Reader or in Safari): when I add a tag to the post, the name of the new tag just sticks there in my browser window, even when I’ve moved on to other posts, until I click on the screen.
  • The tag entry screen is “helpful” in an incredibly annoying way. One of my first tags was ‘long-read’, for posts referring to documents that were long enough for me to need to set aside time to read them. Then I decided that I needed a tag for posts that I didn’t want to read right now but wanted to get back to when my brain was fresher. No problem, I’ll call that ‘read’; the helpful autocompletion will surely select that when I type ‘r’, no? No: when you type ‘r’, it lists all the tags containing an r. In fact, if you type ‘read’, it lists all the tags containing that string. And, if you type ‘read<return>’, which surely should mean that I want to tag the post with my existing tag ‘read’, it in fact selects the first tag in alphabetical order containing the substring ‘read’, which matches ‘long-read’. To get ‘read’, I had to type ‘rea’, then down arrow, then return. Which is just stupid; I ended up retagging all of my ‘long-read’ posts as ‘long’ and deleting the ‘long-read’ tag. Why autocompletion from the middle of a tag name is supposed to be a good idea is a mystery to me.
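The difference is easy to state in code: the sensible behavior is prefix matching, with an exact match winning outright, whereas Reader apparently does substring matching. A sketch of the distinction (hypothetical code, obviously not Reader’s actual implementation):

```ruby
TAGS = ["long-read", "read", "videos"]

# Substring matching (the behavior described above): 'read' matches
# both tags, and the alphabetically first match ('long-read') wins.
def substring_matches(query)
  TAGS.select { |t| t.include?(query) }.sort
end

# Prefix matching with exact-match priority: typing a full tag name
# always selects exactly that tag.
def prefix_match(query)
  return query if TAGS.include?(query)
  TAGS.select { |t| t.start_with?(query) }.sort.first
end

puts substring_matches("read").first  # => long-read
puts prefix_match("read")             # => read
```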

Maybe there’s something in the usage model that I’m missing, and maybe there is a way lurking to see only the posts I want in any given view while having single-key delete. I can’t quite see how, though. So a pity about the rough edges; still, it works well enough for now, and I hope they’ll improve it in the future.

random links: july 1, 2007

July 1st, 2007

array.join

June 30th, 2007

I was missing Array.join:

class Array
  def process_and_interpose(initial, middle, last)
    initial + (map { |i| yield i }).join(middle) + last
  end
end
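For what it’s worth, a quick usage sketch of that helper, applied to building a fragment of HTML (hypothetical example, not from the book database code):

```ruby
# Reopen Array as in the snippet above, then use the helper to build
# a one-line HTML list: prefix, transformed items joined by a
# separator, suffix.
class Array
  def process_and_interpose(initial, middle, last)
    initial + (map { |i| yield i }).join(middle) + last
  end
end

html = ["a", "b"].process_and_interpose("<ul>", "", "</ul>") { |i| "<li>#{i}</li>" }
puts html  # => <ul><li>a</li><li>b</li></ul>
```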

switched over to ruby version of the cli tool

June 30th, 2007

I’ve switched over to using the Ruby version of the CLI tool for editing my book database; works great, as far as I can tell.

Short, too:

panini$ wc -l *.rb
    9 author_writer.rb
   18 book_writer.rb
   11 closeable.rb
   24 compound_author_writer.rb
   21 connected_database.rb
   30 connected_insert_row.rb
   24 connected_result.rb
   36 connected_result_row.rb
   37 connected_table.rb
   26 connected_write_row.rb
   60 date.rb
   21 decoder.rb
    9 developer_writer.rb
   85 editor.rb
   17 enumerable_helper.rb
   16 game_writer.rb
   23 link_writer.rb
   38 object_name.rb
   45 row.rb
   11 series_writer.rb
    9 system_writer.rb
   16 table.rb
  100 writer.rb
  686 total

(That’s only the production code; the unit tests add another 941 lines.) Hard to believe how long it’s taken to write, given the number of lines of code; I guess that’s what happens when you only work for an hour or two a week, don’t do that every week, are using a new language, and are working with a technology (SQL) that you’re not completely comfortable with. I hope the “generating HTML” part will go faster; I don’t see why not, since I should be able to mitigate all of those problems except for “only work for an hour or two a week”.

I did the refactorings I had in mind after last time, and went and reread all the code looking for more. I found a few more areas for improvement, but in general I’m happy with how clean it’s been staying. I should write a tool to calculate lengths of methods: I’m curious what the proportion of one-line methods is.
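That tool would only be a few lines itself; a rough sketch (my own, and approximate: it pairs each `def` with the `end` at the same indentation, so multiline strings, one-line defs, and oddly indented code will fool it):

```ruby
# Rough sketch: report the body length of each method in a chunk of
# Ruby source, pairing each `def` with the `end` at its indentation.
def method_lengths(source)
  lengths = {}
  open_defs = []  # stack of [method_name, indent, start_line_index]
  source.lines.each_with_index do |line, i|
    indent = line[/\A */].size
    if (m = line.match(/\A *def +([\w?!.]+)/))
      open_defs.push([m[1], indent, i])
    elsif line.strip == "end" && !open_defs.empty? && open_defs.last[1] == indent
      name, _, start = open_defs.pop
      lengths[name] = i - start - 1  # body lines, excluding def/end
    end
  end
  lengths
end

src = <<~RUBY
  class Foo
    def one_liner
      42
    end

    def two_liner
      x = 1
      x + 1
    end
  end
RUBY

# Prints a hash of method name => body line count.
p method_lengths(src)
```

Running something like this over the files in the `wc -l` listing above would answer the one-line-method question directly.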

super paper mario

June 29th, 2007

I wasn’t too excited about Super Paper Mario when I first heard about it. I certainly enjoyed the 2-D Mario games when I first played them, but the state of the art has changed, and nostalgia only takes me so far. So I’ll occasionally play a 2-D platformer and enjoy it, but I figured New Super Mario Bros. filled my quota of that for the next couple of years. And the 2-D/3-D switching sounded more like a gimmick than anything else. (Unlike, say, Crush, which makes me wish I had a PSP. Well, not really, but it definitely makes me wish they’d release it for the DS.)

Then I started hearing claims that it really was a follow up to the Paper Mario series, and I started getting curious. That was a fine series, and I wasn’t wedded to the details of its RPG mechanic; a game like that that replaced the turn-based battles with platformer-style jumping sounded great to me. By the time the game came out, it was on my “buy immediately” list.

And it really is pretty neat. Fans of the original will be happy to see eight worlds of four levels each. And, most of the time, you platform your way through the level, moving left to right more often than not. But there’s also a central world to wander through, low-key item and leveling-up mechanics, a (quite threadbare) plot. And some back and forth exploration within the levels, puzzle solving, party members, houses and shops.

Which all works nicely. None of it is wonderful – the 2-D/3-D transitions are a fun enough way to design the game, but hardly a revelatory new mechanic. (And it mutes the 2-D platformer aspects: there are essentially no hard-core 2-D platformer difficulties, because you can switch to 3-D to get around almost all of them.) And the party members of various sorts are rarely used (and it’s usually obvious when you need to use one of them), and leveling up just serves as a way to let you survive more complicated levels. Most reviewers complain about the amount of reading, especially at the start of the game; personally, I didn’t even notice that as a potential problem.

So a rather pleasant mix of 2-D platformer with Paper Mario-esque RPG aspects. Rereading the above, I don’t sound too excited, so let me be clear: outside of Wii Sports, which lives in its own dimension, this is the best game for the system. Which says as much about the youth of the system as the quality of this game, and I’m hoping that this fall brings a couple of games that are considerably better, but the game is a lot of fun, is very solidly constructed, and has enough new ideas and new ways to put together old ideas to be well worth playing.

learning japanese: initial hiccups

June 27th, 2007

I pulled out my Japanese textbook over the weekend and read the first chapter. All stuff I knew, so it went really fast – no big surprise.

So I pulled out my box of blank vocabulary cards, and started writing down words. At which point I felt like I was stuck in molasses.

Basically, my handwriting in hiragana sucks. Admittedly, my handwriting in roman script sucks, too, but I’m used to that, and if I slow down just a bit, I can produce writing that I don’t mind looking at. Whereas, when writing in hiragana, I simply don’t know how to produce writing that I don’t mind looking at!

Part of the issue, I’m sure, is that I have basically no experience with hiragana outside of print or artworks. So I expect some of my issues are similar to those of somebody who was used to reading English in the Times font, had a hard time reproducing its serifs, but felt that writing looked weird without them. But I’m sure that there’s a lot of plain old practice required, too. (I bet practice will help with the basics of generating characters with the appropriate spacing and relative size, for example.)

Actually, I suspect that hiragana may be a bit tricky to generate neatly, as writing systems go: I’m not nearly as self-conscious about my kanji, it turns out, and I don’t remember being particularly self-conscious about my Greek or Devanagari. So hiragana may be a bit higher of a hill to climb than most. I was surprised to learn today that I was even getting the stroke order wrong on some of the characters; I’m sure that much of that is simple ignorance, but it also suggests that the characters don’t fit into patterns that I’ve learned to expect.

I’m optimistic that this will get better pretty soon. For one thing, I bet that I’ll gain a lot from just reminding myself to slow down. I usually scribble quite quickly, and correspondingly illegibly; if I were to take, say, two seconds per character, it would feel like a glacial pace, but I bet I could do a decent job of writing neatly without too much practice at that rate, and I’d still be able to churn out a bunch of cards in five minutes. Whereas now, I try to do it faster, but have to practice over and over again to get it right, more than eating up the time savings. Tonight already felt better than last time: I came armed with some practice sheets, and I spent a fair amount of time going over each character there before I wrote it on a card. But the results seem to be sticking: I just slowly wrote a ka on my palm with my finger, and I didn’t cringe in horror or anything.

I sure hope it gets better soon. There’s some virtue in having the process be a bit slow, so I don’t try to cram too much stuff into my brain at once, but I’m already finding it hard to make time to do this, and having the process of generating vocabulary cards slow me down excessively doesn’t make me any happier. Compounding the problem is that the book contains a fair amount of vocabulary, without much guidance as to which words to learn in each chapter. (As opposed to when you’re taking a class, where the teacher will give you a list of words to memorize.) So I think that I’ll probably end up basically trying to memorize them all, which means that I have to generate a lot of cards; the more time that takes, the less time I have to drill on them!

Another useful web site I’ve found: Real Kana is a nice, flexible drill for reviewing characters. I’ve just been using it for a few days and I’ve already swapped almost all of what I’ve forgotten back into my brain; I’m optimistic that, after not too much longer, I’ll be able to recognize individual characters completely reliably and fairly quickly. At which point I’ll want to switch to reading more Japanese passages written out in kana (as opposed to romaji or a kanji/kana mix), not as practice in figuring out what it means but as practice in drilling my brain in going from kana to sounds without an explicit recognition phase in the middle.

Speaking of which, another area where I wish my brain didn’t have to do as much of a recognition phase is numbers: whenever I hear somebody read a number out loud, it takes me seconds to decode it, which is way too long. I wonder if there’s some web site out there that can help me with that, too? Even a robotic-sounding voice would be a big help, I suspect.

weinberg on incremental construction

June 24th, 2007

I’m a fan of authors on construction whose works I can read in a programming context. On a related note, here’s a bit from Gerald Weinberg with a building/programming analogy that I like. (Quality Software Management, v. 4: Anticipating Change, pp. 216–217):

Imagine building a house by bringing all the parts to the lot, then having everybody run to the foundation and put their part in place, after which people walk around and see if the lights work or the floor collapses. There is no house test in house building to compare with the system test in system building. There are, instead, many incremental, intensive tests all throughout, especially when something is added that

  • other people will depend on
  • will be invisible (like wires and pipes in walls)

At every stage, the house must be stable. When it may not be, scaffolding is added so that the system of partially completed house plus scaffolding is stable. When the house becomes stable on its own, the scaffolding is taken away. Examples of scaffolding include concrete forms, extra framing, power brought to the site, and portable toilets.

Using the Stability Principle, we see that testing is not a stage, but a part of a control process embedded in every stage. What is often called system test is not a test at all, but another part of system construction, perhaps better named “system integration.” People are reworking errors in previous parts, and building the systems as they do.

Don’t get me wrong, all analogies are suspect, and I’m sure you would run into problems if you probed this one too far, but I liked it nonetheless. Incidentally, he uses “test” in a much broader sense than I normally do, including activities such as code and design reviews in the name.

I like the format of the book: it’s fairly free-form, but he frequently sprinkles in “Phrases to listen for” and “Actions to take”. The phrases in this example:

The following phrases warn a manager that the process of building while using stable phases has been or is about to be violated:

  • Just wait till it’s all done, then you’ll be surprised.
  • We’ll clean that up in system test.
  • The testers will fix that.
  • Of course we don’t have what we need, but get started anyway.
  • They can clean up the design when they write the code.
  • Ship it. The customers will tell us if anything is wrong.

My favorite of the phrases to listen for are those with a parenthetical note saying something like “(Warning: you may be saying this)”, as in this example from a section on fear:

  • You will do this. It’s nonnegotiable. (Listen carefully: This may be coming out of your mouth.)

The point, or at least one point, of the phrases is that people’s actions are often incongruent with their beliefs and/or with stated plans and goals, and that people have a way of making statements designed to lull the listener into not realizing that. So what you should be alert to are frequently statements that are soothing on the surface, instead of statements that are alarming on the surface.

I won’t give the complete list of actions from this example; an excerpt:

DO NOT allow tests to be skipped or postponed to later stages. Whatever is pushed to the end of the cycle will be sacrificed to the schedule.

DO be aware that tests take many forms. …

In general, reasonable practical advice.