random links: july 28, 2007
July 28th, 2007
- Ayse Sercan’s thesis work sounds really cool.
- I’m just linking to this list of Ruby techniques so that I’ll be able to find it a year from now when I’m in a position to better use it.
- More good stuff from Karl on copyright.
- Rice paddy art.
- Echochrome looks like an Escher video game.
- Quite the summary of the latest Harry Potter. (Don’t read it unless you don’t mind spoilers.)
- A neat way (ways, really) to look at go games.
xml, html output
July 21st, 2007
My HTML output class is now at what I expect to be a reasonably stable state. It’s not by any means a perfect solution for the world’s HTML needs, but it can generate the output that I want without much excess typing, which is all that matters.
Actually, it divided into two classes this morning. First, XmlOutput:
class XmlOutput
  def initialize(io)
    @io = io
    @indentation = 0
    @elements = []
  end

  def element(*element_and_attributes)
    if (block_given?)
      open_element(element_and_attributes)
      yield(self)
      close_element
    else
      write_indented_element(element_and_attributes)
    end
  end

  def inline_element(*element_and_attributes)
    "<#{element_and_attributes.join(" ")}>" + yield +
      "</#{element_and_attributes[0]}>"
  end

  def line
    if (block_given?)
      indent
      @io.write(yield)
    end
    @io.write("\n")
  end

  # FIXME (2007-07-21, carlton): Can I use define_method to
  # construct a method taking a block?
  def self.define_element(element, *attributes)
    module_eval element_def("element", element, attributes)
  end

  def self.define_inline_element(element, *attributes)
    module_eval element_def("inline_element", element, attributes)
  end

  def self.element_def(method, element, attributes)
    %Q{def #{element}(#{attr_args(attributes)} &block)
         #{method}("#{element}", #{attr_vals(attributes)} &block)
       end}
  end

  def self.attr_args(attributes)
    attributes.map { |attribute| attribute.to_s + "_arg, " }
  end

  def self.attr_vals(attributes)
    attributes.map do |attribute|
      '"' + attribute.to_s + '=\\"#{' + attribute.to_s + '_arg}\\"", '
    end
  end

  def write_indented_element(element_and_attributes)
    line { "<#{element_and_attributes.join(" ")} />" }
  end

  def open_element(element_and_attributes)
    line { "<#{element_and_attributes.join(" ")}>" }
    @indentation += 2
    @elements.push(element_and_attributes[0])
  end

  def close_element
    element = @elements.pop
    @indentation -= 2
    line { "</#{element}>" }
  end

  def indent
    @io.write(" " * @indentation)
  end
end
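About that FIXME: in the Ruby 1.8 of the day, a block can’t declare its own &block parameter, which is why element_def builds up a string for module_eval. In Ruby 1.9 and later it can, so a define_method version along these lines ought to work; this is just a sketch (the name and attribute handling here are mine, not what’s in the class above):

def self.define_element(name, *attributes)
  define_method(name) do |*values, &block|
    # Pair each declared attribute with the value passed in, e.g.
    # [:href] and ["http://site/"] become ['href="http://site/"'].
    attr_strings = attributes.zip(values).map do |attribute, value|
      "#{attribute}=\"#{value}\""
    end
    element(name.to_s, *attr_strings, &block)
  end
end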
I’ve given up on the whole public/protected/private distinction, for now: I don’t see much point in it for programming that I’m doing by myself. But I suppose it does have uses when explaining code to others: if you were to use the class directly, then you’d use element, inline_element, and line. The first is for an XML element that you deem important enough to put the opening and closing tags on their own lines (perhaps head and body for HTML); inline_element is for XML elements that you want to stick in the middle of lines (perhaps cite and a for HTML). And line is for text that you’re inserting, either passed as a string or generated via inline_element. They all take blocks, to either fill in the middle of the elements or the lines; two of them do something useful if not given a block, and the third could easily enough if I need that functionality. Oh, and the element functions have a crappy way of specifying attributes.
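Used directly, that comes out something like this (a throwaway example, not code from the project):

o = XmlOutput.new($stdout)
o.element("ul", "class=\"books\"") do
  o.line { o.inline_element("li") { "an item" } }
end

which prints:

<ul class="books">
  <li>an item</li>
</ul>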
Which works well enough, but still requires more typing (in my case, manifesting itself as > 80 column lines) than would be ideal. Which is where the class functions define_element and define_inline_element come in. Here’s HtmlOutput:
class HtmlOutput < XmlOutput
  define_inline_element :a, :href
  define_inline_element :span, :class
  define_inline_element :li
  alias_method :inline_li, :li
  define_inline_element :title
  define_inline_element :h1
  define_inline_element :h2
  define_element :head
  define_element :body
  define_element :div, :id
  define_element :ul, :class
  alias_method :ul_class, :ul
  define_element :ul
  define_element :li
  define_element :link, :rel, :type, :href

  def html(&block)
    line { "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\"" }
    line { "  \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">" }
    element("html",
            "xmlns=\"http://www.w3.org/1999/xhtml\"",
            "xml:lang=\"en\"",
            "lang=\"en\"",
            &block)
  end
end
This lets me create methods corresponding to the elements that I care about. If those elements take attributes (as in <a href=...>), I pass them as extra arguments (define_inline_element :a, :href), and the generated methods take arguments that are the values for the attributes. So, if I want to generate the following:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <title>The Title</title>
    <link rel="stylesheet" type="text/css" href="styles.css" />
  </head>

  <body>
    <h1>Main Header</h1>
    <ul>
      <li><a href="http://site/page/">link text</a></li>
    </ul>
  </body>
</html>
I write this:

o.html do
  o.head do
    o.line { o.title { "The Title" } }
    o.link("stylesheet", "text/css", "styles.css")
  end
  o.line
  o.body do
    o.line { o.h1 { "Main Header" } }
    o.ul do
      o.line do
        o.inline_li do
          o.a("http://site/page/") { "link text" }
        end
      end
    end
  end
end
Admittedly, this isn’t the eighth wonder of the world or anything, but I do think the interface will work pretty well for the specific uses that I have in mind. Or maybe not – I read the relevant chapter in the Pickaxe book this morning; they describe a library with an interface basically identical to what I ended up with, but then comment that people almost never use it, typically preferring to use some sort of HTML template with embedded Ruby instead. And maybe I’ll switch to a solution like that as I get more used to the area.
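(For the curious, the embedded-Ruby templating they’re referring to is presumably something in the ERB vein, along these lines; the data here is made up purely for illustration:)

require 'erb'

books = [{ :title => "link text", :url => "http://site/page/" }]

template = ERB.new(<<-HTML)
<ul>
<% books.each do |book| %>
  <li><a href="<%= book[:url] %>"><%= book[:title] %></a></li>
<% end %>
</ul>
HTML

# Evaluate the template against the local variables in scope.
puts template.result(binding)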
However that turns out, there are two bits that I want to talk about. One is what I discussed in my previous post, that it was a lot of fun starting with a complex bit of output and refactoring my way into a class that generated it. I won’t yet propose that as the way to go in all situations, and I’m not even sure it actively helped me here: if I’d started out wanting to build up a solution from scratch instead of decompose one out of a monolithic print statement, I don’t see any reason to believe it would have turned out differently or gone any slower. But it was a very pleasant way to develop code, I’m confident it didn’t slow me down at all, and I only spent about 10 minutes of development time wondering what was the best thing to do next. If nothing else, it will give me further motivation to write my acceptance tests early: currently, I have them in mind from the start of a task, but I don’t usually actually write them until the code that they’re testing is finished. That delay isn’t usually for any good reason, it’s simply because I don’t yet like writing acceptance tests as much as I like doing other things, but if I can start to see real effects out of writing the acceptance tests earlier, I’d probably switch to doing so. (It would help if I started using Fit, too; for now, though, I’m not convinced I’m working in areas where that is an obvious win.)
The second bit I want to emphasize is that I love the way the definition of HtmlOutput looks. This is the second time in this project that I’ve done something like that: there’s a base class that implements class functions designed to let you provide functionality in a subclass without writing explicit method definitions in that subclass! Much more fun than sticking in protected hooks here and there, and when it works the subclass definitions are dramatically shorter (and freer of boilerplate repetition) than they would be if I were, say, programming in Java. As the FIXME comment shows, I’m not entirely comfortable with the implementation in this particular case, and now that I think about it, I’m not entirely comfortable with my implementation in the other case as well, but the fact that I can do it at all pleases me greatly.
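(Concretely, under the Ruby 1.8 of the time, where interpolating an array just concatenates its elements, the string that element_def builds for define_inline_element :a, :href compiles to roughly this; the doubled spaces come from the trailing ", " that attr_args and attr_vals append:)

def a(href_arg,  &block)
  inline_element("a", "href=\"#{href_arg}\"",  &block)
end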
So: I can generate one particular piece of HTML. Now I just have to have that HTML vary based on the contents of a database. Shouldn’t be too hard; I hope I’ll find a few more ways in which the implementation improves upon its Java counterpart.
generating html output
July 20th, 2007
One decision that I had to make when doing the HTML output part of my book database: should I roll my own HTML generator, or use somebody else’s? I ended up going the ‘roll my own’ route, partly because it sounded like more fun, and partly because it would be easier to get the acceptance tests working.
As written, the acceptance tests do a strict textual comparison, and it seemed unlikely that it would be easy to find another library that would generate code indented exactly the way I want. Admittedly, that’s a sign that the acceptance tests are overly strict, so the right thing to do would be to find a way to relax that validation, and in fact I think I already have code for that around. (I use it when validating that my output is legal XHTML.) But that combined with laziness and a desire for fun was enough to sway me.
And it has been fun! I started out with one unit test that checks for the output of a page for an author who hasn’t written any books. The easiest way to get that to work was to hardcode the expected output. So, at this point, my implementation was a function that spit out a really long string.
And then it was time to start refactoring. A single long string is hard to work with, so I broke it up into separate functions for the different parts of the page. Which, actually, I haven’t done much with; it will help me in the future, though. But next came a series of more immediately useful refactorings:
- I had the output function generate an array of lines instead of a single multiline string.
- I noticed that the lines in the arrays almost all started with whitespace, so I added another argument which is an amount of whitespace to add to all the lines in the array.
- The indentation changes in predictable ways: so, rather than pass in “indent 6” here and “indent 8” later on, I had the HtmlOutput class have a member variable with the current indentation, and provided open and close member functions which added or subtracted 2 from the indentation level.
- The adding and subtracting happen in conjunction with text, so let’s pass that text in as an argument to open and close.
- The opening and closing text consists of opening and closing tags: so let’s keep a stack of the elements that are in the current scope, and have close generate the closing tag automatically. (And provide a way to pass in attributes to the opening tag. There’s a sketch of roughly this stage right after the list.)
- Having to explicitly type open/close pairs violates my RAII instincts; in Ruby land, that means that we should just have an element function which generates the tags itself, and which takes a block as an argument to fill in the middle.
- But what about elements with no body, where the opening tag is the closing tag? No problem: I just won’t pass in a block there, and element can alter its behavior based on block_given?.
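To make the middle steps concrete, the open/close stage looked roughly like this (reconstructed after the fact, so treat it as a sketch rather than the actual intermediate code):

class HtmlOutput
  def initialize(io)
    @io = io
    @indentation = 0
    @elements = []
  end

  # Write the opening tag, remember the element, and indent its contents.
  def open(element)
    @io.write(" " * @indentation + "<#{element}>\n")
    @elements.push(element)
    @indentation += 2
  end

  # Unindent and close whatever element was opened most recently.
  def close
    @indentation -= 2
    @io.write(" " * @indentation + "</#{@elements.pop}>\n")
  end
end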
That’s where I am now. The next step is to handle elements that I put in the middle of a line (<cite>, <a>, etc.); I think I have a scheme that will work for that, but we’ll see where the refactorings lead me.
I’ve never programmed this way before, refactoring a class into existence based solely on a complicated chunk of expected output. I highly recommend the experience; it’s lots of fun, and has a rather unusual flavor. I’m being good and adding unit tests for all the methods I create; the thing is, though, that each method seems to last for about half an hour before it and its unit test get refactored out of existence, replaced by the next refactoring! For the longest time, the HtmlOutput tests consisted of two tests, one of which was the result of the previous step of my refactoring and the other of which was the next step in my refactoring, which I was in the process of converting the existing AuthorPrinter object (the user of HtmlOutput) into using. Recently, though, more tests have been coming into existence, which I hope is a sign that I’m settling into a more usable and powerful interface.
My only regret was that I did most of this refactoring without an internet connection, so I couldn’t check all of the intermediate steps into my Subversion repository, and get a good view of the differences between steps. Ah well; maybe I’ll switch over to a more distributed version control system for my next project.
jason kendall
July 16th, 2007
I see we managed to foist Jason Kendall off on the Cubs. About time: not only is he blocking an actual prospect (Kurt Suzuki), his OPS is currently the worst in the majors by a full 40 points. He may have hit twice as many home runs this year as the last two years combined, but when that brings his total of Oakland home runs to 3, it’s not saying much.
I applauded the trade where we acquired him. I was wrong.
ipod, car, shuffle
July 15th, 2007
I’ve been using a radio adapter to play my iPod in my car for the last year. Which works well enough, and is unbelievably better than having to rely on the radio or CDs to listen to music, but has its downsides. There aren’t a lot of holes in the radio spectrum around here; I’ve found one or two that work acceptably on the commute to work, but even so I get more static than I’d like. And if I venture up to, say, San Francisco, the radio holes change, so I have to either give up or try to find a different place in the spectrum to broadcast.
By now, the value of this experiment has clearly proven itself, so I figured that I’d see if I could get my radio modified to have some sort of direct connection. Which turned out to be really easy in this case: the radio is designed to accept an external CD changer, and the mechanism that that uses turns out to be fairly general, so you can plug iPod adapters in there, too. Cost about 60 bucks, which is fine; a lot cheaper than buying a whole new radio just to get an extra jack.
It took two tries to get it right. The first time, they installed a unit that had the iPod controlled through the radio itself; maybe this would have been fine if I’d had a more flexible radio with a better screen, but it would have made the device almost completely useless in my scenario: I’m not sure there was even a way to switch from an episode of one podcast to an episode of a different podcast. Fortunately, I realized the problem before I left the lot; they were very good about straightening things out and installing what I really wanted once we realized we’d miscommunicated.
Actually, I’m not entirely sure if it’s what I really wanted: I still have a proprietary iPod connection coming out of my radio, and I feel a bit guilty about not sticking with open standards. The thing is, though, I’d need a proprietary dock connector somewhere, or else accept an inferior signal out of the earphone jack. And it turns out that what they installed is a two-part system, where there’s something connecting the radio to a pair of standard RCA jacks (or its moral equivalent) and a second gizmo that goes from the RCA jacks to my iPod (plus a power line to charge the iPod, which is nice but no big deal). So the non-proprietary part turns out to be nicely modularized: you can’t tell that from the outside, but there’s a nice standards-compliant bit hidden inside.
While I am talking about iPods: I don’t believe I’ve blogged about the virtue of shuffle play. I hadn’t gotten around to giving that a try for a while: I didn’t think I’d particularly like it more than any other way of listening to the iPod, and I was a bit worried about how it would interact with podcasts and with classical music. But when my Mac went in for repairs earlier this year, I had no way to update my podcasts; the repairs took longer than expected, so I ran out of saved episodes, and decided to give shuffle mode a try instead of listening to albums again. And it’s great!
My fears turned out to be unfounded. Podcasts don’t get thrown into the mix, which is clearly the right thing to do. A nice bit of design: it would have been easy to treat podcasts like any other music, just with a special tag (i.e. something could be “rock”, “classical”, “podcast”, etc.), but in fact they treat podcasts differently. This is one example; syncing rules are another; the “listened to” mark is a third.
And, as far as classical music goes, when I started doing this, the only classical music I had on the iPod was some Schumann lieder, both Glenn Gould recordings of the Goldbergs, and the Glenn Gould recordings of both WTC volumes. And all of that shuffled just as well as pop songs – I kind of wish that a prelude and its corresponding fugue got played together, but it’s not that big a deal. And I’ve even learned something: to my embarrassment, I couldn’t reliably tell whether a piece was a Goldberg variation or one of the preludes, but I’ve gotten much better now at distinguishing the two. I’ve since put more classical music on the iPod; it works fine.
Having said that, I doubt that, say, symphonies would work very well. Certainly the choice of track markers makes a difference: I have a CD of Peter Maxwell Davies’s Eight Songs for a Mad King and Miss Donnithorne’s Maggot, which puts that on two thirty-minute tracks. (As opposed to, say, splitting the first work into eight tracks.) When we ran into one of those on shuffle mode, it rather put a damper on the trip; we ended up hitting the next button to skip to the next piece in the shuffle, and I took them off the iPod when I got home. No big deal, really; if I’d really wanted to have those on my iPod, occasionally hitting ‘next’ wouldn’t have been a serious sacrifice.
On a similar “track placement” note, there are a few talking + singing CDs I own (Flanders and Swann, Arlo Guthrie and Pete Seeger) where each track is “song + subsequent talking” instead of “talking + subsequent song”, even though the talking always relates to the song after it instead of the song before it. Fortunately, I’ve listened to those CDs a zillion times so I know what they’re talking about anyways, and they’re entertaining enough speakers that I don’t mind hearing talking that isn’t connected to a song I’m about to hear.
So my concerns turned out not to be a problem in practice. And the benefits were real:
- It got me listening to some of my old friends again.
Before this, if I wanted to listen to a piece of music from my library, I had to actively decide to do so, which usually meant actively deciding that I wanted to take the time to listen to a whole album. No problem for long drives; not something that I was finding time to do on my relatively short commute.
- It’s something that everybody can agree with.
The rest of the family doesn’t want to listen to my podcasts; and if I ask Miranda what she feels like listening to, she’ll normally pick one of a handful of albums, most of which I don’t mind (in fact, Philadelphia Chickens is stunning) but which I also don’t want a steady diet of. But she’s happy to listen to most of my music, even though she doesn’t ask for it herself (perhaps because she doesn’t know what all is on there). Because of shuffle mode, she’s even turned into a bit of a Charlotte Martin fan, and our running into a couple of songs from Striking 12 in close succession got her asking to hear all of that album, which is now sitting in the CD player in her room.
- It fits into gaps in my commute.
Occasionally, for example, I’ll be finishing up a podcast episode as I get off of the highway. I still have six or seven minutes until I get home, which probably isn’t enough for me to want to start another podcast. But shuffle play fits the gap nicely: I can go to shuffle mode and listen to a couple of songs over the course of the rest of my drive.
- It’s a non-inventory buffer against variance.
I occasionally run out of podcast episodes to listen to. (Well, other than JapanesePod101 episodes, but I don’t want to overdose on that.) If I were to increase the number of podcasts that I listen to in order to minimize the chance of that happening, however, my queue of unlistened-to episodes would quickly grow out of control. But I couldn’t possibly consider driving or jogging without something to listen to; listening to the radio or manually selecting albums are both possibilities, but shuffle mode works a lot better.
Don’t get me wrong: I still mainly listen to podcasts, and I’m certainly not about to buy a shuffle-only iPod. And I’m not going to wax rhapsodic about insights from unexpected juxtapositions: it’s all music that I like to listen to individually, and am happy enough to listen to in any order, but there’s nothing deeper than that. But shuffle mode is great; if you have an iPod, find yourself in situations where you have 5-30 minutes to listen to it, and haven’t given shuffle mode a try, then I encourage you to do so.
queues, tags, blog posts
July 10th, 2007
As I’ve mentioned before, I read others’ blog posts using Google Reader. It shows me the unread posts in reverse chronological order, I go through them and read them; if I want to keep one around for a while for some reason or other, I hit the ‘s’ key to star it. If I run out of new posts and don’t feel like, say, writing here or going to bed or reading a book or something, I go to the starred posts and give them a look. I read a few and unstar them.
This worked okay for a while; recently, though, I noticed that my list of starred posts was getting longer and longer, and it was no longer clear what good those posts were doing me in general. I didn’t want to get rid of them all, but clearly the system wasn’t working.
But I should have decent queue management skills by now, no? So what can I pull out of my bag to deal with this? The Getting Things Done people talk about categorizing and emptying your inbox instead of just letting it build up; while my real inbox in this situation is unread posts, which I’m good at going through, I’m putting a big uncategorized stack right past that. Which is no good. So let’s see if categorization helps?
In my mail reader, I’d move things into folders; Reader has tags as an equivalent. (Except it’s supposed to be better since you can put multiple tags on an item. Which sounds like a good idea to me; not using that capability yet, but I can imagine it will come in handy.) So I went through the whole pile of starred posts, and tagged them all.
It took a little while to do the initial triage; I’m glad I didn’t put it off for any longer. Actually, it took me two phases: at first, I had some tags in mind (videos to watch, flash games to play, posts I’d left a comment on and wanted to return to the comment thread), which covered many of the posts, but I wasn’t sure of an appropriate tag for other ones. No problem; I just tagged those with ‘to-tag’ and kept on going. By the time I was done with my first pass in the list, I had a pretty good idea of what tags I wanted; a couple of days later, I went through the ‘to-tag’ bucket, tagged all of them with one of my other categories, and deleted that tag. So now all of my starred posts are in one of 12 buckets.
At which point, the utility of this exercise was clear. Some of the buckets really are pits that I’ll never clear out: videos and flash games simply do catch my eye at a higher rate than I’ll be able to go through them, and that’s okay. So if things in those categories moulder a bit, it’s not a big deal; periodically, I’ll delete some of the older ones without watching them/playing them, and I’m fine with that.
Other buckets are much smaller and get cleaned out more regularly. For example, one benefit was identifying that there were several blog posts that had something interesting to say that was a bit too much for me to deal with at 10:30 at night. I tagged those with ‘read’, and now I’ve gone and read all of them, and will be able to keep that bucket low in the future.
There are some other buckets that I also expect to clean out regularly. I keep a bucket of posts that I’ve commented on; those I’ll return to every day or two to see what others have to say, and then delete them when nothing new shows up. And I have a ‘blog’ tag for things that I’m considering mentioning in one of my ‘random links’ posts; now I have an easy way of collecting those, and I imagine I’ll just generate such a post every time that I have five or six items in that bucket.
I’m still not sure of my approach to all the buckets, but that’s okay; I’ll keep on experimenting and figure it out eventually. I’m definitely pleased with the results so far; I should really reread the GTD book and give the system a serious try.
What I am not pleased with is the Reader interface. Don’t get me wrong, it works well enough, and I’m sure I’m missing some ways to use it better, but there seem to me to be some pretty strange decisions here:
- I try to always empty out my stack of unread items, which means that tagged items that I want to get back to have to be marked as read. (Otherwise, I’d have no way of distinguishing between things I haven’t looked at at all and things that I’ve looked at and am keeping around.) So, in the “all posts” screen, I want to only look at unread items; in a tagged post view, though, I want to look at all items. I have to manually toggle between these two modes, however: it’s not smart enough to either realize that looking at only unread tagged items doesn’t make sense or to simply remember when I want to look at unread items and when I want to look at all.
- Actually, though, in the tagged items, I don’t want to look at all of them: I just want to look at the starred ones. That way, when I’m done with a saved item, I can type ‘s’ to unstar it and not see it again. This, however, isn’t possible: there’s no way to only look at the starred items with a given tag. What this means in concrete terms is that there’s no simple way to delete a post: you have to type ‘t’ to edit the tags, and hit the delete key a bunch of times to erase the existing tag.
- Speaking of editing the tags, there’s a bug (either in Reader or in Safari): when I add a tag to the post, the name of the new tag just sticks there in my browser window, even when I’ve moved on to other posts, until I click on the screen.
- The tag entry screen is “helpful” in an incredibly annoying way. One of my first tags was ‘long-read’, for posts referring to documents that were long enough for me to need to set aside time to read them. Then I decided that I needed a tag for posts that I didn’t want to read right now but wanted to get back to when my brain was fresher. No problem, I’ll call that ‘read’; the helpful autocompletion will surely select that when I type ‘r’, no? No: when you type ‘r’, it lists all the tags containing an r. In fact, if you type ‘read’, it lists all the tags containing that string. And, if you type ‘read<return>’, which surely should mean that I want to tag the post with my existing tag ‘read’, it in fact selects the first tag in alphabetical order containing the substring ‘read’, which matches ‘long-read’. To get ‘read’, I had to type ‘rea’, then down arrow, then return. Which is just stupid; I ended up retagging all of my ‘long-read’ posts as ‘long’ and deleting the ‘long-read’ tag. Why autocompletion from the middle of a tag name is supposed to be a good idea is a mystery to me.
Maybe there’s something in the usage model that I’m missing, and maybe there is a way lurking to see only the posts I want in any given view while having single-key delete. I can’t quite see how, though. So a pity about the rough edges; still, it works well enough for now, and I hope they’ll improve it in the future.
random links: july 1, 2007
July 1st, 2007
- Amazing walking wind-powered sculptures.
- Impressive optical illusion.
- Oh yeah? I’m building a topos in my attic.
- Not easy to make a shift like this.
- I’m a sucker for things like this. Given that I am interested in some sort of physical training and don’t seem to be getting around to restarting aikido, maybe I should give the Alexander technique a try?
- The 100 oldest currently-registered .com domains.
- Trace over Basho’s poetry.
- Want to prototype a game? And don’t mind if it’s, well, cute?
- Hoshi saga, minigames about finding stars.
array.join
June 30th, 2007
switched over to ruby version of the cli tool
June 30th, 2007
I’ve switched over to using the Ruby version of the CLI tool for editing my book database; works great, as far as I can tell.
Short, too:
panini$ wc -l *.rb
      9 author_writer.rb
     18 book_writer.rb
     11 closeable.rb
     24 compound_author_writer.rb
     21 connected_database.rb
     30 connected_insert_row.rb
     24 connected_result.rb
     36 connected_result_row.rb
     37 connected_table.rb
     26 connected_write_row.rb
     60 date.rb
     21 decoder.rb
      9 developer_writer.rb
     85 editor.rb
     17 enumerable_helper.rb
     16 game_writer.rb
     23 link_writer.rb
     38 object_name.rb
     45 row.rb
     11 series_writer.rb
      9 system_writer.rb
     16 table.rb
    100 writer.rb
    686 total
(That’s only the production code; the unit tests add another 941 lines.) Hard to believe how long it’s taken to write, given the number of lines of code; I guess that’s what happens when you only work for an hour or two a week, don’t do that every week, are using a new language, and are working with a technology (SQL) that you’re not completely comfortable with. I hope the “generating HTML” part will go faster; I don’t see why not, since I should be able to mitigate all of those problems except for “only work for an hour or two a week”.
I did the refactorings I had in mind after last time, and went and reread all the code looking for more. I found a few more areas for improvement, but in general I’m happy with how clean it’s been staying. I should write a tool to calculate lengths of methods: I’m curious what the proportion of one-line methods is.
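If I do write that tool, a crude indentation-based heuristic would probably be good enough for code formatted the way mine is; a quick sketch (untested against the real code base, so take it with a grain of salt):

# Count method body lengths with a crude heuristic: a method starts at a
# "def" line and ends at the first "end" line with the same indentation.
# (One-line defs and oddly indented code will confuse it.)
lengths = Hash.new(0)
Dir["*.rb"].each do |file|
  open_defs = []            # stack of [indentation, body line count]
  File.readlines(file).each do |line|
    if line =~ /^(\s*)def\s/
      open_defs.push([$1.length, 0])
    elsif !open_defs.empty? && line =~ /^(\s*)end\b/ &&
          $1.length == open_defs.last[0]
      lengths[open_defs.pop[1]] += 1
    elsif !open_defs.empty?
      open_defs.last[1] += 1
    end
  end
end
lengths.sort.each { |length, count| puts "#{count} methods of #{length} lines" }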
super paper mario
June 29th, 2007
I wasn’t too excited about Super Paper Mario when I first heard about it. I certainly enjoyed the 2-D Mario games when I first played them, but the state of the art has changed, and nostalgia only takes me so far. So I’ll occasionally play a 2-D platformer and enjoy it, but I figured New Super Mario Bros. filled my quota of that for the next couple of years. And the 2-D/3-D switching sounded more like a gimmick than anything else. (Unlike, say, Crush, which makes me wish I had a PSP. Well, not really, but it definitely makes me wish they’d release it for the DS.)
Then I started hearing claims that it really was a follow up to the Paper Mario series, and I started getting curious. That was a fine series, and I wasn’t wedded to the details of its RPG mechanic; a game like that that replaced the turn-based battles with platformer-style jumping sounded great to me. By the time the game came out, it was on my “buy immediately” list.
And it really is pretty neat. Fans of the original will be happy to see eight worlds of four levels each. And, most of the time, you platform your way through the level, moving left to right more often than not. But there’s also a central world to wander through, low-key item and leveling-up mechanics, a (quite threadbare) plot. And some back and forth exploration within the levels, puzzle solving, party members, houses and shops.
Which all works nicely. None of it is wonderful – the 2-D/3-D transitions are a fun enough way to design the game, but hardly a revelatory new mechanic. (And it mutes the 2-D platformer aspects: there are essentially no hard-core 2-D platformer difficulties, because you can switch to 3-D to get around almost all of them.) And the party members of various sorts are rarely used (and it’s usually obvious when you need to use one of them), and leveling up just serves as a way to let you survive more complicated levels. Most reviewers complain about the amount of reading, especially at the start of the game; personally, I didn’t even notice that as a potential problem.
So a rather pleasant mix of 2-D platformer with Paper Mario-esque RPG aspects. Rereading the above, I don’t sound too excited, so let me be clear: outside of Wii Sports, which lives in its own dimension, this is the best game for the system. Which says as much about the youth of the system as the quality of this game, and I’m hoping that this fall brings a couple of games that are considerably better, but the game is a lot of fun, is very solidly constructed, and has enough new ideas and new ways to put together old ideas to be well worth playing.
learning japanese: initial hiccups
June 27th, 2007
I pulled out my Japanese textbook over the weekend and read the first chapter. All stuff I knew, so it went really fast – no big surprise.
So I pulled out my box of blank vocabulary cards, and started writing down words. At which point I felt like I was stuck in molasses.
Basically, my handwriting in hiragana sucks. Admittedly, my handwriting in roman script sucks, too, but I’m used to that, and if I slow down just a bit, I can produce writing that I don’t mind looking at. Whereas, when writing in hiragana, I simply don’t know how to produce writing that I don’t mind looking at!
Part of the issue, I’m sure, is that I have basically no experience with hiragana outside of print or artworks. So I expect some of my issues are similar to somebody who was used to reading English in the Times font, had a hard time reproducing those serifs, but felt that writing looked weird without them. But I’m sure that there’s a lot of plain old practice required, too. (I bet practice will help with the basics of generating characters with the appropriate spacing and relative size, for example.)
Actually, I suspect that hiragana may be a bit tricky to generate neatly, as writing systems go: I’m not nearly as self-conscious about my kanji, it turns out, and I don’t remember being particularly self-conscious about my greek or devanagari. So hiragana may be a bit higher of a hill to climb than most. I was surprised to learn today that I was even getting the stroke order wrong on some of the characters; I’m sure that much of that is simple ignorance, but it also suggests that the characters don’t fit into patterns that I’ve learned to expect.
I’m optimistic that this will get better pretty soon. For one thing, I bet that I’ll gain a lot from just reminding myself to slow down. I usually scribble quite quickly, and correspondingly illegibly; if I were to take, say, two seconds per character, it would feel like a glacial pace, but I bet I could do a decent job of writing neatly without too much practice at that rate, and I’d still be able to churn out a bunch of cards in five minutes. Whereas now, I try to do it faster, but have to practice over and over again to get it right, more than eating up the time savings. Tonight already felt better than last time: I came armed with some practice sheets, and I spent a fair amount of time going over each character there before I wrote it on a card. But the results seem to be sticking: I just slowly wrote a ka on my palm with my finger, and I didn’t cringe in horror or anything.
I sure hope it gets better soon. There’s some virtue in having the process be a bit slow, so I don’t try to cram too much stuff into my brain at once, but I’m already finding it hard to make time to do this, and having the process of generating vocabulary cards slow me down excessively doesn’t make me any happier. Compounding the problem is that the book contains a fair amount of vocabulary, without much guidance as to which words to learn in each chapter. (As opposed to when you’re taking a class, where the teacher will give you a list of words to memorize.) So I think that I’ll probably end up basically trying to memorize them all, which means that I have to generate a lot of cards; the more time that takes, the less time I have to drill on them!
Another useful web site I’ve found: Real Kana is a nice, flexible drill for reviewing characters. I’ve just been using it for a few days and I’ve already swapped almost all of what I’ve forgotten back into my brain; I’m optimistic that, after not too much longer, I’ll be able to recognize individual characters completely reliably and fairly quickly. At which point I’ll want to switch to reading more Japanese passages written out in kana (as opposed to romaji or a kanji/kana mix), not as practice in figuring out what it means but as practice in drilling my brain in going from kana to sounds without an explicit recognition phase in the middle.
Speaking of which, another area where I wish my brain didn’t have to do as much of a recognition phase is numbers: whenever I hear somebody read a number out loud, it takes me seconds to decode it, which is way too long. I wonder if there’s some web site out there that can help me with that, too? Even a robotic-sounding voice would be a big help, I suspect.
weinberg on incremental construction
June 24th, 2007
I’m a fan of authors on construction whose works I can read in a programming context. On a related note, here’s a bit from Gerald Weinberg with a building/programming analogy that I like (Quality Software Management, v. 4: Anticipating Change, pp. 216–217):
Imagine building a house by bringing all the parts to the lot, then having everybody run to the foundation and put their part in place, after which people walk around and see if the lights work or the floor collapses. There is no house test in house building to compare with the system test in system building. There are, instead, many incremental, intensive tests all throughout, especially when something is added that
- other people will depend on
- will be invisible (like wires and pipes in walls)
At every stage, the house must be stable. When it may not be, scaffolding is added so that the system of partially completed house plus scaffolding is stable. When the house becomes stable on its own, the scaffolding is taken away. Examples of scaffolding include concrete forms, extra framing, power brought to the site, and portable toilets.
Using the Stability Principle, we see that testing is not a stage, but a part of a control process embedded in every stage. What is often called system test is not a test at all, but another part of system construction, perhaps better named “system integration.” People are reworking errors in previous parts, and building the systems as they do.
Don’t get me wrong, all analogies are suspect, and I’m sure you would run into problems if you probed this one too far, but I liked it nonetheless. Incidentally, he uses “test” in a much broader sense than I normally do, including activities such as code and design reviews in the name.
I like the format of the book: it’s fairly free-form, but he frequently sprinkles in “Phrases to listen for” and “Actions to take”. The phrases in this example:
The following phrases warn a manager that the process of building while using stable phases has been or is about to be violated:
- Just wait till it’s all done, then you’ll be surprised.
- We’ll clean that up in system test.
- The testers will fix that.
- Of course we don’t have what we need, but get started anyway.
- They can clean up the design when they write the code.
- Ship it. The customers will tell us if anything is wrong.
My favorite of the phrases to listen for are those with a parenthetical note saying something like “(Warning: you may be saying this)”, as in this example from a section on fear:
- You will do this. It’s nonnegotiable. (Listen carefully: This may be coming out of your mouth.)
The point, or at least one point, of the phrases is that people’s actions are often incongruent with their beliefs and/or with stated plans and goals, and that people have a way of making statements designed to lull the listener into not realizing that. So what you should be alert to are frequently statements that are soothing on the surface, instead of statements that are alarming on the surface.
I won’t give the complete list of actions from this example; an excerpt:
DO NOT allow tests to be skipped or postponed to later stages. Whatever is pushed to the end of the cycle will be sacrificed to the schedule.
DO be aware that tests take many forms. …
In general, reasonable practical advice.
welcome, jordan!
June 24th, 2007
go refactoring!
June 24th, 2007
In our last installment, we had this code:
def parenthesized_list(array)
  array.process_and_interpose("(", ",", ")") { |element| yield element }
end

class Array
  def process_and_interpose(initial, middle, last)
    inject_with_index(initial) do |memo, element, i|
      memo + yield(element) + (i != length - 1 ? middle : last)
    end
  end
end
I’d extracted the latter method not because I thought I was likely to need it, but because I thought the original implementation of parenthesized_list was insufficiently evocative.
But then today I was finishing off the Ruby version of my CLI tool, so I needed to update entries in existing rows in SQL tables, instead of just adding new rows. And the syntax is different: instead of
INSERT INTO people (id, name, age) VALUES ('256', 'Fred', '25');
the syntax is
UPDATE people SET name = 'George', age = '36' WHERE id = '256';
Which seems like a rather gratuitous difference to me, though I admittedly don’t know SQL nearly well enough to know if there’s a good reason for it.
No parenthesized lists in sight, but that’s okay: my newly extracted process_and_interpose function does great!
def update_string "UPDATE `#{@table.name}` SET #{assignments} WHERE `id` = #{id}" end def assignments @updates.to_a.process_and_interpose("", ",", "") do |assignment| "`#{assignment[0]}` = #{quote(assignment[1])}" end end
The Ruby version of the CLI tool seems to work fine now, incidentally. I haven’t flipped the switch yet and started using it for real, but as far as I can tell there’s no reason not to: it passes all the acceptance tests. (And they run faster than they did under Java; no idea why, but I’m pleasantly surprised.) There’s a bit of refactoring to do, and at some point I might want to think about what the implementation is telling me about my class hierarchies (or, indeed, about the differing importance of class hierarchies in dynamic and static languages), but all in all I’m quite happy.
parenthesized_list revisited
June 23rd, 2007
I previously lamented this code:
def parenthesized_list(array)
  list = "("
  first = false
  array.each do |element|
    if (first)
      list += ","
    else
      first = true
    end
    list += yield element
  end
  list + ")"
end
I still haven’t found a magic bullet in Enumerable or Array which will let me dramatically shrink it. But I have at least teased out some of the components; this is what I’m using for now:
def parenthesized_list(array)
  array.process_and_interpose("(", ",", ")") { |element| yield element }
end

class Array
  def process_and_interpose(initial, middle, last)
    inject_with_index(initial) do |memo, element, i|
      memo + yield(element) + (i != length - 1 ? middle : last)
    end
  end
end

module Enumerable
  def inject_with_index(initial)
    result = initial
    each_with_index { |element, i| result = yield(result, element, i) }
    result
  end
end
inject_with_index doesn’t seem like a crazy idea; process_and_interpose is a bit specialized, but that’s fine.
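(For what it’s worth, a join-based version would produce the same string, though it throws away the generality of the interposing; I’m noting it here as a sketch rather than something I’ve switched to:)

def parenthesized_list(array)
  "(" + array.map { |element| yield element }.join(",") + ")"
end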
Is there some way I can shrink the implementation of inject_with_index? I get the feeling that there’s some sort of generalization staring at me there, but I can’t quite figure it out. If I’m just shrinking code, I could keep on storing in initial instead of introducing a new variable result; I’d want to rename the variable, though. Maybe this?
module Enumerable
  def inject_with_index(memo)
    each_with_index { |element, i| memo = yield(memo, element, i) }
    memo
  end
end
I don’t think I like that so much, though: naming the (non-block) argument memo instead of initial makes it harder to figure out how it gets used. So I kind of prefer calling the argument initial at the start, and then renaming it to result in the body to reflect the implementation.
And of course parenthesized_list is funny in that it just wants to pass along the block that it’s been given, but has to create a new block to do that. That, I think, reflects one of Ruby’s warts: there’s this weird block/procedure distinction that doesn’t, as far as I can tell, buy you much. It’s nice to be able to write blocks on the fly, but why not require functions taking one to make the block argument explicit and get rid of yield? I’m not sure of all the implications, but I don’t think that Ruby’s current choice is the best.
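To be concrete, capturing the block explicitly and re-passing it looks like this; the only change to parenthesized_list is the &block parameter, and process_and_interpose can keep using yield since the forwarded block becomes its block:

def parenthesized_list(array, &block)
  array.process_and_interpose("(", ",", ")", &block)
end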
more groovelily rhymes
June 23rd, 2007
Some songs from Striking 12 came up in shuffle mode on the way to day care earlier this week, and Miranda decided she wanted to listen to more of it. So we’ve been making our way through the album.
Some rhymes I was amused by, both from “Resolution”:
What’s there to celebrate about?
I’d rather stay at home and grout
My shower stall than watch the ball:
I won’t go out.
I ask you: what’s not to enjoy?
My cat, my couch, no hoi polloi.
I don’t need my coat, just my remote
And my La-Z-Boy.
A very good band, to the extent that I find them rather frustrating: there are stretches where I love the lyrics (for both wordplay and story reasons), where I love the tunes, where I love the instrumentation, where I love the way they use their voices. And even stretches where they put all that together. But there are also a surprising number of songs (given the quality of their peaks) that just don’t click for me. If they were just a bit more consistent, I would be happy to shout their name from the rooftops; as is, I’ll happily recommend Striking 12, at least. (Their latest, A Little Midsummer Night’s Music, didn’t do anything for me on first hearing, alas.)
japanesepod101
June 20th, 2007
Now that I’ve finished my book queue, my next major queue to work through is my backlog of JapanesePod101 episodes. I first subscribed to that podcast just a few months after it started, but it took me several months after that to start really paying attention to it; by the time I got hooked, I was way behind.
It’s a remarkable podcast. Daily Japanese lessons, presented in a way that reminds me of the recommendations from the late lamented Creating Passionate Users blog. From the beginning, it was all focused on the users. Some of that was straightforward (but not so common) community-building stuff – in the early months, they had a news post every Sunday, and it was full of expressions of gratitude for all the reviews, sounding sincerely thankful and amazed at how well it was going. (I haven’t participated in their forums, but they sound like lively places, too.) But the content itself was all focused on the users, too: rather than talking about how clever they are, they were focused right from the beginning on how, when you go to Japan, you’ll be able to find your way around and talk to people. And done in a style full of personality and, as far as I can tell, honest expressions of themselves: I’m sure it would drive some people crazy, but if it works for you, it’s great.
And it worked for me. I’d flirted with learning Japanese before, so some of the lesson series (Survival Phrases) were easy for me. (Though even those I wasn’t bored by, and if I were actually traveling to Japan, I’m sure lots of the specific topics would be very useful.) The Beginner series was just right for me, though: I had to pay attention to it almost from the first episodes if I wanted to understand everything, and I would finish each lesson by listening to the opening dialogue over again to make sure I got it all. (And once every few weeks I’d have to listen to an entire lesson twice, because of something I missed.) I’d gradually learn more stuff as the weeks went by; about 150 episodes of that series later, I’m still feeling that it’s a great level for me, still providing an appropriate challenge level.
The Intermediate series started out too hard for me, and continues that way, but I still like listening to it just to get the sound of the language in my ears. (And they recently commented that the early Intermediate lessons are easier in some ways than the current Beginner lessons; I went back and listened and, you know, they’re mostly right! Wow!) There’s other nice stuff, too, like Japanese Culture Class episodes once every week or two.
So it’s a great mix of stuff: lessons for a range of levels, and I enjoy even the levels that aren’t targeted at me. Actually, I’m glad that they’re not all targeted at me: I would simply be unable to keep up with seven lessons a week at the Beginner level. For one thing, I’m pretty sure I’d burn out: I have enough experience with learning stuff that I know that, if I push myself hard, it’s a lot of fun for about three months and then I just run into a brick wall. And, for another thing, the Beginner lessons demand enough concentration that I can’t listen to them while driving, so I basically only listen to them when jogging or grocery shopping, which I don’t do every day. (In contrast, the lessons that are either easier or harder are fine while driving, though I’ve gotten in the habit of pairing one Beginner lesson with one other type of lesson every time I jog, and so rarely listen to anything in the car.)
Unfortunately, it took several months for me to realize how much I liked it, and several more months for me to work up to a reliable schedule. With a daily podcast, you can fall behind really fast; I’m pretty sure I was more than a year behind at some point. I’ve been catching up since; I’m up to the middle of last November, and my current pace has me going at about 5/3 real time without signs of burning out. My goal now is to be completely caught up a year from now; I’ll check back next summer and let you know how it’s gone!
My fondness for the podcast is actually forcing one rather tough decision on me. A year ago, I thought about what to do next; two main contenders were learning Japanese and learning Ruby. I decided to do the latter (though hardly single-mindedly), and I still have quite a lot in that vein that I want to do. Having said that, I’ve been devoting enough time to Japanese as well that it would be a bit of a shame to lose that, and I’m afraid that, without some effort to consolidate my knowledge, it will get rather less satisfying soon.
Let me be clear: I don’t consider myself to really be learning Japanese now. I’m listening to podcast episodes and being exposed to new vocabulary and grammatical structures in such a way that, at the end of each episode, I can listen to the dialogue at the start and feel that I understand it. But I couldn’t typically engage in a similar dialogue myself, or feel confident that I’ve really mastered the grammar involved. It’s the difference between responding to cues in context after a reminder and really knowing something, and it’s not the fault of the podcast: that’s all you can hope from 10 or 15 minutes a day without study outside of the podcast. (Their website provides tools for that study, should you choose to use them and to pay money.)
And I’m afraid that, as the grammar gets more complex, it will become more obvious to me that I really don’t know the material, and will be harder and harder to get as much out of the episodes. In fact, my gaps are already starting to be painful: I’ve just gotten to the part where they introduced the Lower Intermediate lessons, and the dialogue in the lesson notes is in kana instead of romaji. I can puzzle out kana just fine, but I can’t read it with anything like the fluency with which I can read Roman characters; that’s exactly the sort of thing that I should be able to learn how to do if I just take time to practice it and that would help me a lot in providing a solid foundation for other sorts of learning.
And I’m sure there are a lot of basic vocabulary and grammatical structures that would similarly repay a bit more concerted study. I don’t necessarily want to immediately memorize everything new in each Beginner lesson, but it would help if I had the material from, say, six months earlier down pat. If I could do that, I think I really would be on the path to learning Japanese.
So I’m starting to think that it’s time to break out my old textbook, start writing down vocabulary flash cards, and get to work. Or maybe buy a new textbook – one of my coworkers was greatly amused by the “for Today” part of the title, and somehow my showing him the insert explaining that, in this modern world of 1988, color televisions are a standard appliance in Japanese households, didn’t convince him of its modernity. But I think it’s pretty well written, so I’m planning to stick to it – after all, I have JapanesePod101 to explain modern vocabulary to me, so I won’t be left in the dark if I hear a hip Japanese person referring to the governor of California as “Shuwa-chan”. Figuring out how to budget time for that is not going to be easy; I think it can be done, but I want to think it over for a bit before committing. I suppose, though, there is one bright side to this lack of time: when I flirted with learning Japanese in grad school, I was unsuccessful largely because I took it too fast and burned out; time pressures should do a good job of preventing that from happening this time, I hope.
I’ll let you know how it all turns out a year from now, when I’ve finished off this queue.
is every single one broken, or what?
June 18th, 2007
I don’t know how I missed this interview with a Microsoft bigwig about Xbox 360 failures. Even if you have zero interest in video games, it’s a stunning display of interview stonewalling.
My first reaction: how can they possibly think that responses like this are a good move? My second reaction: but what if this response really is a good move? How bad would the problem have to be for that to be true?
console buying thoughts
June 17th, 2007
I just fired off a letter to my local paper’s video game journalists, in response to a question they asked in their latest podcast. And, of course, blogging has trained me to never type something longish without thinking about whether it could be considered even vaguely relevant for the blog! So: a lightly edited version.
You asked in your podcast if people were avoiding buying a 360 because of the quality problems; I sure am. I think it probably has the best set of games (for me) of any console this year: I’m really excited about Mass Effect, Eternal Sonata, Bioshock, GTA IV, and there are a fair number of already released games that I’d like to play. And when Mass Effect was supposed to come out in May, I was seriously thinking of buying one, to avoid the summer slowdown. I didn’t want to, though, primarily because of the serious quality problems but also because $400 is more than I’d like to spend on a console.
Now, though, none of those games are scheduled to come out until fall. I’m still excited about them, but right now it looks like a better strategy for me is to stick with the Wii for the rest of the year (which won’t exactly be a deprivation, given Metroid and Mario and Smash Brothers). Presumably Microsoft will come out with a 65nm model this fall that will be cheaper and more reliable and quieter, so maybe I’ll buy that; the quality problems with the first model were so bad, though, that I think I’d rather wait for a few months after the new model is out to see if it really is better.
In fact, now I’m thinking that my console purchase of the year will be a second DS so that Liesl and I will both be able to play it while we’re on vacation this summer. It’s looking like I won’t be done with Etrian Odyssey by then, she’s not yet done with Elite Beat Agents, and both of us will want to play the second Phoenix Wright game. Who knows, maybe I’ll even break down and finally buy a Pokemon game! And Miranda can play some of those games, and there will be Picross and Brain Age 2 later in the summer. (And Brain Buster Puzzle Pak was just released – I really like some of the obscure Japanese puzzle types it includes, like Nurikabe. But why isn’t it available from Amazon?)
On the summer DS game note: why on earth isn’t Nintendo releasing Picross and Brain Age 2 at the start of the summer instead of the end of the summer? They both sound like perfect games to play if you’re on a plane flight or decompressing in your hotel room or something. The whole industry’s lack of summer games completely mystifies me – it’s when kids have the most free time, and if nothing else you’d think the handheld market would be flooded during the summer.
I’m a hardcore gamer (albeit with relatively broad tastes); if the quality problems and price are causing me to buy a second DS instead of a 360, then Microsoft has really screwed up. The first wave of 360 games weren’t enough to grab me, but the second wave very much would be, despite the high price, if the console’s quality were anything close to normal…
ruby talking to mysql
June 16th, 2007
My current programming project at home is to port my dbcdb code from Java to Ruby. So far, I’m working on porting over the CLI tool, which lets me update the database to add books that I’m reading, update information about them, etc.
Until today, I’d been using a fake database abstraction that I made up; today, I started plugging in the real MySQL stuff. Looking at my svn commit history, I see it took me an hour and 20 minutes to get the first bits working with MySQL, which I think is pretty good given my vast ignorance of SQL. It would have been faster if I’d had an interface to work with that was closer to the JDBC interface, because I’m a little familiar with the latter (and in particular it had affected my fake database abstraction), while I have to look up the syntax every time that I have to write raw SQL to add rows / modify rows in a table. I had a particularly fun 15 minutes where I was getting an SQL syntax error stemming from the fact that I’d used “order”, which is a reserved word in SQL, as one of my column names. Eventually I noticed that I’d enclosed the column name in backticks elsewhere, at which point that mystery got resolved.
I had plans to unit test my SQL layer, using an in-memory database, but I couldn’t find a convenient way to do that, so I ended up leaving it without unit tests. There’s a good set of acceptance tests, so I’m not particularly worried about things breaking; for now, typing things in by hand is working fine in getting me to a state where I can run the acceptance tests. The problem with running the acceptance tests right now is that they’re mostly an all-or-nothing thing; I decided to implement the SQL glue necessary to add entries before implementing the SQL glue necessary to modify entries, and unfortunately that’s all jumbled together in the acceptance tests.
It was really a lot of fun: once I had things to a state where I was ready to write a command-line script, I spent maybe 15 minutes correcting stupid syntax mistake after stupid syntax mistake just to get as far as issuing the first SQL command, but when I got that far, magically all sorts of things just worked, and I could go over to the mysql command line and see the data just sitting there in the table! Way cool.
I suppose I might as well share a bit of code. The string for inserting data into a table is this:
def insert_string "INSERT into `#{@name}` #{@row.fields} VALUES #{@row.values};" end
Here are the definitions of the fields and values methods on the row class:
def fields
  parenthesized_list(@values.keys) { |key| "`#{key.to_s}`" }
end

def values
  parenthesized_list(@values.values) do |value|
    "'#{@connection.quote(value.to_s)}'"
  end
end
Which is nice and pretty. The definition of parenthesized_list, though, I’m not so thrilled about:
def parenthesized_list(array)
list = "("
first = false
array.each do |element|
if (first)
list += ","
else
first = true
end
list += yield element
end
list + ")"
end
I looked through the Array and Enumerable interfaces, but I didn’t see any way to really improve that. Which seems odd – am I missing something? If not, I should do some refactoring: it wouldn’t surprise me if that’s the longest method in my code base right now. (When the body of a Ruby method gets above 3 lines, I usually start to get nervous…)
I’m really excited about this. I’d been putting this step off for a while, and it was by far the biggest unknown in my current work. So for me to have made a concrete step towards the SQL integration in a single not particularly long programming session was a very pleasant surprise indeed. If I can find time tomorrow (which I may or may not be able to do – we’re going to a performance of H.M.S. Pinafore), I should be able to finish off the CLI tool: the remaining step should be a little smaller than this one.
After which I’ll have to start thinking about the other part of the project, namely the part that generates web pages. I suppose one big decision is whether to roll my own XML creation library or to use an existing one. The former sounds a little more fun, and will probably make it easier to generate output that matches my acceptance tests, but I certainly want to stop reinventing the wheel at some point.