
earthquake

November 1st, 2007

Now I know what an earthquake feels like. About time; I’ve been living in the Bay Area for nine years, after all.

finished converting dbcdb to ruby

October 28th, 2007

I’ve finally finished converting dbcdb from Java to Ruby. I’ve been using the Ruby version of the tool to write the database for about four months, but I’d still been using the Java version to write the web pages.

Nothing too deep going on here; I was actually done with everything but the indexes as of the middle of September, but I hadn’t gotten around to generating the indexes until this weekend. (Or do people prefer that I spell it ‘indices’?) We’ve been busy with some extra event every single weekend for about the last two months; combining that with wanting to learn Japanese, working through Metroid and Picross, and occasionally working on the game with Miranda means that, unless I’m feeling extraordinarily disciplined, dbcdb falls by the wayside. But we had nothing planned this weekend, so I seized the opportunity.

The new code is a little more than half as long; the acceptance tests also run almost twice as fast. (All that JVM startup takes time, I guess? I don’t think there are significant algorithmic performance differences between the two versions.) Go Ruby, though I’m sure it would be very easy to find situations where the performance goes the other way. Both generate the exact same output, as verified by running the same acceptance tests on both versions and, ultimately, by doing a diff -r on the two versions’ output from the current live database contents.
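To make that check concrete, here’s a rough sketch of the comparison I mean; the two command lines are hypothetical stand-ins rather than the actual dbcdb invocations, but the shape of the check is the same: generate both trees, then let diff -r be the judge.

```ruby
#!/usr/bin/env ruby
# Sketch of the output comparison described above. The two command
# lines are hypothetical stand-ins, not the real dbcdb invocations.
require 'fileutils'

java_out, ruby_out = 'out-java', 'out-ruby'
[java_out, ruby_out].each do |dir|
  FileUtils.rm_rf(dir)
  FileUtils.mkdir_p(dir)
end

system("java -jar dbcdb.jar --output #{java_out}") or abort 'Java version failed'
system("ruby dbcdb.rb --output #{ruby_out}")       or abort 'Ruby version failed'

# diff -r exits nonzero if the two trees differ anywhere.
if system("diff -r #{java_out} #{ruby_out}")
  puts 'outputs are identical'
else
  puts 'outputs differ'
end
```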

What next? There are some cosmetic tweaks I may or may not get around to making; I’m not feeling any urgency on that score right now. I had planned to next convert this from generating static web pages offline to generating them dynamically via mod_ruby; now I’m feeling distinctly less interested in that idea. (Partly because the REST book reminded me of some of the benefits of static web pages, ironically.) I still want to experiment with that at some point, but now I’m thinking I’ll just do that by coming up with a Rails project instead of doing everything from scratch.

So it looks like it might be time to declare this a success and move on. And it has been a success, no question: I’ve brushed up on my Java a bit, dabbled with SQL, learned Ruby, and basically enjoyed myself. So, from a purely didactic standpoint, I’m quite happy.

There is one thing that I’m not happy with, though. I’d originally envisioned the generated web pages as actually being useful in that they’d provide an index into my blog posts: they would give an easy way for people to find all the web pages where I write about a given game, say. And they do provide an index, but it’s not as easy as I’d like: people have to click on the link to the database and then click from there to a search link, and that’s expecting quite a bit from my readers. (Especially since there’s honestly nothing of particular interest on the database web page itself.)

So I’d like to remove one of those links, to compress it down to one level. In this AJAX-aware world, the mechanisms for doing that are pretty well-trodden: write some JavaScript to do the query in the background, and then stick the results in the database web page. And, in fact, it turns out that WordPress can generate an RSS feed of query results, so I don’t have to worry about page scraping and having details change as I upgrade my WordPress installation. (Which I should really do one of these days – I’m still on 2.0…)
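To get a feel for the WordPress side of this, here’s a minimal Ruby sketch of pulling that feed of query results; it assumes the standard ?s=...&feed=rss2 search-feed URL, and the blog address and search term below are placeholders. The eventual JavaScript would do essentially the same fetch from the browser and splice the titles and links into the page.

```ruby
# Minimal sketch: fetch WordPress's RSS feed of search results for a
# given term and list the matching posts. Assumes the standard
# ?s=...&feed=rss2 search feed; the blog URL below is a placeholder.
require 'open-uri'
require 'rexml/document'
require 'cgi'

def posts_about(term, blog = 'http://example.com')
  url = "#{blog}/?s=#{CGI.escape(term)}&feed=rss2"
  doc = REXML::Document.new(open(url).read)
  doc.elements.to_a('//item').map do |item|
    { :title => item.elements['title'].text,
      :link  => item.elements['link'].text }
  end
end

# Example: every post mentioning a given game.
posts_about('Picross').each do |post|
  puts "#{post[:title]}: #{post[:link]}"
end
```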

One last task, then. Which is made a bit harder than it would otherwise be by the fact that I don’t know how to write JavaScript, I’m not familiar with the DOM model (if indeed that’s the right term to use), and I don’t know how to acceptance test AJAX. But I’m not particularly worried about any of these: like I said, this is a well-trodden path, so it shouldn’t be very hard to find examples that do pretty much exactly what I want to do.

agile open california: the sessions

October 26th, 2007

And now to some actual content from Agile Open California. As I mentioned before, I hosted a session called I Don’t Like Pair Programming, since the topic had been on my mind after our team meeting the previous week.

The title isn’t really accurate: I usually enjoy pair programming when I’m doing it, but my brain nonetheless persists in not looking forward to it. Which is weird—I would like to think that my brain is generally more sensible than that—so I thought I should ask for help understanding myself.

My guess is that it was the single most sparsely attended session at the conference: there were about five of us there. But I really enjoyed it, and am quite grateful to the others who showed up. The main piece of psychological advice that I picked up was contributed by Rob Myers (who learned it from the AYE Conference): it’s not that introverts don’t like people, it’s that we need time alone to recharge. Which suggests that, perhaps, if I had, say, one or two hour-and-a-half pairing sessions a day, I’d be more likely to enjoy (or less likely to feel trepidation towards) pair programming than I do in my team now, where a pair typically sticks together until either the day ends or the card ends. This would require other changes in the way my team operates, but I suspect they’re all to the good: we hog cards too much as is.

Steve Bockman also gave us the following pairing mechanics suggestion: pass the keyboard every time you write a test, get a test to pass, or refactor. Which sounds like fun to try; I don’t think that I’m bothered by keyboard hogging, but other people probably are, and I’m certainly curious what the experiment would feel like.

The other thing I wanted to talk about from the conference: for whatever reason, some discussion that we had during a session that Rob hosted on “Maintaining a Collegial Environment in a Competitive World” got me thinking about teams transitioning to agile. (No idea how the topic led to this thought, which is a sign of the strengths of the conference.) Some people (e.g. Ron Jeffries) strongly suggest that, if you’re intrigued by XP but aren’t sure it’s all a good idea, you should just do it by the book for a while, and see how it works. (As opposed to starting with the parts that make sense to you, and then maybe adding in more stuff and maybe not.)

But we can juxtapose this against two other ideas from the agile and lean worlds:

  • When dealing with legacy code, there’s a temptation to just rip it out wholesale. But that’s a good idea much less often than you’d think: it’s often much better to accept the code and gradually refactor and add tests as you need to touch it.
  • Taiichi Ohno has suggested (I believe; I’m too lazy to dig up a reference) that you shouldn’t start with a standard work specification that’s too good: it’s better to start quickly with something mediocre, not least because it will be easy to find ways to improve it!

I’m not going to take either of these as suggesting that, in general, Ron is wrong: for one thing, sometimes the right thing to do with legacy code is to throw it out, and I’ve read stories from the lean world about Japanese experts coming in, making drastic changes, and getting immediate results. But what they do suggest is that, rather than focusing on getting the details of the situation right immediately, it’s better to focus on how to improve the situation in a disciplined manner. In other words, refactoring supported by testing in the code world, continuous improvement via formal experiments in the process world.

netapp countersuit

October 25th, 2007

My favorite bit from my employer’s counterclaim in the NetApp case:

COMPLAINT PARAGRAPH 3:

NetApp is a pioneer in the design of data storage systems marketed throughout the United States and abroad and continues to innovate new advances in data storage technology. NetApp’s patents cover a host of advanced features found in NetApp’s award-winning Data ONTAP storage operating system and Write Anywhere File Layout (WAFL) filesystem. These include fundamental developments in filesystems, data consistency, data integrity, storage management, write allocation, read-only data images (Snapshots™), writeable clones, copy-on-write, RAID arrays and assimilation, and file system image transfer. NetApp’s patented features are demanded by customers the world over because they greatly enhance the performance, reliability and ease of use of data storage systems.

ANSWER TO COMPLAINT PARAGRAPH 3:

Sun denies the allegations of the first, third and fourth sentences of paragraph 3 of the Complaint.

I’m not sure we come off completely smelling like roses in this suit, either, but if we’re going to have a patent lawsuit, this is the right thing to do with the gains:

I am committing that Sun will donate half of those proceeds to the leading institutions promoting free software and patent reform (in specific, The Software Freedom Law Center and the Peer to Patent initiative), and to the legal defense of free software innovators. We will continue to fund the aggressive reexamination of spurious patents used against the community (which we’ve been doing behind the scenes on behalf of several open source innovators). Whatever’s left over will fuel a venture fund fostering innovation in the free software community.

Some reading on the subject:

agile open california: the mechanics

October 24th, 2007

I spent Monday and Tuesday at the first Agile Open California. I learned several things there, which I hope I’ll find time to blog about over the next couple of days, but I want to start by talking about the format.

Actually before getting into the format, I want to talk about the setting. It was at Fort Mason, which is right on the northern edge of San Francisco, and took place during what must have been the two most delightful days of weather in S.F. that entire year: sunny and right around 80 degrees both days, either of which is a rather improbable event. The conference started with me sitting in a large room with half of my attention out the window looking at the sailboats in the marina and the Golden Gate bridge behind them. On the second day, I discovered the delights of spending free time on an outside stairs/fire escape on the northern side of the building: nice and warm, with a bit of a breeze, and while I couldn’t see the bridge and the marina from there, I could look straight across the bay to the northern shore.

So: hard to beat the setting. As to the format: it was the first time I’d gone to an all-open space conference. No talks prepared in advance: we spent the first thirty minutes or so coming up with the schedule, with people just writing a topic on a piece of paper, standing up and explaining it, and putting it on a schedule grid on the wall. Much to my surprise—I had zero plans of doing this going in—I turned out to be one of those people, because I had something on my mind after last week’s team meeting. (My topic being “I Don’t Like Pair Programming”; I’ll leave that to another blog post, other than to say that the title isn’t really accurate.) At the end of this, we had a schedule that averaged around three sessions at any given time, with me always at least somewhat interested in one of the sessions in any given slot and, as often as not, interested enough in two of them to wish I could be in two places at once. Basically, while hardly a stunning revelation, it was about as good a conference schedule as I’ve seen.

Which brings us to another aspect of open space: while you can’t actually be in two places at once, you are explicitly encouraged to flit from session to session to follow your interest. I really didn’t do this, though: I stayed in a session from start to finish (except one time when I left early and went out to the aforementioned stairs to hang out and read a book), and pretty much everybody else seemed to do the same. (There may have been a contingent of people who didn’t really go to the sessions at all and just hung out and chatted; I’m not sure.) Part of this was that I was enjoying the sessions, part of this was that the sessions led to intricate enough conversations that I’m not sure I would have gotten a lot out of jumping into the middle of them, and part of this was that there weren’t any times where there were two sessions that both looked really interesting and where the one I went to first proved to be uninteresting.

So: the format doesn’t provide a miracle solution to the “be in two places at once” problem. Which is too bad, but what took me until the second day to realize was that it does provide a solution to another problem. Namely: at some point during grad school, I developed an allergy to lectures. I like learning about things by listening to other people, but I want a chance to talk back, to express all the thoughts that they spark, and to do this with more than one person at once. Which I got to do for a couple of days, and it was great! Sure, some of the discussions were sort of meh, but some of them were completely fascinating, and I found myself having sudden revelations out of the blue, which I could express immediately, get feedback on, and refine in real time.

Honestly, I might now be spoiled for normal conferences. I won’t propose this as the miracle solution for everything: for one thing, not everybody is as allergic to lectures or likes talking as much as I do, and for another thing it probably works better in a setting where intermediate or advanced practitioners are trying to refine knowledge than, say, a setting where novice practitioners are trying to acquire knowledge. (At least, that’s what I’d guess, but who knows.) And, while I generally prefer to get my canned knowledge in book or other written form, I’ve been to some wonderful talks in my time. But I tentatively think that this is now my new default best practice for running conferences, and any other format has to justify itself.

One other unusual thing about the conference: the makeup of the audience was rather different from what I’m used to in a setting discussing a technical topic. (Which makes sense, actually: agile focuses a lot on interpersonal matters, and open space probably draws more eccentrics than the norm.) In the closing session, I counted 17 men and 13 women; not all the attendees stayed through that session, but I don’t think the gender ratio of the stragglers was particularly different from the rest of the conference. Also, I felt like a minority in either my role as a programmer or my role as a people manager: in particular, there were both more consultants and more product/project managers than I’m used to spending time with. Which is great: always good to be exposed to new viewpoints, they certainly had interesting things to say, and the consultants were people who clearly had a lot of valuable knowledge/experience and whom I would be happy to bring into a project that I was working on.

So: sign me up for next year! (Even if the conference can’t guarantee the same weather again.) Who knows, maybe I’ll even gather up a bit of foolhardiness and volunteer to help organize.

magic flute

October 22nd, 2007

We took Miranda to her first opera yesterday, a performance of Die Zauberflöte. The costumes were absolutely stunning; they were designed by Gerald Scarfe, had gorgeous vibrant colours, were very inventive and amusing, and a pleasure in every way to watch. (I wish I could find some good photos online, but my searching didn’t turn up anything that did them justice.) A bit too amusing, actually – in a few places, the audience’s laughter got in the way of the music – but I’ll take that.

The performance itself was good, if not breathtaking. Not that I’m in the best position to speak on the matter – my collection of opera CDs is rather heavily weighted towards the 20th century, Benjamin Britten in particular. (I can also be found singing snippets of The Mother of Us All not infrequently; why didn’t I go see that when the SF Opera performed it four or five years ago?) Actually, my CD collection is notably light on Mozart in general; I love his Requiem, but I can take or leave most of the rest of his work. (I have been known to enjoy playing his piano sonatas at times.) I’m certainly glad we went; Miranda enjoyed herself, and she wasn’t alone in that. Not sure we’ll make a habit of opera, but we really should start going to orchestra concerts. How is the San Francisco Symphony? (Hmm, they did Alexander Nevsky this weekend.) Are there any decent orchestra options down the peninsula?

go rockies

October 22nd, 2007

My brain would seem to be rooting for the Rockies now. Which surprises me, given that I lived in Boston for nine years; I guess this is what happens when the Red Sox have recently won a World Series, have too much money, beat the Indians, and are up against a team that’s been an entertaining surprise.

cd baby now selling mp3s

October 19th, 2007

I just discovered that CD Baby is now selling mp3s. Nice to see mp3 purchase options opening up like this; also, I like both the fact that they list up front how much of the sale price goes to the artist and that the amount is 91%.

Of course, discovering this wasn’t enough to get me to buy Test Drive Songs from them instead of from Amazon. Not sure why; maybe because I was already planning to buy it from Amazon, maybe because of the price, maybe because of the Amazon iTunes integration.

micah owings

October 11th, 2007

I had not realized until listening to the radio today that the Diamondbacks’ best hitter is, in fact, a pitcher. Only 60 ABs, but still: a .333 average, .683 slugging, and a 1.032 OPS aren’t shabby at all. The announcers were saying that he’d gone 4-4 twice this season, once with 2 HRs and the other time with 3 doubles.

life-long learners my ass

October 10th, 2007

I got a look at my school district’s new report card. Most of the items are now grouped under the heading “Lifelong Learning Skills”; specifically, the group contains the following entries:

  • Listens in class
  • Follows directions
  • Works independently
  • Works neatly
  • Completes work on time
  • Accept [sic] responsibility
  • Respects classmates
  • Respects authority
  • Uses time wisely
  • Communicates effectively
  • Works collaboratively

A quiz for my gentle readers (or, even better, my snarky readers): which of these items

  1. Support life-long learning?
  2. Actively work against life-long learning?
  3. Are neutral towards life-long learning?
  4. Could be interpreted in ways that either support or hinder life-long learning, but guess which way teachers are going to interpret them?

random links: october 6, 2007

October 6th, 2007

steve yegge is two for two

October 6th, 2007

Following Steve Yegge’s recommendation, we just finished watching Last Exile; it, like Haibane Renmei, is excellent. It took a little longer to get into the story this time, but somehow we slipped from “hmm, pretty interesting, nice mix of computerized and hand-drawn graphics” to “just how many days do we have to wait until the next DVD shows up, and why did we stick a regular movie in our queue instead of restricting ourselves to episodes of this series, anyways?” (The movie was History Boys, which I actually also recommend, just not when you’re in the middle of this series.)

Now that I think about it, both series do have some elements in common. Both are set in a future world, where most of the technology feels like a not-too-distant (100-year-old?) European style, but there are interjections of advanced technology controlled by mysterious forces. (More of that in this one than in Haibane.) Transcendence that isn’t particularly well explained, or really explained at all. (More of that in Haibane.) Hmm, maybe they don’t have much in common after all; a lot more action here, more of an epic scale, more gizmos, somewhat more explanations.

We’ll take a break from his list now: Liesl’s dad gave her a few volumes of Slings & Arrows for her birthday, so that will be our series to spend time with for the time being. But I’m looking forward to getting back to his list: recommendations of that quality are a gift to cherish.

restful music stores

October 1st, 2007

One advantage Amazon’s new mp3 store has over iTunes: if there’s a song I like, I can just link to it. I believe that the iTunes store is addressable via URLs, but it’s not the same: the URL isn’t sitting there at the top of my browser window, and even if it were, I couldn’t count on my readers being able to do anything with it.

Not that Amazon’s store is perfect: I finished that sentence without finding enough songs that I wanted to link to. (Which is pretty pathetic, given its length.) But at least I now have a source for Herbert Grönemeyer’s music in the U.S.! And it could have been worse: a grand total of one of them (Herr Grönemeyer noch mal) is available sans DRM on the iTunes store.

Some of the latter, no doubt, is due to business negotiations that my poor little brain can’t understand, but much of it is due to the fact that Apple, for its own mysterious reasons, apparently doesn’t sell DRM-free versions of independent music. Which, in turn, I’ve been listening to more over the last couple of years because, when I was looking for new music podcasts, I stumbled upon one that only plays independent music, I’m sure at least partly because the major labels don’t give them any other legal options.

Go addressability; go accessibility.

throw everything at the language and see what sticks

September 29th, 2007

I think I’ve mentioned this before, but learning Japanese continues to increase my sympathy towards kids who are learning to read and misread words in ways which seem inconceivable to me. My brain is pretty much incapable of looking at a word in English and not reading it immediately; the same is far from true in Japanese. For example, one of my vocabulary cards has a character written on the front, and the readings shutsu, desu, and deru on the back. (With their meanings.) At least that’s what I thought was written on the back for several days, until I took a closer look and noticed that the second reading was dasu, not desu. Oops. I mean, it’s not like da and de even look similar; I simply wasn’t paying attention, and my brain isn’t yet wired to read correctly when I’m not paying attention.

I started off studying the language with the help of JapanesePod101 and a textbook (Japanese for Today). Then I added Read Japanese Today, which I continue to think is an excellent way to learn kanji. I’d also been using Kanji & Kana as a reference book, so I got my stroke order right when writing characters for vocabulary cards; over the last few months, however, I’ve found myself browsing through it more often in odd moments.

It’s a book I’ve had around since the last time I tried to learn the language. It contains the government-approved list of 1,945 basic kanji, showing how to write each one and giving the various readings and meanings, as well as a few compounds in which they appear; and it does so in an order based more or less on how important they are. A great book to have around, if you want to immerse yourself in the basic kanji; last decade, I tried to go through the book and memorize the kanji in order.

But I went too far with the book. At one point, I could go through the first 200 characters or so, and write them down in the order given in the book, with the proper stroke order. Which is a very seductive thing to do: it gives you something to practice if you just have some spare time, or are falling asleep at night, or whatever. The problem is that my memorizing of the strokes got ahead of my memorizing of the readings and the meanings, so things got unbalanced.

Because of my bad experience, I stayed away from doing the same thing this time. But then I glanced through the start of the book and realized that I claimed to know most of the characters on the first few pages. So what’s the harm in memorizing the order in the book, and reviewing the strokes in my head?

Thinking about it more, I think that not only is there no harm, there’s virtue in it. If I claim I know a character, even if I’m only interested in reading the language rather than writing it, I have to be able to recognize it completely reliably; given the number of characters that look similar, in practice I can’t claim that unless I can write the character. But vocabulary cards, by their nature, don’t give me practice in writing characters. So I have to find another way to practice writing them; memorizing them in the order in that book is as good a way to practice that as I can think of.

Having said that, I don’t want to forget what happened last time. I think/hope I’m doing a better job of managing my learning; the key here is to not have my memorizing how to write the characters get ahead of my memorizing their readings/meanings. If I do that, I’ll be okay.

The other book I’m reading right now is Japanese the Manga Way. It’s a relatively informal grammar of the language, with examples taken from manga. Which works well: besides being fun, manga gives a natural source of language examples that are closer to regular spoken Japanese than other written examples would be.

Other things I like about the book: for one, I can occasionally figure out what the examples are saying, kanji and all, before reading the explanations. And, for another thing, it presents the grammatical points in a rather different order than other sources that I’m using. (Perhaps because it isn’t constrained by having examples only use material that has been previously introduced.) I like seeing another lens on the language, one which is perhaps a bit more coherent than others I have access to and less intent on mapping the grammar to concepts in English.

The other thing I’ve been doing is watching (the excellent) Last Exile in Japanese with subtitles; again, nice to occasionally be able to figure out by myself what people are saying. Don’t get me wrong, the vast majority of the time I depend very much on the subtitles, but I’m starting to get the feeling that it might really stick this time.

Or maybe I’ll burn out in another couple of months! Always a possibility…

amazon mp3s

September 25th, 2007

I bought my first mp3s online today; I am very happy to report that I didn’t do it through Apple. I don’t plan to make a habit of it – CDs have served me well for the last 21 years, and I see no reason to stop buying them now – but “And Try” by Ten Days Till is an excellent song, and is unavailable on CD. And is unavailable without DRM from iTunes, and I didn’t feel like signing up for a subscription service just to get it. Amazon, however, has it available without encryption; Amazon now also has a bit more of my money. (Though not very much in the grand scheme of my purchases through them…)

mad at apple

September 25th, 2007

I was quite impressed by Apple’s recent iPod announcements. Most companies, I think, would have been coasting for some time if they had as dominant a product as the iPod. Apple, however, is continuing to push ahead with a constant stream of improvements ranging from the subtle to the groundbreaking. Which is awesome: what I want is for companies to make the best products in the world and then figure out how to make them even better, to continue to open up new possibilities. There were some things that made me wonder – in particular, the way they handled ringtones was so stunningly anti-consumer that I had to assume there was some sort of behind-the-scenes negotiation that caused it to make sense somehow – but all in all I thought it was great.

I started to wonder, though, when I read that their new user interface was quite a bit slower than their old one. I’m all for user interface improvements, but eye candy and UI improvements aren’t the same thing at all, and I’m not nearly as big a fan of Cover Flow as Steve Jobs is. Don’t get me wrong, I’m not against a bit of eye candy, but not at the expense of taking 41 seconds to boot a frigging MP3 player. So maybe I was too rash to assume that they’re doing a constant stream of improvements: maybe this is the sign of the design starting to go off the rails?

And then I learned that Apple apparently is trying to prevent users from syncing their iPods without using iTunes. Which really hit a nerve with me, for reasons I don’t entirely understand. (I don’t have the same emotional reaction to, say, Nintendo trying to prevent me from using their hardware with disks/cartridges that they didn’t press/manufacture.) I think it’s partly a reaction to their going from a somewhat open platform to a closed platform, and partly the narrow concrete effects of their actions. A de facto monopolistic tie of having 99% of iPod users use iTunes isn’t good enough: they have to make it very clear to us that they’re not seeking to become a monopoly as an accidental outcome of making the best product out there, they’re seeking to become a monopoly because they want power and they want us to know it.

This is where I’m supposed to say that I’ll never buy another iPod again. The truth is, though, that I care enough about having a good interface to listen to music and podcasts that I’m not at all comfortable with saying that. I am, however, actively rooting for them to get their asses handed to them in European courtrooms.

game pictures

September 19th, 2007

Apologies for my recent silence; the cause is a combination of watching movies (well, DVDs, mostly Last Exile) and being pretty busy last weekend. But now I am, for once, caught up with my other odds and ends (i.e. reading blogs) early enough at night to actually be able to write something.

As I mentioned before, Miranda seems to have gotten serious about the idea of us writing a video game. And we actually have been spending some time on it over the last month, mostly at her prodding. So far, I’ve mostly been playing around with programming, while she draws pictures in a notebook. I’d been using rubygame as a programming framework, and I still might stick with it, but it doesn’t have support for sprites at different depths; this is a problem if, say, you want to have a character walk behind a tree. So now I’m thinking I’ll go with gosu: not much documentation yet, but it seems to be able to do what I want, its sample game is extremely short yet fully functional, and when I was poking around its web site, I saw several pages that showed signs of having been edited within the last hour. All good stuff.
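To make the depth issue concrete, here’s a minimal sketch of the sort of thing I have in mind in gosu, going from its sample game rather than from any deep experience with the library; the filenames are placeholders. The third argument to draw is the z value, and sprites with higher z are drawn on top, so the character can pass behind the tree regardless of draw order.

```ruby
require 'gosu'

# Minimal depth-ordering sketch (not the actual game). The image
# filenames are placeholders for whatever art ends up in the game.
class DepthDemo < Gosu::Window
  def initialize
    super(640, 480, false)
    self.caption = 'depth demo'
    @tree      = Gosu::Image.new(self, 'tree.png', false)
    @character = Gosu::Image.new(self, 'character.png', false)
    @x = 0
  end

  def update
    @x = (@x + 2) % width   # walk the character across the screen
  end

  def draw
    # Higher z draws on top: the tree (z = 2) stays in front of the
    # character (z = 1), so the character walks behind it.
    @tree.draw(300, 180, 2)
    @character.draw(@x, 220, 1)
  end
end

DepthDemo.new.show
```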

So, right now, I’m trying to find time to convert my rubygame spike into a gosu spike; assuming it goes well, I think I’ll go with gosu. But what should Miranda do while I’m doing my programming?

She’s drawn lots of neat pictures, and I’m sure she could profitably continue along those lines for quite some time. But, if you’re doing things incrementally, you want something functional crossing all layers as soon as possible; by now, my programming is coming along well enough that I could imagine using a picture of hers, and she has drawings to give me. So the only thing stopping us from putting the two together (other than that I’m switching development frameworks!) is that I don’t know how to get her pictures in the game!

Given that, the next step is clear: rather than puttering around with game libraries, I should face up to my fears and attack that problem head-on. So when Miranda asked me this morning if we could work on the game this evening, I decided we should start on digitizing her pictures. Fortunately, my brother was kind enough to give us an all-in-one printer/scanner/copier/fax doohickey last Christmas; time to break in the scanner functionality. Which we did, giving us an electronic copy of one of her designs.

Next, a graphics editor: at the very least, we need the backgrounds of her images to be transparent instead of white. I’d considered and mostly rejected Pixen earlier, but hadn’t found anything better in the interim, so I decided to give that a try. Somewhere either from Scott McCloud or Penny Arcade I’d gotten the idea that the proper technique is to take a scanned-in drawing, add a transparent layer on top, re-ink and color the drawing on the new layer, and then hide the original drawing. Which took us half an hour or so to figure out, both of us being new to the software and ignorant about the details of the process, but ended up working out just fine. So the result is that two black-and-white pencil drawings have turned into colored PNG files with transparent backgrounds; I should be able to just stick them into the game (maybe doing a bit of resizing first) and see how they look. Which will be very exciting!

Watching her do this has also gotten me more convinced of the merits of graphics tablets: she was happy to ink in the lines with the touchpad, but I’m sure it would have been much easier with a tablet. I’m not going to go out and buy one immediately, but she’s sticking with the project well enough that my worries that she would lose interest in a graphics tablet are quickly diminishing. (She’s also spent a lot of time playing around with SketchUp over the last few months, incidentally.)

A fun way to spend the hour between getting home and starting dinner.

a pox on both their houses

September 9th, 2007

On Thursday evening, I tried to log in from home. My computer got an IP address, but I couldn’t connect to any external web pages.

I fired up a terminal, and did some name lookups; that worked. At least more or less – it gave me an address, but also said something about not getting a response from other name servers. I did some tracerouting to make sure I could reach the host I was trying to get to (the server hosting this blog); it seemed to be getting there, so why couldn’t I log into it, or anywhere else?

I looked at the IP address again; it didn’t seem familiar. On a lark, I tried to resolve www.google.com, and got the same IP address. And a reverse DNS lookup on the address claimed that it was assigned to Comcast. I had some other DNS servers around, and tried them; I think (but can’t remember for sure) that they actually gave me a correct address, but a traceroute to that correct address failed.
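The same lookups are easy to script, for what it’s worth; here’s a little Ruby version of what I was doing by hand, using the standard resolv library. The hostnames are just examples; the point is to see whether every name is mysteriously resolving to the same address, and what that address claims to be.

```ruby
# Scripted version of the hand diagnosis above: forward-resolve a few
# hosts, then reverse-resolve whatever comes back.
require 'resolv'

%w(www.google.com example.com).each do |host|
  addr = Resolv.getaddress(host)
  reverse = begin
              Resolv.getname(addr)
            rescue Resolv::ResolvError
              '(no reverse record)'
            end
  puts "#{host} -> #{addr} -> #{reverse}"
end
```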

I rebooted the router and cable modem; no dice. Clearly a Comcast problem; it was late at night, they were doubtless fixing it, so I went to bed.

Bad assumption, it turned out: it was a Comcast problem, but it lasted into the next day, and, when I called them, they were blissfully unaware of the situation. It seems that, for no apparent reason, they’d somehow unregistered my cable modem (which I was renting from them!), and the tech I talked to couldn’t reregister it for some reason. Him telling me that it probably wouldn’t take more than 48 hours to fix didn’t exactly lighten my mood.

Next morning, I tried again; it still didn’t work. At least if I went through my router – if I plugged my computer directly into the modem, it worked fine. So maybe they reregistered my cable modem but screwed up something else? Of course, it’s possible that my router decided to break at exactly the same time as Comcast screwed things up – it hasn’t exactly been a paragon of stability – but it seemed unlikely. Still, I spent a while getting myself very familiar with the router’s administrative interface (its idea of “diagnosis” turns out to be an option to let me either ping or traceroute), and even reset it to its factory settings; no dice.

So I got on the phone to Comcast. They, of course, tried to blame my router. I talked to the first person’s manager; she insisted that Comcast wasn’t even capable of filtering based on MAC addresses. So: maybe it really is a coincidence? Time to get on the phone to Linksys, I guess.

And I did; it was a good thing that I had a copy of Picross at hand to amuse myself during the wait times. During which time, they made me listen to stuff that I found actively insulting. Does anybody really think that telling me that their support is allegedly “award winning” is going to make me feel happy to be on hold? The only mental model in which that makes sense is if they think that people will be happy to be on hold because they’ll be reassured that at least the service at the end is worth waiting for. This might make Martians feel better – I wouldn’t know – but, for us humans, it doesn’t work that way. All they’re doing is showing that they realize that they’re treating their customers like crap by forcing them to wait that long, and are even more clueless than normal about what to do about it.

I eventually got off hold and talked to somebody. Which was a bit of an adventure: even though she had my serial number, she wanted me to also read her a version number, which was nowhere to be found on the bottom of my router. She wanted to know what kinds of computers I was using; after refusing to answer the first time, I let slip that I was using a Mac. Which, it turns out, is unsupported; after I pointed out to her that it was their god-damn router that couldn’t communicate with the cable modem, and that I was using their own administrative web interface, she relented and agreed that she could maybe provide support for the web interface.

At which point, she actually had useful suggestions: go to the Setup tab, go to the sub-tab about managing MAC addresses, and tell the router to clone my Mac’s MAC address. Why didn’t I think of that myself? Well, because I didn’t see the sub-tab on the interface! But it was such a good idea that I pulled up the raw source of the web page and figured out what to type into the address bar to get to that sub-tab. No “clone MAC address” button to be found, but there was a form to enter a MAC address by hand; an “ifconfig en0” later, I had it in hand and was soon happily web surfing.

So Comcast can’t filter on MAC addresses, eh? Looks like bullshit to me. Or maybe not – at this point, I thought of rebooting the cable modem (which I’d done the previous evening but not that morning); after that, I could get online with the router broadcasting its own MAC address. Admittedly, I should have thought of doing that myself – I’ve seen that solve problems before – but Comcast should also have been aware of that failure mode and told me themselves to do that.

How do normal people deal with this? I’m annoyed at Comcast, but their initial phone support wasn’t too bad. But I don’t think that normal people will be obstinate enough to make it through the rest of the solution chain, or be geeky enough to read raw HTML and figure out MAC addresses if that’s what they have to do to get the router to behave. As is, I was very close to buying a new router because of this, when the router wasn’t the problem at all.

So: the scorecard. I’m mad at Comcast for screwing up my access, for not being aware of it, for not being able to fix it quickly, for not diagnosing the second problem, for shedding blame. I’m mad at Linksys for building an unreliable router, for long wait times, for insulting recorded messages, for trying to refuse to support their own product. I’m not thrilled at Scientific Atlanta, because I had to reboot the cable modem; I’m not thrilled with Apple, because it’s entirely possible that the web interface problem was a Safari bug instead of a Linksys bug. (I haven’t looked at the web page in detail.)

Grr. At least it’s over with. The rest of the weekend has been quite pleasant, at least…

two music sequencer toys

September 2nd, 2007

I ran across a couple of video demos of interesting music hardware recently. Both are basically sequencers with unusual user interfaces:

First, Tenori-On. (Found via GayGamer):

And Reactable. (Found via Lost Garden, which throws in some neat ideas of its own.)

I don’t have much to add; I’m curious how they work in practice. Especially Tenori-On: it seems to have a more limited set of choices than Reactable, but the output is also far more interesting to listen to / watch. Is it really that easy to produce good-sounding music from it, or is the video just the result of somebody who knows the device inside and out?

Incidentally, one side effect of my going through tons of others’ posts about videos is that it’s now clear that I prefer embedded videos to being asked to click on a link to get to a video; I’ll switch to embedding videos myself whenever possible, on those few occasions when I want to refer to one.

unexpected benefits of tagging

September 2nd, 2007

As I mentioned before, I’ve started tagging my saved items in Google Reader. I did this partly because of a general worry about the saved items getting out of control, but also because there were three specific categories of saved items that I was afraid were getting buried: items that I wanted to read but didn’t have enough time/focus to read right then, items that I’d commented on and wanted to read others’ comments on later, and items that I wanted to blog about in the future. I had another ten or so categories that I came up with, but I didn’t seriously expect to get through the items in them: their purpose was to make it clear that I had 60 or 80 or whatever videos saved up to watch, that I was clearly accumulating more faster than I was watching them, and that I should just delete them now.

The three short-term tags have served their purpose quite well; I’m definitely glad I took up tagging for that reason alone. What I didn’t expect, however, was the benefit I’ve gotten from the other categories. (Or at least the side effect – it’s not clear that my spending more time web surfing should be categorized as a benefit.) Namely: when I was finished reading through my normal feeds and didn’t feel like doing something else, I started going through my saved video items. (Because that was the tag in which I was accumulating the most new stuff at the time.) And what I found was that it actually wasn’t hard to go through the videos faster than I was accumulating them.

When I saw a blog post with a video, my mind had been thinking “that will put a dent in my blog reading time”. And it is true that watching a video takes longer than reading a normal blog post. But it doesn’t take that much longer: most of the time, I stop watching after 30 seconds or so, and most of the rest of the time it takes less than 5 minutes to watch the whole thing. (And curses to people who embed videos in a way that doesn’t show how long they are.) So it’s not that hard to go through 10 or 15 or 20 of them in half an hour; after doing that a few times, most of the category is cleared out.

It’s not completely cleared out: there are still 23 items, typically ones that will take a while to watch but that, I suspect, are worth it. Of those 23 items, however, a grand total of one of them is newer than my blog post announcing the advent of these queues. So I’m managing to keep the queues quite well under control.

Or at least that queue: if I’m concentrating on clearing out videos, that probably means that other queues are building up? To some extent, that was the case, so next I turned to the queue of flash games. Which, fortunately, hasn’t been growing at a fast pace – Game | Life hasn’t been writing about flash games very often recently – but there were still a lot that had built up. (Incidentally, if you’re looking for a flash game to play, check out the Game | Life logo!)

Flash games are potentially a worse problem than videos – most of the time, you know how long a video will take, but who knows how long it will take to evaluate a flash game? It turns out, however, that the answer is “not very long”: in most cases, it only takes a minute of play time for me to decide that I have something better to do. So now the queue is down to 15 games, of which only 2 are relatively new entries to the queue. (Most of the stragglers are adventure games.)

The flash games queue was actually rather disappointing: I enjoyed watching many of my saved videos, but I didn’t enjoy playing almost any of the saved games. I would like to think that there are good flash games out there that I’m missing, games that are equal in quality (if not duration or production values) to good commercial games, but I’m just not seeing it: there is currently only one flash game author whom I particularly like. (I should blog more about his games one of these days.)

So: two long queues attempted, two successes. Next, I turned to the category “many-links”, of blog posts referring to lots of other pages. The same story as before: yes, it takes longer to read such a post than a normal blog post. (To be specific, if such a post has N links, it takes about N times as long!) But it’s not an unmanageable amount of time, or anything: I’m still going through this category, so I have 43 items saved, but none of them are new, and I see no reason why I shouldn’t be able to get this category down to 0 items without too much work.

This is the one place where I’m using multiple tags. Say that, for example, I think the second link in one of these posts is worth blogging about. When that happens, I’ll replace the “many-links” tag by the “blog” tag. But that’s not good enough – it might take a month for me to have enough bloggable items saved up to make a post, and by then I’ll have forgotten which one I wanted to blog about. I could add a more specific tag, but that will screw up tag completion and such. What I’ve decided to do is to tag the post with both “blog” and a number (e.g. “blog, 2”), where that number is the number of the link that I want to blog about.

I still have a ways to go (I currently have a total of 194 tagged items, while ideally I’d reach a steady state of under 10), but the contours seem clear by now: once I break things into categories, the saved items start dwindling. I’m actually curious if the categories themselves matter: would I have the same effect if the tags I used were just the days of the week? Not entirely clear: maybe I learn something about how to efficiently process video posts by focusing on them for a little while, but maybe not. It may well be the case that some of the tags will prove resistant to this process: in particular, I’m worried about the “long” tag. I doubt it, though: my bet is that I’ll be down to 50 items in another couple of months, and will be down to 10 items in half a year.

Incidentally, Google has fixed one of the UI flaws that I whined about before: they now do tag completion based on the start of the tag, instead of completing from the middle of the tag. But they still insist on defaulting to showing me unread tagged items, which continues to make no sense to me.