You are viewing an old revision of this post, from June 5, 2010 @ 21:03:09.

As I mentioned earlier, I’ve been having a lot of fun at work playing around with JavaScript and CSS. But there is one area where I’ve been failing miserably: for the first time in years, I’m writing very few tests while programming.

I’m working on a legacy code base, with all that entails, but that hasn’t stopped me in the past. So why am I not writing as many tests this time? For that matter, is it possible that it’s correct for me to not be writing so many tests?

There is, I think, a good reason for some of this. A lot of what I’m doing is arranging graphical elements so they look good on screen. And tests wouldn’t help with that: they’re great for getting logic right, but with presentation, all they’d do is pin down unimportant details. So if I later decide that a margin should be seven pixels wide instead of five, having a test would be pure overhead. (Incidentally, placing images properly is another thing that I’m surprised how much I enjoy—I really like participating in a process that leads to a much better-looking page, even though the art team is doing most of the hard work.)

But some of my lack of testing stems from unfamiliarity with unit testing in JavaScript. I’d done my due diligence, and decided that JsTestDriver is a pretty good unit testing framework: it lets me kick off unit tests in multiple browsers simultaneously from the command line, so I can make sure my code runs in production environments while not having to leave my IDE.
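For readers who haven't seen JsTestDriver's style: tests are plain constructors registered with a global `TestCase` function, and assertions like `assertEquals` are globals provided by the runner. The sketch below is hedged: the two-line shim at the top stands in for the framework so the example runs standalone, and `applyDefaultMargin` is a function invented purely for illustration.

```javascript
// Minimal shim so this sketch runs outside JsTestDriver; in the real
// framework, TestCase and assertEquals are globals the runner provides.
function TestCase(name) { return function () {}; }
function assertEquals(expected, actual) {
  if (expected !== actual) { throw new Error(expected + " !== " + actual); }
}

// A JsTestDriver-style test case: a constructor with test* methods.
var MarginTest = TestCase("MarginTest");

MarginTest.prototype.testDefaultMargin = function () {
  var box = { style: { margin: "" } };   // stand-in for a DOM element
  applyDefaultMargin(box);
  assertEquals("5px", box.style.margin);
};

// Code under test (invented for this example):
function applyDefaultMargin(el) { el.style.margin = "5px"; }

// Run the test by hand, since we're outside the framework here:
new MarginTest().testDefaultMargin();
```

In real use, a `jsTestDriver.conf` file lists the source and test files to load, and the command-line runner pushes them to every captured browser at once.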

So I wrote a few JsTestDriver tests as a proof of concept. At which point I ran into my first roadblock: without trying to do anything tricky, the first four tests that I wrote all failed in Internet Explorer! Which was useful information—I learned that querying CSS properties is not likely to give the results I expect in IE—but the functionality in question actually worked fine in IE; I was doing the querying to check results in tests, not as part of the product code.
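The divergence here is the classic one: pre-9 versions of IE expose computed styles through the nonstandard `element.currentStyle`, while standards browsers use `window.getComputedStyle`. A minimal sketch of the usual fallback pattern, with simulated element objects so it runs outside a browser:

```javascript
// Cross-browser computed-style reader: old IE exposes currentStyle,
// standards browsers expose getComputedStyle, and inline style is the
// last resort. (The property values returned can still differ by browser.)
function readStyle(el, prop) {
  if (el.currentStyle) {                          // old IE
    return el.currentStyle[prop];
  }
  if (typeof getComputedStyle === "function") {   // standards browsers
    return getComputedStyle(el, null)[prop];
  }
  return el.style[prop];                          // inline style fallback
}

// Simulated elements, so the sketch runs anywhere:
var ieLike = { currentStyle: { marginLeft: "5px" }, style: {} };
var plain  = { style: { marginLeft: "7px" } };
console.log(readStyle(ieLike, "marginLeft")); // "5px"
console.log(readStyle(plain, "marginLeft"));  // "7px"
```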

Since then, I’ve written a few more unit tests, when I didn’t understand why a piece of JavaScript logic wasn’t doing what I expected, and it has saved me time on those occasions. But I haven’t even begun to get into a good TDD rhythm at work.

So I decided to experiment at home to understand how much of this had to do with JavaScript, how much with my ignorance, how much with the kind of layout-heavy work I’d been doing, how much with legacy code, and how much with Internet Explorer. (Which I’m certainly not worrying about at home!) Specifically, I wrote an animation of moving boxes; there’s enough layout in that code for it not to be too unrepresentative of front-end work, but it’s also not about getting images positioned in the most attractive locations. (In fact, there aren’t any images at all, as will be obvious if you take a look; none of this new HTML5 stuff, either: the animation is all done by manipulating CSS positions and dimensions on a timer.)
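The timer-based technique the paragraph describes can be sketched in a few lines; the names here are invented, not taken from my actual animation code. The interpolation is pulled out as a pure `tween` function, which is exactly the kind of piece that's easy to unit test, while the `setInterval` runner does the DOM-facing work:

```javascript
// Where the box should be after `fraction` of the animation (pure, testable).
function tween(from, to, fraction) {
  return from + (to - from) * Math.min(fraction, 1);
}

// Timer-driven runner: updates style.left every tickMs until done,
// the pre-CSS-transition way of animating position.
function animateLeft(el, from, to, steps, tickMs, done) {
  var step = 0;
  var timer = setInterval(function () {
    step += 1;
    el.style.left = tween(from, to, step / steps) + "px";
    if (step >= steps) {
      clearInterval(timer);
      if (done) { done(); }
    }
  }, tickMs);
}
```

Separating the pure math from the timer is also what makes the tests fast: you can assert on `tween` directly without waiting for real clock ticks.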

The good news: I could, indeed, get into a TDD rhythm. Looking at the code right now, I have 374 lines of product code and 294 lines of test code; that amount of test code looks a little small (or, actually, given what the animation does, that amount of product code looks a little high…), but it’s not a ridiculously low proportion of test code.

More good news: a couple of nice generic abstractions popped out of the code as I was writing it (including one which got better as I added more functionality), and there are a few more abstractions that are specific to the animation that are waiting to be teased out of it. So it’s helped me improve my feel for good JavaScript code. (Though I certainly have a ways to go there…)

Indifferent news: I’m still not sure that all the tests are really getting at the core of what I’m doing. A fair amount of the thought in the programming exercise involved fiddling with CSS classes and creating HTML elements to give the illusion of a single box that is stacked up, then moving, then entering the stack again, while behind the scenes there are actually two HTML elements involved. (One positioned in the normal document flow, one positioned absolutely.) And the tests for that part of the code have too little to do with the desired visual effect and too much to do with implementation details. (Though not all my unit tests involving animations were bad: e.g. the unit tests for the movement of the element once I’d lifted it out of the normal document flow were perfectly reasonable.)
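For concreteness, the two-element trick looks roughly like this sketch (all names invented, and plain objects standing in for DOM elements): an absolutely positioned stand-in appears at the box's old coordinates while the in-flow original is hidden but keeps its slot in the layout.

```javascript
// Lift a box "out of the flow": create an absolutely positioned ghost
// at the box's current coordinates and hide the in-flow original, which
// still occupies its slot in the stack.
function liftOutOfFlow(el, rect) {
  var ghost = { style: {} };          // stand-in for a freshly created element
  ghost.style.position = "absolute";
  ghost.style.left = rect.left + "px";
  ghost.style.top = rect.top + "px";
  el.style.visibility = "hidden";     // invisible, but keeps its layout slot
  return ghost;
}

var box = { style: {} };
var ghost = liftOutOfFlow(box, { left: 40, top: 120 });
// ghost is now the element to animate; box silently holds the place.
```

A unit test can pin down the ghost's starting coordinates easily enough; what it can't easily express is whether the hand-off between the two elements *looks* seamless, which is the complaint above.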

And then the bad news: I’m used to unit tests that run in isolation and that I can feel confident are testing something. (Though, being a fallible programmer, they may not always be testing what I intend!) And, it turns out, JsTestDriver fails on both counts. (I don’t actually think that’s JsTestDriver’s fault, I think it has more to do with the way JavaScript implementations work.)

I typically ran my tests in both Safari and Firefox. And, on more than one occasion, a test would just fail repeatedly in one of the browsers, for no apparent reason. The first couple of times this happened, I would curse the lack of cross-browser compatibility, check that the product code behaved as I expected, and reluctantly proceed. But then I noticed that, if I actually killed the browser where the tests were failing and restarted it, the tests would start working again! So it seems that it’s rather difficult for JsTestDriver to completely wipe out its state from test run to test run. (I’m not sure if I tested just closing the window where JsTestDriver was running instead of exiting the browser completely; so maybe unit test frameworks where you have to manually reload a page in every browser each run wouldn’t suffer from this problem.)
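The mechanism behind this is easy to illustrate: a long-lived browser session means globals survive between runs, so any test that touches shared state becomes order-dependent unless it resets that state itself. A hedged sketch (the `setUp`/`testIncrement` names mimic JsTestDriver's per-test `setUp` hook, but the example is invented):

```javascript
// Pretend this is shared page state that survives between test runs
// in a long-lived captured browser.
var counter = 0;

// JsTestDriver calls a setUp method before each test; resetting shared
// state there is the defense against leakage.
function setUp() { counter = 0; }

function testIncrement() {
  counter += 1;
  if (counter !== 1) { throw new Error("leaked state: counter = " + counter); }
}

// With setUp, repeated runs in the same session stay green:
setUp(); testIncrement();
setUp(); testIncrement();
// Without the setUp calls, the second run would see counter === 2 and fail.
```

That defense only covers state you know about, though; state the framework or the page accumulates behind your back is exactly what restarting the browser was clearing.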

Worse, there were times when I would make a syntax error and a whole suite of tests would silently stop running in one (or both?) of the browsers. That’s really disturbing: I expect my code to either try to run or give me some indication that it might not be running, but that didn’t always happen. I wasn’t in the habit of keeping Firebug open in the browser to see if I got errors there; next time this happens, I’ll do that, but I shouldn’t have to, given that many JavaScript errors did make it out to the command line where I was invoking JsTestDriver. I’ve never had something like this happen to me before: I could chalk it up to dynamic language weirdness, but Ruby is pretty permissive and I’ve never seen anything like that in my Ruby tests.

An interesting experience; I definitely want to spend more time playing around with JavaScript at home. (Anybody have JavaScript code they want me to write?) I’m less sure how I’ll apply this at work; though certainly, when I look for smaller classes to extract, I should make them testable and actually add tests for them. But I’m having enough fun (and being useful enough) working on presentation issues that I’m not going to stress if I don’t spend a lot of time doing that.


James Shore has also blogged recently about his experiences with test-driving JavaScript. And, though he was using a different unit testing framework (QUnit instead of JsTestDriver), he ran into the same problem that I did of tests silently failing to run. (His reaction: “The mind boggles.”)
