At Agile Open Northern California 2012, I led a session titled “Worries as Inventory: Bug Trackers, Lean, and GTD”. I put up my notes on the conference wiki, but I’m reproducing them here for archival purposes as well. Many thanks to the people who participated in the session; they did a wonderful job of getting something concrete out of the vague idea I had going into it.

Session participants: David Carlton (session organizer); Jeff Isenberg; Brad Neiman; Super Aaron.

We started with a brief discussion of GTD. GTD’s main point: if there’s something you’re worried that you’ll forget about or won’t do, then:

  1. Get it out of your head by writing it down.
  2. Decide explicitly whether to do it now or to not do it now.

This is similar to the ways in which some people use bug trackers (JIRA, Bugzilla, etc.): if there’s a bug, a feature request, etc., then file it in a bug tracker.

The problem with filing all of that is that it creates inventory; and, as lean teaches us, inventory has a cost. Here, it’s a cost in terms of causing people to spend time interacting with the bug tracker, worrying about the large numbers of bugs assigned to them, etc. Can we find approaches towards minimizing those costs while keeping the benefits of GTD’s approach?

GTD suggests keeping a list of things you’re not going to do. This is good – it avoids having them get in your way most of the time. But it’s bad because those items can still show up at inopportune times, e.g. when doing searches. Ideally, you’d be able to find those items when you want them but not stumble across them when you don’t; we discussed filtering as a possible approach to that.

One way to avoid one source of excess worry here is to never assign tasks to people in advance: just leave those entries as unassigned. Create a next task queue, tell people to pull items off of that queue, and only then to assign those items to themselves. This avoids the problem of having dozens / hundreds of items assigned to individuals.
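A minimal sketch of that pull model (hypothetical names, not something we wrote in the session): tasks enter the queue with no assignee, and an assignment is recorded only at the moment someone pulls.

```python
from collections import deque

class NextTaskQueue:
    """Queue of unassigned tasks; assignment happens only at pull time."""

    def __init__(self):
        self._tasks = deque()
        self.assignments = {}  # task -> person, filled in only on pull

    def add(self, task):
        """File a task with no assignee, avoiding per-person backlogs."""
        self._tasks.append(task)

    def pull(self, person):
        """Pop the next task and assign it to `person`; None if empty."""
        if not self._tasks:
            return None
        task = self._tasks.popleft()
        self.assignments[task] = person
        return task
```

In Jira terms, this corresponds to leaving the Assignee field empty until a developer actually picks the issue up.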

Aaron suggested having the team pull work into the sprint, creating a bucket in Jira at the start of a sprint. In his context, there’s only one team. Jeff is working in a multi-team situation; his Jira instance has a custom field to indicate the scrum team. Jeff says that PMs have a psychological need to assign a feature to somebody when coming up with it; this is okay as long as they assign that feature to a team instead of an individual.

We also talked about bugs that are found in a sprint. In general, we liked the idea of having the tester add a comment to the story bug, talk to the developer, and reopen that story bug – don’t open a new bug unless you’re making an explicit choice to defer fixing the defect until a future sprint. One antipattern that this can run into is when testers are evaluated based on the number of bugs they file; don’t do that! The goal for everybody should be to have tasks flow smoothly through the system; ideally, you’d have early QA/dev communication that leads to defects not even being introduced in the first place.
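The rule we liked can be stated as a tiny decision function (a sketch with invented names, just to make the default explicit): reopening the story is the default, and a new bug exists only as the record of an explicit deferral.

```python
def handle_defect(story_id, defer_to_future_sprint=False):
    """Default: comment on and reopen the original story's issue.

    Open a brand-new bug only when explicitly deferring the fix
    to a future sprint, and link it back to the story."""
    if defer_to_future_sprint:
        return {"action": "open_new_bug", "linked_story": story_id}
    return {"action": "reopen_story", "story": story_id, "add_comment": True}
```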

With this, we felt like we had a pretty good handle on Jira queues downstream of product management; but it can lead to PMs having hundreds of feature requests on their queues.

For the queue immediately upstream of dev, we suggested having a WIP limit (about two sprints’ worth), with a definition of ‘Ready’ required for something to be in this queue. But what about items further out than that? If they’re on a list, that presence is distracting; if they’re not on a list, that absence is also distracting!
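One way to picture that upstream queue (again a hypothetical sketch, with a made-up ‘Ready’ check): an item is admitted only if it passes the Definition of Ready and the queue is under its WIP limit.

```python
class ReadyQueue:
    """Queue just upstream of dev: WIP-limited, Definition-of-Ready gated."""

    def __init__(self, wip_limit, ready_checks):
        self.wip_limit = wip_limit        # e.g. about two sprints' worth
        self.ready_checks = ready_checks  # predicates making up "Ready"
        self.items = []

    def try_add(self, item):
        """Admit `item` only if the queue has room and the item is Ready."""
        if len(self.items) >= self.wip_limit:
            return False
        if not all(check(item) for check in self.ready_checks):
            return False
        self.items.append(item)
        return True
```

With checks like `lambda item: bool(item.get("acceptance_criteria"))`, anything further out than the WIP limit simply stays on an upstream list instead of cluttering the dev queue.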

GTD suggests that it should be on some list; whose list should it be on, and where is that list? The list should be owned by product management; we weren’t sure the list should be in Jira – maybe Jira is tactical, not strategic.

The next question: how often should we review that list? E.g. is the GTD notion of a weekly review relevant? In general, we felt that the more distant a feature is, the less detail we should use to specify it, and the less frequently we should review it. Value stream maps could help here: we want to remove rework loops, and that applies both right before releasing stories (rework loops involving dev / QA / customer) and right at the start of the value stream (rework loops involving customers / PMs / dev).

We asked to what extent the weekly review is a psychological need. Our tentative answer was that, if you have queue review patterns that you trust, then you can relax that: if you work in, say, 2-week iterations, review the queue every two weeks instead of every week (focusing your attention on a time period twice the sprint length, 4 weeks in this example). Though weekends are also a good subconscious reset period, so there’s something to be said for weekly reviews; reviewing more often than that probably isn’t too helpful.

And add in reviews at a slower cadence that are more strategic: think about your overall feature roadmap at a monthly or quarterly cadence, perhaps. And about your set of products quarterly or annually.
