Tuesday, August 18, 2009

Testing Faster

Last week someone told me how to be 8 times more productive at testing software.

No joke. In his Agile Bazaar talk, "Practical Roadmap to a Great Scrum," Jeff Sutherland stated:

One hour of testing time on the day a new feature is finished is as productive as a day of testing time 3 weeks later.

I stopped to think about the organization that would be measuring this productivity difference. What yardstick would a high-functioning agile team use to measure it? What would I look at for my team, I mused, to find out if this were true for us? Data collection on this seemed like quite a hurdle, but there's a lot of information in a bug database if you know what you are trying to find out.

In my experience, if you simply measure test plan execution time, these figures don’t make sense. But if the metric is the cycle time from the beginning of testing to completing a feature, this is a believable statement to me.
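To make that concrete for my own team, I imagined the measurement would be a small script against an export of our tracker, comparing when testing started on each story with when the story was called done. The sketch below is only that, a sketch: the stories.csv file, the column names, and the timestamp format are stand-ins for whatever your bug database actually gives you.

```python
import csv
from datetime import datetime
from statistics import median

# Hypothetical export from the story/bug tracker: one row per story.
# File name, column names, and timestamp format are all made up.
DATE_FMT = "%Y-%m-%d %H:%M"

def hours_between(start, end):
    """Elapsed hours between two timestamp strings."""
    delta = datetime.strptime(end, DATE_FMT) - datetime.strptime(start, DATE_FMT)
    return delta.total_seconds() / 3600.0

with open("stories.csv", newline="") as f:
    cycle_times = [
        hours_between(row["testing_started"], row["story_done"])
        for row in csv.DictReader(f)
        if row["testing_started"] and row["story_done"]
    ]

print(f"stories measured: {len(cycle_times)}")
print(f"median testing-to-done cycle time: {median(cycle_times):.1f} hours")
```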

An hour while the team is fully engaged on a project can telescope to a full day of turnaround time if someone has to be interrupted from other work to fix the bug. A bug fix that would have been done quickly if a tester had pulled over a developer and said “hey look at this” might take a week if the bug sits in a queue waiting to be assigned, or for a decision to be made in a triage meeting.

The better the process efficiency, meaning the smaller the gap between the ideal time for an operation and the actual elapsed time, the more a team can deliver. Keeping testing time close to feature completion gets stories all the way to done in the iteration we want to deliver them.
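Put as a ratio, process efficiency is just hands-on time divided by elapsed calendar time. With made-up numbers for the two scenarios above, where the fix itself takes about an hour of real work either way and only the waiting changes, the gap is easy to see:

```python
def process_efficiency(hands_on_hours, elapsed_hours):
    """Fraction of the elapsed time that was actual hands-on work."""
    return hands_on_hours / elapsed_hours

# Illustrative numbers only: an hour of real work in both cases.
same_day = process_efficiency(hands_on_hours=1, elapsed_hours=8)       # tester pulls a developer over
weeks_later = process_efficiency(hands_on_hours=1, elapsed_hours=120)  # bug waits in a queue and a triage meeting

print(f"tested the day the feature finished: {same_day:.1%} efficient")
print(f"tested weeks later:                  {weeks_later:.1%} efficient")
```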

1 comment:

  1. I was there (Hey, Rachel!), and maybe I can fill in the details a little. I've heard that kind of rhetoric before, and being the critical listener I am, I made them explain it.

    The best measure to use for these kinds of comparisons is not how many bugs you find per iteration, but how many bugs you *miss* per iteration. If, in iteration i+1, you find n bugs that actually existed in iteration i, then there's a problem. Maybe there's a manual test that was either insufficiently documented or insufficiently executed. Or maybe there was no test for that bug in the test plan for iteration i (iteration i+2 better have it though!). The lower n is, the better your testing is performing.

    Dave K
