Showing posts with label metrics.

Thursday, September 17, 2009

Getting Started

If you're at IMVU, the day 1 goal for a newly hired developer is to turn the crank through the entire build and release process, getting a bug fix into deployed software. (This is not a post about continuous deployment, although that's a fascinating topic in and of itself. And for those who are skeptical that such a process can coexist with a useful or valuable product, there's always Flickr.)

In the organizations I have experience with, time-to-first-deployed-bug-fix is not a useful way to look at how developers ramp up, because deployment is a rare event. Even a release every three months is rare compared to the checkout-build-bugfix-test-checkin cycle that forms the meat of day-to-day development.

In organizations that don't practice continuous deployment, time to first bug-fix check-in is a similar way of measuring one element of ramping up: how long it takes a new developer to turn the crank for one full revolution of the process. I've seen it take a day in organizations with a well-defined build process, over a week for a product with a hairy, complicated, and mostly manual install that took a lot of training, and several weeks in organizations that are scared to touch their code.

I'm particularly thinking about this because I'm about to be the New Guy, starting work at a new company. How long will it take me to file the first bug or write the first test? What will that tell me about the team and technology I'm going to immerse myself in?

Tuesday, August 18, 2009

Testing Faster

Last week someone told me how to be 8 times more productive at testing software.

No joke. Jeff Sutherland, in his Agile Bazaar talk "Practical Roadmap to a Great Scrum," stated:

One hour of testing time on the day a new feature is finished is as productive as a day of testing time 3 weeks later.

I stopped to think about the organization that would be measuring this productivity difference. What yardstick would a high-functioning agile team use to measure it? What would I look at for my team, I mused, to find out whether this were true for us? Data collection on this seemed like quite a hurdle, but there's a lot of information in a bug database if you know what you are trying to find out.

In my experience, if you simply measure test plan execution time, these figures don’t make sense. But if the metric is the cycle time from the beginning of testing to completing a feature, this is a believable statement to me.
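If your tracker or story tool can export dates, a rough check of that cycle time is easy to sketch. Here is a minimal sketch in Python, assuming a hypothetical export where each story record carries the date testing began and the date the story reached done; the field names are made up for illustration.

    from datetime import date

    # Hypothetical story records exported from a tracker; the field names
    # ("testing_started", "done") are illustrative, not from any particular tool.
    stories = [
        {"testing_started": date(2009, 7, 6), "done": date(2009, 7, 7)},
        {"testing_started": date(2009, 7, 6), "done": date(2009, 7, 27)},
    ]

    # Cycle time: elapsed days from the beginning of testing to the story being done.
    cycle_times = [(s["done"] - s["testing_started"]).days for s in stories]
    print("average cycle time (days):", sum(cycle_times) / len(cycle_times))

Comparing stories that were tested right away against stories that sat for weeks would give a rough, team-specific read on whether Sutherland's claim holds for you.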

An hour while the team is fully engaged on a project can telescope to a full day of turnaround time if someone has to be interrupted from other work to fix the bug. A bug fix that would have been done quickly if a tester had pulled over a developer and said “hey look at this” might take a week if the bug sits in a queue waiting to be assigned, or for a decision to be made in a triage meeting.

The better the process efficiency, that is, the smaller the gap between the ideal time for an operation and the actual elapsed time, the more a team can deliver. Keeping testing close to feature completion gets stories all the way to done in the iteration we want to deliver them in.

Wednesday, July 29, 2009

What makes a good bugtracking system?

Most bug tracking systems have built-in reports that give certain views of the data: all bugs currently in state "Open", for example. It's reasonable to expect those reports to be customizable: "all bugs currently in state Open that were created in July", for example. Sometimes, though, we need metrics that take a little more digging.
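To make that concrete, here is a minimal sketch in Python of the "Open bugs created in July" report, assuming a hypothetical export of bug records as simple dictionaries; the field names are made up for illustration.

    from datetime import date

    # Hypothetical exported bug records; the fields are illustrative only.
    bugs = [
        {"id": 101, "state": "Open", "created": date(2009, 7, 3)},
        {"id": 102, "state": "Closed", "created": date(2009, 7, 10)},
        {"id": 103, "state": "Open", "created": date(2009, 6, 28)},
    ]

    # "All bugs currently in state Open that were created in July."
    report = [b for b in bugs
              if b["state"] == "Open"
              and b["created"].year == 2009
              and b["created"].month == 7]
    print(report)  # only bug 101 qualifies

A good tracker lets you build this kind of filter in its own report UI; the point of the sketch is just how little logic is involved.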

In a bugtracking system I recently used, it was extraordinarily difficult to create a report that went back in history: "all bugs that have been moved to state Verified this past week". (In this system, "Verified" was not a terminal state, so there could be, and usually were, additional state changes after that.) Daily metrics reports of the bug database could reveal motion from one day to the next and could be aggregated into some pretty charts. But that one missing report meant a lot of careful note-taking while I was fixing bugs so that I could report status on them myself.
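If the system had exposed its state-change history, or let me export it, the missing report would have been a small query. Here is a minimal sketch in Python, assuming a hypothetical log of (bug id, new state, date) transition records; the data and names are made up for illustration.

    from datetime import date, timedelta

    # Hypothetical state-change log; in a real tracker this would come from an
    # audit or history table, which is exactly what was hard to get at.
    transitions = [
        (101, "Open", date(2009, 7, 20)),
        (101, "Verified", date(2009, 7, 27)),
        (101, "Closed", date(2009, 7, 30)),    # later changes don't matter here
        (102, "Verified", date(2009, 7, 15)),  # too old for this report
    ]

    def verified_in_past_week(transitions, today):
        week_ago = today - timedelta(days=7)
        return sorted({bug_id for bug_id, state, when in transitions
                       if state == "Verified" and week_ago <= when <= today})

    print(verified_in_past_week(transitions, date(2009, 7, 29)))  # [101]

Because the query looks at transitions rather than current state, it doesn't matter that "Verified" isn't a terminal state.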

For me, one of the marks of a really good bugtracking system is that I can get at the information in it in ways that the authors had not previously thought of but that turn out to be useful in the moment.