Wednesday, August 19, 2009

Write It Down Before You Go Home

I enjoy those days when I'm working side by side with the development team, and the bugs get fixed in minutes. I like having a dashboard of test results so I can see the progress as the check-ins are made and our feature gets closer and closer to done.

Usually this is before we've "officially" turned the code over to the QA team. While defects are certainly getting fixed, entering bug reports in the bug tracking system would just slow us down. But even in this ideal situation, one thing I've learned from experience is that I should not trust my memory when the day is done. The bug I'm working on at the end of the day, the one I've just figured out how to reproduce and we don't yet have a fix for -- take a few minutes and write it down in the bug tracking system.



Tuesday, August 18, 2009

Testing Faster

Last week someone told me how to be 8 times more productive at testing software.

No joke. In his Agile Bazaar talk, Practical Roadmap to a Great Scrum, Jeff Sutherland stated:

One hour of testing time on the day a new feature is finished is as productive as a day of testing time 3 weeks later.

I stopped to think about the organization that would be measuring this productivity difference. What yardstick would a high-functioning agile team use to measure it? What would I look at for my team, I mused, to find out whether this were true for us? Data collection on this seemed like quite a hurdle, but there’s a lot of information in a bug database if you know what you are trying to find out.

In my experience, if you simply measure test plan execution time, these figures don’t make sense. But if the metric is the cycle time from the beginning of testing to completing a feature, this is a believable statement to me.
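As a rough illustration of the kind of measurement a bug database could support, here is a minimal sketch comparing fix cycle time for bugs found right after a feature is finished versus bugs found weeks later. Everything in it is hypothetical: the records, the field layout, and the two-day cutoff for "found early" are assumptions made for the sake of example, not a real tracker schema.

```python
from datetime import date

# Hypothetical bug-tracker export: (feature_done, bug_opened, bug_closed).
# In practice these would come from a CSV export or the tracker's API.
bugs = [
    (date(2009, 8, 3), date(2009, 8, 3), date(2009, 8, 4)),   # found same day
    (date(2009, 8, 3), date(2009, 8, 4), date(2009, 8, 5)),   # found next day
    (date(2009, 8, 3), date(2009, 8, 24), date(2009, 9, 1)),  # found 3 weeks later
    (date(2009, 8, 3), date(2009, 8, 25), date(2009, 9, 3)),  # found 3 weeks later
]

def avg_cycle_days(rows):
    """Average calendar days from bug report to fix."""
    return sum((closed - opened).days for _, opened, closed in rows) / len(rows)

# Split on how long after feature completion each bug was reported
# (two days is an arbitrary cutoff chosen for this illustration).
early = [b for b in bugs if (b[1] - b[0]).days <= 2]
late = [b for b in bugs if (b[1] - b[0]).days > 2]

print(f"early-found bugs: {avg_cycle_days(early):.1f} days to fix")
print(f"late-found bugs:  {avg_cycle_days(late):.1f} days to fix")
```

With even invented numbers like these, the shape of the comparison is the point: the same query over a real tracker's history would show whether the gap Sutherland describes exists on your team.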

An hour while the team is fully engaged on a project can telescope to a full day of turnaround time if someone has to be interrupted from other work to fix the bug. A bug fix that would have been done quickly if a tester had pulled over a developer and said “hey look at this” might take a week if the bug sits in a queue waiting to be assigned, or for a decision to be made in a triage meeting.

The better the process efficiency (that is, the smaller the difference between the ideal time for an operation and the actual elapsed time), the more a team can deliver. Keeping testing time close to feature completion gets stories all the way to done in the iteration we want to deliver them.

Wednesday, August 12, 2009

Getting Good Feedback from Users

Today’s hot topic on the internet is “What’s this new Facebook Lite application?”

It looks like Facebook jumped the gun on asking users to evaluate their new Facebook Lite version. It wasn’t quite ready for prime time, it seems. But that’s only given people an opportunity to speculate about the purpose of the application.

The beta announcement that went to some users looked for very open-ended feedback:

We are building a faster, simpler version of Facebook that we call Facebook Lite. It’s not finished yet and we have plenty of kinks to work out, but we would love to get your feedback on what we have built so far.

I'm sure Facebook got to relive every tester's nightmare about a release that escapes too soon. But there's another interesting thought here. This message contains a succinct, one-sentence statement of intent. It would be interesting to know whether that is enough direction for users to give useful feedback. Would it be more useful to ask “Do you find this interface simpler to understand?” or “Do you get faster response time?”

Classically in usability studies, we go to great lengths to observe what people do with the software without prompting – with as few clues from the observer as possible. But the skilled observer is crucial here. In a usability study, that person is in a good position to interpret any difficulties with respect to the expected behavior of the user and the system together.

Untrained users are bad at accurately self-reporting their own behavior. That's why we have to train junior testers to express their observations in terms of expected versus actual behavior. I don’t expect most users to express themselves in those terms. I’d expect the vast majority of feedback to boil down to “like/don’t like,” which is of limited helpfulness. User feedback with a "like/don't like" indication and some reason why they feel that way can be quite useful, though, especially when aggregated over a large set of users. The folks at Facebook will have the opportunity to collect data about the expectations of the users and how Facebook Lite matches their actual requirements.
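As a small sketch of what aggregating that kind of feedback might look like, the snippet below tallies free-text reasons separately for each verdict; the records and reason strings are invented for illustration, not real Facebook Lite data.

```python
from collections import Counter

# Hypothetical feedback records: (verdict, free-text reason).
feedback = [
    ("like", "loads faster"),
    ("like", "loads faster"),
    ("dislike", "missing chat"),
    ("like", "simpler layout"),
    ("dislike", "missing chat"),
    ("dislike", "missing events"),
]

# Tally the reasons separately for each verdict; the most common
# reasons point at what users actually value or miss.
by_verdict = {"like": Counter(), "dislike": Counter()}
for verdict, reason in feedback:
    by_verdict[verdict][reason] += 1

print(by_verdict["like"].most_common(1))     # [('loads faster', 2)]
print(by_verdict["dislike"].most_common(1))  # [('missing chat', 2)]
```

A single "don't like" tells you little, but a few hundred "don't like: missing chat" entries start to look like a requirement.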

Thursday, August 6, 2009

Breaking the cycle of interrupted iterations

Tuesday, I posted about handling interrupted iterations. I've experienced agile teams getting into a pattern of interrupted iterations, and in that case, a more systemic remedy needs to be put in place.

In some cases, even when we don’t know what the customer issue is going to be, we are fairly certain that something is going to come up. If we have the historical data to predict a certain rate of customer issues, a workable strategy is to dedicate a resource. A person on the team is dedicated to dealing with customer issues, and does not participate in the normal task signup. It’s best to rotate the customer support job among the team members so no one becomes too out of date with current product development.

Sometimes even dedicating a resource does not work. If the person in the role does not have the background to do the work, he may have to call on teammates for help, and interrupt the iteration from within. Or the external situation may demand a faster response than one person can provide.

If this is a consistent pattern, the team is attempting to plan iterations that are longer than the predictable period in their company’s operations. And that’s too long! The solution may be shorter iterations. Most agile organizations I’ve worked with have attempted to hold their iterations to two or three weeks long, but a week might be the length of time that the outside world is willing to hold off its demands for immediate attention.

Even a one-week iteration, as short as that may seem to someone in a conventional software development cycle, might be too long. Mishkin Berteig writes about three-day iterations in a situation in which constant customer feedback was shaping the product. In one case study of an organization with severe trouble setting priorities, he counseled iterations of two days.

Two-day iterations may seem like shock therapy. Two uninterrupted days, if adhered to according to plan, might be more productive than a week of second-guessing and wondering.

Wednesday, August 5, 2009

Learning from Each Other

Here’s what I got to do yesterday: have a conversation with a fellow professional in which we shared our enthusiasm about what we do, and both learned things from each other. Brainstorming about how to approach a testing problem led us to interesting and valuable conversations with each other. It was a thoroughly enjoyable morning.

And that experience seems to be what Corey Haines has day by day on his Journeyman Tours. In his blog, On Being a Journeyman Software Craftsman, he shares those conversations. I’m writing this post to draw your attention to his interview with JB Rainsberger on the evolution from Test First Programming to Test Driven Design. [TDD – which I’ve seen spelled either Test Driven Design or Test Driven Development – is a topic I expect to address more in the future.] As a side note, JB Rainsberger wrote the tremendously useful volume, JUnit Recipes, a book that earned a spot on my desk when I was testing XML in a Java test harness.

Travelling around, talking to people whose professional work is something you’d like to learn from. How cool is that? Inspired by Corey Haines, I’ll try to carve out time for those learning opportunities.

Tuesday, August 4, 2009

The Interrupted Iteration

In descriptions of agile software methodology, the stakeholders agree during iteration planning on what will be accomplished in the iteration. Once the team is off and running on execution, any new ideas or new input will wait until a subsequent iteration. But once a product is in the hands of users, that’s only an approximation of reality. The sudden and unplanned needs of the customer in the wild can, and often do, upset our plans.

There are a number of strategies for dealing with the sudden customer emergency: First, we have the option to stop the clock on the iteration. Freeze operations where they are, take the team off to deal with the emergency, then come back and complete the iteration as intended. This might involve slipping the end date of the iteration, but if the iteration’s tasks had a unified concept and conclusion, it preserves the integrity of the original plan.

Second, there’s the option to replan. Add new tasks to cover the customer issues that have arisen and drop lower priority ones. If a release were already planned for the end of the iteration, this might be the most efficient way to roll out an emergency patch.

Another approach to the interrupted iteration is to cancel it. The team puts pencils down on the current work. The closer the team hews to the agile best practice of frequent check-ins of regression-free code, the less work will be lost for the future. The next step may be to do some work outside the iteration framework. In any case, when the team returns to the original work, they'll replan with new information.

While customer emergencies can and sometimes must be worked around, if escalating issues that interrupt the iteration become the norm then something’s structurally out of whack. Some avenues to readjust will be in the second part of this entry.

Monday, August 3, 2009

Staying Unstuck

Just as I was posting Getting Unstuck, another blogger I know was reflecting about “getting out of a rut”. For this writer, the interesting direction was not generating new ideas, but establishing new behaviors. As anyone who’s ever tried to change their own behavior knows, establishing a new habit takes much more than just identifying what the new practice should be.

Things that help to establish new routines can include the following:

  • Have a buddy. I like to learn how to do something, such as use a new tool, and then find someone on my team to teach it to. Teaching someone else to do something builds understanding. And when I forget to use my new tool, I now have someone who will remind me.
  • Make it part of the routine. Redefine the process to depend on the new behavior -- for example, peer review before check-in. Then anyone can see when the rule is being violated. This works best for a new behavior we want to put on the critical path for product release.
  • Make the desired behavior visible. The example that comes to mind on this is documentation of test processes and the like that’s internal to the team. When I added a “what’s been documented this week?” section to weekly status reports, it was a lot easier to get documentation tasks done.
  • Match rewards with the desired behavior. What we reward we do, even if the only reward is recognition. But sometimes there’s a disincentive to the new behavior that has to be addressed.
  • Understand why. Have an answer to the question, why are we doing this? And assess – is it working? Knowing that a change brings us closer to a desired goal is a powerful motivator.