Wednesday, November 4, 2009

The Brand Pyramid

Today I attended a presentation about branding strategy that presented a pyramid like this:
I later learned that it's called the Customer-Based Brand Equity pyramid. The bottom level, the base of the pyramid, is your company's definition. The next layer represents the meaning that definition has for your customers: all the implied attributes and qualities that come with your brand. Next come the feelings and judgments they form about that meaning, and last of all, the tiny triangle up top represents their response -- the relationship your customers have with you. This struck me as an interesting image in more than one way.

First of all, it graphically shows how the population shrinks as you go through the marketing process. You broadcast a message, some percentage of your targets ignore it completely, some further percentage doesn't think it has anything to do with them, and so forth. Finally some tiny fraction of the initial audience takes up the call to action and converts to a customer.

Secondly, I was struck by the parallels with models of communication such as this:
If you imagine our pyramid above twisted back on itself as a kind of Möbius strip, it might look like the map on the left.

"Relationship" in the pyramid and "response" in the communication diagram both rely on interaction taking place. The diagram is like the pyramid, but with feedback.

And feedback is key here. The big public lesson about branding in the 21st century is that your brand is what your customers think it is, and the corporate message is only part of that. Every tweet about your product, every website titled $your_company_sucks.net, is part of the big conversation about your brand. And not just your customers but your future customers, your ex-customers, and your competitors get to join in.

It would be disingenuous for me to say that the people who gave the very short presentation I saw didn't get that point. In fact, they spoke quite a bit about external input and customer feedback as part of the branding strategy. But that brings me to my third point. Every 1000 words about public input was offset by the graphic in front of me of a pyramid -- rigid, hierarchical, one-way. Like doctors who are their own worst patients, or the cobbler's proverbial shoeless children, just because you're in the marketing department doesn't mean you're in control of your own message.


Friday, September 25, 2009

Positive Deviance

Surgeons following the trauma logs began to see, for example, a dismayingly high incidence of blinding injuries. Soldiers had been directed to wear eye protection, but they evidently found the issued goggles too ugly. As one soldier put it, "They look like something a Florida senior citizen would wear." So the military bowed to fashion and switched to cooler-looking Wiley-brand ballistic eyewear. The rate of eye injuries decreased markedly.
--- Better: A Surgeon's Notes on Performance, Atul Gawande, p. 64

Few books laden with statistics and medical terminology are compelling enough to keep you from putting them down for dinner. Better: A Surgeon's Notes on Performance, by Atul Gawande, discusses the reaches and limits of medical performance -- including the very human limitations of doctors and their ability to make good decisions. How do doctors improve outcomes for patients? An endless array of tiny choices accrues to make good medical practice a demanding art.

This book is a worthwhile read, and not just because it is quick but engrossing. The dilemmas of how to measure and improve the performance of the practice of medicine are relevant to the headlines we see each day about health care reform. And the truth is, doctors' performance, like most other human performance, is spread out roughly along a bell curve. Much as we'd like to believe that our doctors are, like the children in Lake Wobegon, all above average, they are not. Even as overall medical outcomes improve, this characteristic of the system holds true. Gawande's concern is to illuminate the ways in which doctors can encourage the best outcomes and thus improve their performance.

Don Berwick believes that the subtleties of high-performance medical practice can be identified and learned. But the lessons are hidden because no one knows who the high performers really are. Only if we know the results from all can we identify the positive deviants and learn from them. If we are genuinely curious about how the best achieve their results, Berwick believes, then the ideas will spread. The test of Berwick's theory is now under way. In December 2006, the Cystic Fibrosis Foundation succeeded in persuading its centers to make public their individual results ...
-- p. 226

Gawande gives advice to medical students about how to put themselves in the best position to become those positive deviants:
  1. Ask unscripted questions. Don't treat your patients or your colleagues like cogs in the machine; look for what makes each one unique.
  2. Don't complain. It drags down the morale of the team, and wears away the belief that improvement is possible.
  3. Count something. Approach the world around you through the eyes of a scientist. "If you count something you find interesting," he writes, "you will learn something interesting."
  4. Write something. Communicate. Add your observations to the world.
  5. Change. Recognize the inadequacies in what you do and be willing to try something new.
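Gawande's "count something" advice carries over to software work directly. As a sketch (the bug records and field names here are invented for illustration), a tester might tally reopened bugs by component to find where fixes aren't sticking:

```python
from collections import Counter

# Hypothetical bug records -- in practice these would come from
# your bug tracker's export or API.
bugs = [
    {"component": "checkout", "reopened": True},
    {"component": "search",   "reopened": False},
    {"component": "checkout", "reopened": True},
    {"component": "login",    "reopened": False},
    {"component": "checkout", "reopened": False},
]

# Count reopened bugs per component: a small, concrete thing to measure.
reopened = Counter(b["component"] for b in bugs if b["reopened"])
for component, count in reopened.most_common():
    print(f"{component}: {count} reopened bug(s)")
```

What you count matters less than counting it consistently; the interesting part is what the tally makes you curious about.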
The work of a software engineer, unlike that of a doctor, very rarely puts a human life in the balance. The risks and responsibilities we have in society are very different. Still, we face the challenge of knowing how we are doing and whether we are doing the best job that we can. Attempting to become positive deviants is not a bad place to start.



Thursday, September 17, 2009

Getting Started

If you're at IMVU, the day 1 goal for a newly hired developer is to turn the crank through the entire build and release process, getting a bug fix into deployed software. (This is not a post about continuous deployment, although that's a fascinating topic in and of itself. And for those who are skeptical that such a process can coexist with a useful or valuable product, there's always Flickr.)

In the organizations I have experience with, time-to-first-deployed-bug-fix is not a useful way to look at ramping up new developers. Deployment is a rare event. Even if it happens every three months, that's rare compared to the checkout-build-bugfix-test-checkin cycle that forms the meat of development.

In organizations that don't practice continuous deployment, time to first bug-fix check-in is a similar way of measuring one element of ramping up -- how long it takes a developer to be able to turn the crank for one revolution of the process. I've seen it take anywhere from a day, in organizations that had a well-defined build process, to over a week for a product with a hairy, complicated, and mostly manual install that took a lot of training -- and much longer, several weeks even, in organizations that are scared to touch their code.
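Measuring this doesn't take much machinery. Here's a minimal sketch of the metric, assuming you can pull a hire date and a first bug-fix check-in date for each developer (the names and dates below are made up):

```python
from datetime import date

# Hypothetical records: each developer's hire date and the date of
# their first bug-fix check-in.
new_hires = [
    ("alice", date(2009, 6, 1), date(2009, 6, 2)),    # well-defined build: a day
    ("bob", date(2009, 6, 15), date(2009, 6, 24)),    # hairy manual install: over a week
]

for name, hired, first_fix in new_hires:
    days = (first_fix - hired).days
    print(f"{name}: first bug-fix check-in after {days} day(s)")
```

In real life the check-in dates would come out of version control history rather than a hand-built list.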

I'm particularly thinking about this because I'm about to be the New Guy, starting work at a new company. How long will it take me to file the first bug or write the first test? What will that tell me about the team and technology I'm going to immerse myself in?

Wednesday, August 19, 2009

Write It Down Before You Go Home

I enjoy those days when I'm working side by side with the development team, and the bugs get fixed in minutes. I like having a dashboard of test results so I can see the progress as the checkins are made and our feature gets closer and closer to done.

Usually this is before we've "officially" turned the code over to the QA team. While defects are certainly getting fixed, entering bug reports in the bug tracking system would just slow us down. But. Even in this ideal situation, one thing I've learned from experience is that I should not trust my memory when the day is done. The bug I'm working on at the end of the day, the one I've just figured out how to reproduce and we don't yet have a fix for -- take a few minutes and write it down in the bug tracking system.



Tuesday, August 18, 2009

Testing Faster

Last week someone told me how to be 8 times more productive at testing software.

No joke. Jeff Sutherland, in his Agile Bazaar talk Practical Roadmap to a Great Scrum, stated:

One hour of testing time on the day a new feature is finished is as productive as a day of testing time 3 weeks later.

I stopped to think about the organization that would be measuring this productivity difference. What yardstick would a high-functioning agile team use to measure it? What would I look at for my team, I mused, to find out if this were true for us? Data collection on this seemed like quite a hurdle, but there’s a lot of information in a bug database if you know what you are trying to find out.

In my experience, if you simply measure test plan execution time, these figures don’t make sense. But if the metric is the cycle time from the beginning of testing to completing a feature, this is a believable statement to me.
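That cycle-time reading of the claim is something a bug database can actually answer. Here is a hypothetical sketch, assuming the tracker records when the feature was declared done, when testing found the bug, and when the fix landed (all field names and dates are invented):

```python
from datetime import date

# Hypothetical bug records: when the feature was declared done, when
# testing found the bug, and when the fix landed.
bugs = [
    {"done": date(2009, 8, 3), "found": date(2009, 8, 3), "fixed": date(2009, 8, 4)},
    {"done": date(2009, 8, 3), "found": date(2009, 8, 24), "fixed": date(2009, 8, 31)},
]

for b in bugs:
    delay = (b["found"] - b["done"]).days   # how long after "done" testing began
    cycle = (b["fixed"] - b["found"]).days  # cycle time from find to fix
    print(f"tested {delay} day(s) after done: {cycle} day(s) to fix")
```

Plotting testing delay against fix cycle time across a real bug database would show whether Sutherland's ratio holds for your team.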

An hour while the team is fully engaged on a project can telescope to a full day of turnaround time if someone has to be interrupted from other work to fix the bug. A bug fix that would have been done quickly if a tester had pulled over a developer and said “hey look at this” might take a week if the bug sits in a queue waiting to be assigned, or for a decision to be made in a triage meeting.

The better the process efficiency, the smaller the difference between ideal time for an operation and actual elapsed time, the more a team can deliver. Keeping testing time close to feature completion gets stories all the way to done in the iteration we want to deliver them.

Wednesday, August 12, 2009

Getting Good Feedback from Users

Today’s hot topic on the internet is “What’s this new Facebook Lite application?”

It looks like Facebook jumped the gun on asking users to evaluate their new Facebook Lite version. It wasn’t quite ready for prime time, it seems. But that’s only given people an opportunity to speculate about the purpose of the application.

The beta announcement that went to some users looked for very open-ended feedback:

We are building a faster, simpler version of Facebook that we call Facebook Lite. It’s not finished yet and we have plenty of kinks to work out, but we would love to get your feedback on what we have built so far.

I'm sure Facebook got to recapitulate every tester's nightmare about a release that escapes too soon. But there's another interesting thought here. This message contains a succinct, one-sentence statement of intention. It would be interesting to know whether that is enough direction for users to give useful feedback. Would it be more useful to ask “Do you find this interface simpler to understand?” or “Do you get faster response time?”

Classically in usability studies, we go to great lengths to observe what people do with the software without prompting – with as few clues from the observer as possible. But the skilled observer is crucial here. In a usability study, that person is in a good position to interpret any difficulties with respect to the expected behavior of the user and the system together.

Untrained users are bad at accurately self-reporting their own behavior. That's why we have to train junior testers to express their observations in terms of expected versus actual behavior. I don’t expect most users to express themselves in those terms. I’d expect the vast majority of feedback to boil down to “like/don’t like” which is of limited helpfulness. User feedback with a "like/don't like" indication and some reason why they feel that way can be quite useful though, especially when aggregated over a large set of users. The folks at Facebook will have the opportunity to collect the data about the expectations of the users and how Facebook Lite matches to their actual requirements.
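Aggregating that kind of feedback is straightforward once it's structured as a verdict plus a reason. A small sketch with invented feedback records:

```python
from collections import Counter, defaultdict

# Hypothetical feedback records: a like/don't-like verdict plus a
# free-text reason supplied by the user.
feedback = [
    ("like", "loads faster"),
    ("dislike", "missing chat"),
    ("like", "simpler layout"),
    ("dislike", "missing chat"),
    ("like", "loads faster"),
]

# Tally overall verdicts, and the reasons given for each verdict.
verdicts = Counter(v for v, _ in feedback)
reasons = defaultdict(Counter)
for verdict, reason in feedback:
    reasons[verdict][reason] += 1

print(verdicts)
for verdict, counts in reasons.items():
    print(verdict, "->", counts.most_common(1))
```

Even crude tallies like this surface the dominant reason behind each verdict, which is far more actionable than raw like/don't-like counts.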

Thursday, August 6, 2009

Breaking the cycle of interrupted iterations

Tuesday, I posted about handling interrupted iterations. I've experienced agile teams getting into a pattern of interrupted iterations, and in that case, a more systemic solution needs to be put in place.

In some cases, even when we don’t know what the customer issue is going to be, we are fairly certain that something is going to come up. If we have the historical data to predict a certain rate of customer issues, a workable strategy is to dedicate a resource. A person on the team is dedicated to dealing with customer issues, and does not participate in the normal task signup. It’s best to rotate the customer support job among the team members so no one becomes too out of date with current product development.
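The historical data needed here is modest: just a count of escalations per iteration. A sketch with invented numbers:

```python
# Hypothetical history: customer escalations observed in each of the
# last six iterations.
escalations_per_iteration = [2, 0, 3, 1, 2, 2]

avg = sum(escalations_per_iteration) / len(escalations_per_iteration)
print(f"average escalations per iteration: {avg:.1f}")

# If the average stays above zero, planning every iteration as though
# no interruptions will come is wishful thinking: dedicate someone.
```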

Sometimes even dedicating a resource does not work. If the person in the role does not have the background to do the work, he may have to call on teammates for help, and interrupt the iteration from within. Or the external situation may demand a faster response than one person can provide.

If this is a consistent pattern, the team is attempting to plan iterations that are longer than the predictable period in their company’s operations. And that’s too long! The solution may be shorter iterations. Most agile organizations I’ve worked with have attempted to hold their iterations to two or three weeks long, but a week might be the length of time that the outside world is willing to hold off its demands for immediate attention.

Even a one-week iteration, as short as that may seem to someone in a conventional software development cycle, might be too long. Mishkin Berteig writes about three day iterations, in a situation in which constant customer feedback was shaping the product. In one case study of an organization with severe trouble setting priorities, he counseled iterations of two days.

Two day iterations may seem like shock therapy. Two uninterrupted days, if adhered to according to plan, might be more productive than a week of second-guessing and wondering.

Wednesday, August 5, 2009

Learning from Each Other

Here’s what I got to do yesterday: have a conversation with a fellow professional in which we shared our enthusiasm about what we do, and both learned things from each other. Brainstorming about how to approach a testing problem led us to interesting and valuable conversations with each other. It was a thoroughly enjoyable morning.

And that experience seems to be what Corey Haines has day by day on his Journeyman Tours. In his blog, On Being a Journeyman Software Craftsman, he shares those conversations. I’m writing this post to draw your attention to his interview with JB Rainsberger on the evolution from Test First Programming to Test Driven Design. [TDD – which I’ve seen spelled either Test Driven Design or Test Driven Development – is a topic I expect to address more in the future.] As a side note, JB Rainsberger wrote the tremendously useful volume, JUnit Recipes, a book that earned a spot on my desk when I was testing XML in a Java test harness.

Travelling around, talking to people whose professional work is something you’d like to learn from. How cool is that? Inspired by Corey Haines, I’ll try to carve out time for those learning opportunities.

Tuesday, August 4, 2009

The Interrupted Iteration

In descriptions of agile software methodology, the stakeholders agree on what will be accomplished during the iteration during iteration planning. Once the team is off and running on execution, any new ideas or new input will wait until a subsequent iteration. But once a product is in the hands of users, that’s only an approximation of reality. The sudden and unplanned needs of the customer in the wild can, and often do, upset our plans.

There are a number of strategies for dealing with the sudden customer emergency: First, we have the option to stop the clock on the iteration. Freeze operations where they are, take the team off to deal with the emergency, then come back and complete the iteration as intended. This might involve slipping the end date of the iteration, but if the iteration’s tasks had a unified concept and conclusion, it preserves the integrity of the original plan.

Second, there’s the option to replan. Add new tasks to cover the customer issues that have arisen and drop lower priority ones. If a release were already planned for the end of the iteration, this might be the most efficient way to roll out an emergency patch.

Another approach to the interrupted iteration is to cancel the iteration. The team puts pencils down on the current work. The closer to agile best practices of frequent check-ins of regression-free code, the less work will be lost for the future. The next step may be to do some work which is outside the iteration framework. In any case when the team returns to the original work, they'll replan with new information.

While customer emergencies can and sometimes must be worked around, if escalating issues that interrupt the iteration become the norm then something’s structurally out of whack. Some avenues to readjust will be in the second part of this entry.

Monday, August 3, 2009

Staying Unstuck

Just as I was posting Getting Unstuck, another blogger I know was reflecting about “getting out of a rut”. For this writer, the interesting direction was not generating new ideas, but establishing new behaviors. As anyone who’s ever tried to change their own behavior knows, establishing a new habit takes much more than just identifying what the new practice should be.

Things that help to establish new routines can include the following:

  • Have a buddy. I like to learn how to do something, such as use a new tool, and then find someone on my team to teach it to. Teaching someone else to do something builds understanding. And when I forget to use my new tool, I now have someone who will remind me.
  • Make it part of the routine. Redefine the process to depend on the new behavior: for example, require peer review before check-in, so that anyone can see when the rule is being violated. That works best with a new behavior we want to put on the critical path for product release.
  • Make the desired behavior visible. The example that comes to mind on this is documentation of test processes and the like that’s internal to the team. When I added a “what’s been documented this week?” section to weekly status reports, it was a lot easier to get documentation tasks done.
  • Match rewards with the desired behavior. What we reward we do, even if the only reward is recognition. But sometimes there’s a disincentive to the new behavior that has to be addressed.
  • Understand why. Have an answer to the question, why are we doing this? And assess – is it working? Knowing that a change brings us closer to a desired goal is a powerful motivator.