
Selecting Strategies

Previously on Exploring Uncertainty…

In the previous post in this series I discussed situational analysis: one half of the test strategy equation. In this post I’m going to discuss the other half: strategic planning.

Again, there’s no formula for this. I’m not going to attempt to outline any creative process for generating strategy ideas, nor am I going to describe methods for choosing between them, because, well, you can Google just as well as I can.

Instead, this post (much like the last) will describe a mixed bag of things that I find useful.

Differentiate between long-range and short-term strategy decisions. Some strategies are long-term investments, for example certain forms of automation. Other aspects of strategy will be in constant flux, for example the test design techniques you select during a test session. Do not confuse the two: committing ALL your strategic decisions to a monolithic document and then refusing to change it is a recipe for disaster. If the plan stinks, then trying harder to deliver against that plan is insanity: CHANGE THE PLAN. Context is not static: projects unfold over time in ways that are often not predictable. Recognize this, and the need to course-correct as your landscape changes.

Establish mission, goals and constraints. Test strategy ideas should be viewed through the lens of your mission. Why are you testing and what kind of information are you seeking? Will your strategy support that? What if there are multiple goals? Could they ever be in conflict? Which are more important than others? How do your strategy ideas stack up against each goal? What constraints are you operating under? Are your ideas unrealistic in light of your constraints? Are the constraints fixed, or are they negotiable?

Consider plenty of alternatives. Don’t fixate on the first idea that springs to mind. Generate lots of ideas. Get other people involved: they’re allowed to have ideas too, and some will be better than yours. Bounce ideas off one another and think of variations on each one. Consider under what contexts each strategy might be a good idea, and under what contexts it might not; then consider fit with the current context.

Test your ideas. Avoid making a psychological commitment to your ideas, and open yourself up to challenge. Each challenge will either invalidate or strengthen your ideas. You are a tester; you test things. That can include more than just software; your ideas can be tested too:

  • Prototype. Remember that strategy is not a linear process of analysis followed by design. Get your thoughts on strategy out there to be tested by your stakeholders. Manage their expectations so they know these are your “initial thoughts” and then solicit feedback. Socialize new ideas informally before trying to reach formal agreements. This process can reveal new things about your context.
  • Use simulations. Imagine a range of bugs; would your strategy catch them? What weaknesses does this expose? (A toy sketch follows this list.)
  • Play “What if?” What if your assumptions fail? What if a critical risk is realized? What if a key stakeholder leaves? What if the goals change? What if the constraints change? Explore how sensitive your ideas are to changes in context. Don’t forget the role of luck. What’s the worst thing that can happen? Would your strategy fail? What’s the best thing that can happen? You’re not relying on that, are you?
  • Consider scenarios. A variation of “What If?” Read “The Art of the Long View” and learn how to use scenario planning – use what you know about your context to create scenarios as to how the project might pan out. Ask yourself how your ideas perform under each scenario.
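To make the simulation bullet concrete, here is a minimal Monte Carlo sketch. Everything in it is invented for illustration: the bug classes, the techniques and the catch probabilities are guesses you would replace with your own; the value is in the exercise, not the numbers.

    import random

    # Hypothetical bug classes, and a guess at how likely each technique
    # is to catch each class. All names and numbers here are made up.
    CATCH_PROBABILITY = {
        "regression suite": {"logic error": 0.7, "race condition": 0.1, "usability": 0.0},
        "exploratory session": {"logic error": 0.5, "race condition": 0.3, "usability": 0.6},
        "load testing": {"logic error": 0.1, "race condition": 0.6, "usability": 0.0},
    }

    def catch_rate(strategy, bugs, trials=10_000):
        """Estimate the fraction of imagined bugs a strategy would catch."""
        caught = 0
        for _ in range(trials):
            bug = random.choice(bugs)
            # The bug escapes only if every technique in the strategy misses it.
            p_miss = 1.0
            for technique in strategy:
                p_miss *= 1.0 - CATCH_PROBABILITY[technique].get(bug, 0.0)
            if random.random() > p_miss:
                caught += 1
        return caught / trials

    bugs = ["logic error", "race condition", "usability"]
    for strategy in (["regression suite"], ["regression suite", "exploratory session"]):
        print(strategy, f"catch rate: {catch_rate(strategy, bugs):.2f}")

Even a crude model like this can expose weaknesses: if a whole class of imagined bugs slips past every technique in your strategy, that gap is worth a conversation.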

Accept that sometimes you are under time pressure. The above is all well and good, but sometimes you will need to act quickly: you can’t take five days to decide on a strategy for a test session that starts in five minutes. Recognize that perfection can be an obstacle to attaining good enough, and only invest the time that you can afford in decision making. Ask yourself what the consequences are of not making a decision by a given deadline, versus the consequences of making a bad decision. Determine what information you have available to you right now, versus new information you might have later. Can your decisions be deferred until you have more information, or do you need to act now?

Have fun. Test strategy is like juggling fifteen pineapples, a chainsaw and a porcupine: it’s a challenge, there’s a lot going on, and there’s always the possibility of a mess. But because of that challenge it can be thoroughly rewarding. Enjoy.

Exploring Context

Previously on Exploring Uncertainty…

Introduction

Test strategy isn’t easy. If you are facing the challenge of starting a new project and developing strategy, then I have bad news for you:

Every project is unique, there is no straightforward process for developing strategy, and much of what you will read on the subject is about test strategy documents rather than test strategy.

The good news is that developing test strategy can be fascinating and rewarding.

Test strategy can be described as consisting of two parts: situational analysis and strategic planning. These should not be confused with steps in a sequential process: you will often find yourself progressing each in parallel or jumping back and forth between the two. In this post I will focus on situational analysis.

What is Situational Analysis?

Situational analysis is, well, about analyzing your situation! It’s about understanding your context and in particular identifying those factors that should motivate your test strategy. There are a number of resources that can help guide you as to the kinds of information you should be looking for when doing so: start with the Heuristic Test Strategy Model.

This model is useful, but without skill it is nothing. To use it well, you will need to be adept at obtaining, processing and using information. Here are some skills, techniques and a few tips that I’ve found helpful.  Some of this might be useful, some of it not: you, your context and your mileage will vary.

If you’d like to add to the list, I’d love to hear from you.

Ask lots of questions

You are a tester: information is currency. To get information you need to ask questions, and be effective at doing so:

  • Ask open questions to elicit information.
  • Learn to love Kipling’s “six honest serving-men”: what, why, when, how, where and who.
  • Ask closed questions to test your understanding.
  • Encourage elaboration with non-verbal cues.
  • Use silence, so that others have to fill it.
  • Adapt, and follow promising leads.
  • Know when to quit when a trail goes cold.
  • Use “Five Whys” to get to causes.
  • Use models, checklists and questionnaires to remind you of things to ask. Just don’t let them limit your thinking.
  • Listen to what remains unsaid; be alert for the elephant in the room. 

Build relationships

Testing is dependent on good relationships: the better your relationships, the better the intelligence you will gather, and the easier it will be when you have to share bad news (you’re a tester after all). You need to build relationships:

  • Connect regularly, buy lunch or coffee, call “just for a chat”.
  • Develop a genuine interest in people.
  • Treat every interaction as an opportunity to learn.
  • Treat every interaction as an opportunity to build shared understanding.
  • Explore common interests and genuine differences.
  • Be prepared to keep secrets, or at least declare when you can’t.
  • Do random favours, asking nothing in return.
  • Be prepared to ask for help, and repay your debts.
  • Be honest; bluffing is for fools. Never be afraid to say “I don’t know, but I can find out”.
  • Learn to relate to other disciplines: how to talk to PMs, developers…

Use diverse sources of information

Not everything useful comes in the spec, and not everything in the spec will be useful. Search far and wide for relevant information:

  • Use project documentation as a starting point not an ending point.
  • Are there user guides, manuals, online tutorials?
  • Talk to people who have used, supported or otherwise worked with the software.
  • Visit the workplace and observe how it will be used.
  • Google it.
  • Is there a website?
  • Is there a Wiki?
  • Are people talking about it on forums, Facebook or Twitter?
  • Ping your network; has anyone used anything similar?
  • Has the software, or its competitors, ever been in the traditional media?

Absorb and synthesize large sets of information

You will probably be overwhelmed. To scale learning curves that look more like walls, you will need to cope with information overload:

  • Learn to speed read.
  • Set a goal for each reading session – look for specific things.
  • Allocate some time for random facts.
  • Look for things that surprise you, that run counter to the opinions you are already forming.
  • Prioritize, but keep a list of things to come back to.
  • Look for relationships, patterns and connections.
  • Use sticky notes to cluster and rearrange facts.
  • Keep a notebook handy: you may have insights in the most unusual places.
  • Don’t just read it, model it: use mind maps, flow charts, context diagrams, decision tables etc…
  • Get others involved in the process: they will see connections you can’t.

Learn and keep learning

Every project is different, and each one will present new challenges. Rise to these challenges by building an arsenal of skills:

  • Be a magpie for every skill and technique you can find.
  • Develop project management skills; it’ll help you relate to PMs.
  • You need to understand the interplay between scope, time, cost and quality.
  • You need to understand how to use assumptions instead of simply making them.
  • Learn some business analysis, coding and application architecture.
  • Become as technical as you need to be.
  • Learn to present and facilitate.
  • Become comfortable at a whiteboard.
  • Learn to read statistics, and to know when you’re being misled. Better yet, learn to manipulate data for yourself (a tiny example follows this list).
  • Pick up lots of analysis techniques: FMEA, SWOT, force field, the list goes on.
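On that point about statistics: a single summary number can mislead badly when the underlying distribution is skewed, which is exactly the kind of thing you can check for yourself. A tiny sketch, using made-up bug-fix times:

    from statistics import mean, median

    # Made-up bug-fix times in hours; one outlier dominates the mean.
    fix_times = [1, 2, 2, 3, 3, 4, 40]

    print(f"mean:   {mean(fix_times):.1f} hours")    # 7.9 -- inflated by the outlier
    print(f"median: {median(fix_times):.1f} hours")  # 3.0 -- closer to the typical case

Anyone quoting the “average” fix time here as nearly eight hours would be technically correct and practically misleading.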

To be concluded.

The Untempered Schism

“He stood in front of the Untempered Schism. It’s a gap in the fabric of reality through which could be seen the whole of the vortex. We stand there, eight years old, staring at the raw power of time and space, just a child. Some would be inspired. Some would run away. And some would go mad.”  Doctor Who, The Sound of Drums (2007).

That is how I felt when I first understood context.

Not when I read the words “The value of any practice depends on its context”, and thought “Well, obviously.”

Nor when I started taking context into consideration by tweaking my approach, and thought “Hey, I’m context driven.”

No, it was when I first took in the variety and beautiful complexity of context, when the world turned under my feet and I realized that the rabbit hole keeps on going, when context became the motivation for my testing.

How do you deal with that? The brain-numbing diversity of factors you need to take into account when assessing context? The incalculable number of approaches, techniques and skills that might apply? Beat a retreat? Fall back to the comfort zone? To familiar practices and the words of a favorite textbook? To default behavior and the way we do it around here?

Once you appreciate context, these are not options.

Yet for those who haven’t been exposed to context driven testing, there is frequently ONE big idea, ONE option, ONE approach that has been considered. More often than not, this is the FIRST idea that came to mind. This isn’t context driven; it is laziness driven. At best it is driven by a lack of imagination. Some of the time this one idea hasn’t been thought through to a point where it is even remotely defensible. In the BBST Test Design lectures, Cem Kaner identifies this as being a case of “premature closure: adopting the first solution that looks likely to work, instead of continuing to ask whether there are better solutions”.

As service providers, we have an obligation to find better ways to provide value through our testing. To test deeper, faster, cheaper, whatever our projects demand. To find solutions to testing problems. A single idea doesn’t cut it.

Exploring context isn’t easy. Nor is imagining and evaluating alternatives, nor adjusting to meet changing demands. There’s no algorithm for this, no simple set of repetitive steps that will lead us inexorably to a solution. There is only skill and judgment.

To be continued.

What is a Good Test Approach?

You’ve recently started a new testing gig, and are well along the learning curve in terms of understanding your mission, the project context, and at least some of the basics of the system under test.  It’s time to begin figuring out how to achieve your mission.

Of course, there are many ways to do so.  Some will be more successful than others.  Like me, you probably have your own biases, your own set of preferences as to how to approach testing, things that have been particularly successful for you in the past.  But does that mean that they are guaranteed to be successful this time, on this project, with this software and this group of stakeholders?

Perhaps a more thoughtful way of comparing approaches would be helpful.

I had been toying with this a bit, without coming to any real conclusions.  Then, earlier this month, I was reading Kaner’s What Is a Good Test Case?, which provides a series of dimensions upon which the relative merits of different tests or testing techniques can be evaluated.  Perhaps a similar framework can be used to compare competing approaches?

Let’s start with defining what I mean by test approach. Firstly, I am not talking about test strategy or test plan documents, although approaches are often described in such places.  Nor am I talking about project-specific selection of testing techniques.  Instead, I am talking about the paradigms or organizing principles that will be applied to testing on a particular project.  Scripted testing, exploratory testing, and all points in between are all examples, as are test management approaches such as SBTM and RBTM, or even organisational approaches such as pairing.  It’s also important to note that these examples are not mutually exclusive: it is often useful to blend approaches so as to benefit from their different strengths.

Here are some initial ideas as to dimensions we might use for evaluating approaches (a rough scoring sketch follows the list):

  • Effectiveness. How likely is the approach to achieve the testing mission?  Is it more likely to do so than another?
  • Efficiency. How efficient is the approach?  Does it make good use of all available resources, or is it wasteful? What opportunity costs are incurred?
  • Confidence.  How achievable is the approach?  Is it realistic, given project constraints?  Does it depend on safe assumptions or unlikely ones?
  • Credibility.  How believable is the approach?  Will the customers of testing buy into it?  Will the testing team?
  • Compliance.  How well does the approach comply with legal, contractual, regulatory or standards based constraints?
  • Adaptability.  No plan survives contact with the enemy.  How easily can the approach be adjusted to take into account changes, emergent risks or new learning?

Some approaches will be stronger in some of these dimensions than in others.  We might also find that a particular approach is strong on one dimension for one project, but weaker on the same dimension for another project.  Take ET for example; for many projects we might consider it to be a more efficient approach than scripted testing.  Now imagine dropping into a project where there is already a reasonable level of scripting and automation available.  Dropping all this for a purely exploratory approach may not be particularly efficient.

This is a starting point.  I’ll be putting this framework to the test soon.  Let me know if you can think of any more…

Update: since posting this last night, I’ve mind mapped the evaluation model and fleshed it out a little thanks to some contributions received via Twitter (thanks @oliverjf).
 
[Image: mind map of the test approach evaluation model]