The Long Way Round

At high school, I learned remarkably little: or to be more precise, I remember very little of what I was taught. As best as I can recall, my teachers took me for a whirlwind tour of their syllabi. Facts flew by, each another faceless town; there was never any opportunity to stop and look around. Taking computer science as an example, whilst I struggle to remember a single thing we did in class, the lessons that I learned whilst writing keyloggers and DoSing the VAX have stuck with me to this day.

As an undergraduate, I was…well, let’s just say that I was a little distracted. I remember pieces of the syllabus, but debt and hunger taught me the most important lesson of those years: they taught me a work ethic. As a postgraduate, things were different. The opportunity costs of taking time out from work were too great, so I studied for my master’s in my own time. Many facts continued to pass me by, sticking just long enough for me to write an exam, but each term I’d learn new ways of looking at a subject, and find things that resonated with my daily working experiences. I learned about ways of solving problems that I could put into practice immediately. I developed skills. I began to learn how to learn: I realized that facts don’t stick too well, but that skills and experiences do. And in an age where a dazzling array of facts is just a few clicks away, what is the value of memorizing an encyclopedia?

When I started testing, I muddled through for a while until a hideously embarrassing interview showed me the extent of my own ignorance. I resolved to learn more and applied what I’d learned as a postgrad. I read, I studied, and I followed many interesting diversions. I applied anything and everything I could on the projects I was testing on. I learned SQL when I needed to. I taught myself Java when I needed some quick and dirty automation. I sought opportunities to teach when I realized the power that teaching has to deepen my own understanding. I also learned that the toughest, most painful projects are invariably those that provide the greatest opportunities for growth.

This week, I was fortunate enough to attend Rapid Software Testing (RST) run by Michael Bolton. This is a splendid course that focuses on skills, mindset and different ways of breaking down testing problems. It does so through experiences: games and exercises that are fun and sometimes a little painful too. I merrily leapt feet first into a number of traps and, suitably humbled, found both new ideas and added depth to old knowledge. Whilst there is no silver bullet when it comes to learning, I have no doubt that I will continue to draw lessons from RST for years to come. Learning is a lifelong journey, not a whistle-stop tour. When there’s no destination, shortcuts make no sense. I’m enjoying taking the long way round.

A Tale of Two Conferences

2012 is an exciting year. Quite apart from having some fascinating project work to get my teeth into, I am lucky enough to be attending, and speaking at, a couple of conferences.

First up: the Professional Development Summit takes place next month in Halifax. This is an all-volunteer event jointly organised by local chapters of six professional non-profit societies: the Atlantic Association for Software Quality Assurance (AASQA); the Canadian Information Processing Society (CIPS); the Canadian Association of Management Consultants (CMC); the International Institute of Business Analysis (IIBA); the IT Service Management Forum (itSMF); and the Project Management Institute (PMI). It features a wide variety of home-grown talent. The BEST thing about PDS is its multidisciplinary nature: now, I love talking testing to testers, but it’s great to share with other professions too. I’m honoured to be supporting this event.

And in July, off to sunny California for CAST 2012. As a context driven tester, I can hardly wait. This is the event that “puts the confer back in conference”: in fact, what I look forward to the most is not subjecting people to death-by-PowerPoint, but having a meaningful conversation with those attending my session. The fact that CAST is bookended by Test Coach Camp, some interesting-looking tutorials and recently announced training is simply icing on the cake.

Roll on April, roll on July. Maybe I’ll see you there?

 

Ledge Psychology

Sometimes being a tester feels like being a ledge psychologist. You know? The folk who talk would-be jumpers back from the ledges of tall buildings.

I’m sure you’ve been there. Perhaps it was a project manager planning a project with barely any testing. Perhaps it was an executive balking at the cost of even a minimal strategy. Perhaps it was a product manager who wanted to ship IMMEDIATELY and not after those scary bugs got fixed. Maybe it was even a manager demanding metrics that made little sense to you.

You may feel like a ledge psychologist, but you aren’t. It’s not up to you to talk the jumper down. You are an information provider, and your role is to help the jumper make an informed decision. How high is the ledge? How fast would a typical body be traveling when it reaches the ground? What are the chances of survival?

Project managers, product managers and executives are grown-ups. They get to make their own decisions, are paid to do so, and ultimately will be held accountable for their mistakes.

This doesn’t mean you are powerless. Even if your jumper goes over the edge, YOU can always decide not to grab on to their shirt tails and follow them over. There are always other buildings.

 

Exploring Context

Previously on Exploring Uncertainty…

Introduction

Test strategy isn’t easy. If you are facing the challenge of starting a new project and developing strategy, then I have bad news for you:

Every project is unique, there is no straightforward process for developing strategy, and much of what you will read on the subject is about test strategy documents rather than test strategy.

The good news is that developing test strategy can be fascinating and rewarding.

Test strategy can be described as consisting of two parts: situational analysis and strategic planning. These should not be confused with steps in a sequential process: you will often find yourself progressing each in parallel or jumping back and forth between the two. In this post I will focus on situational analysis.

What is Situational Analysis?

Situational analysis is, well, about analyzing your situation! It’s about understanding your context and, in particular, identifying those factors that should motivate your test strategy. There are a number of resources that can help guide you as to the kinds of information you should be looking for when doing so: start with the Heuristic Test Strategy Model.

This model is useful, but without skill it is nothing. To use it well, you will need to be adept at obtaining, processing and using information. Here are some skills, techniques and a few tips that I’ve found helpful.  Some of this might be useful, some of it not: you, your context and your mileage will vary.

If you’d like to add to the list, I’d love to hear from you.

Ask lots of questions

You are a tester: information is currency. To get information you need to ask questions, and be effective at doing so:

  • Ask open questions to elicit information.
  • Learn to love Kipling’s “honest serving men”.
  • Ask closed questions to test your understanding.
  • Encourage elaboration with non-verbal cues.
  • Use silence, so that others have to fill it.
  • Adapt, and follow promising leads.
  • Know when to quit when a trail goes cold.
  • Use “Five Whys” to get to causes.
  • Use models, checklists and questionnaires to remind you of things to ask. Just don’t let them limit your thinking.
  • Listen to what remains unsaid; be alert for the elephant in the room. 

Build relationships

Testing is dependent on good relationships: the better your relationships, the better the intelligence you will gather, and the easier it will be when you have to share bad news (you’re a tester after all). You need to build relationships:

  • Connect regularly, buy lunch or coffee, call “just for a chat”.
  • Develop a genuine interest in people.
  • Treat every interaction as an opportunity to learn.
  • Treat every interaction as an opportunity to build shared understanding.
  • Explore common interests and genuine differences.
  • Be prepared to keep secrets, or at least declare when you can’t.
  • Do random favours, asking nothing in return.
  • Be prepared to ask for help, and repay your debts.
  • Be honest; bluffing is for fools. Never be afraid to say “I don’t know, but I can find out”.
  • Learn to relate to other disciplines: how to talk to PMs, developers…

Use diverse sources of information

Not everything useful comes in the spec, and not everything in the spec will be useful. Search far and wide for relevant information:

  • Use project documentation as a starting point, not an ending point.
  • Are there user guides, manuals, online tutorials?
  • Talk to people who have used, supported or otherwise worked with the software.
  • Visit the workplace and observe how the software will be used.
  • Google it.
  • Is there a website?
  • Is there a Wiki?
  • Are people talking about it on forums, Facebook or Twitter?
  • Ping your network; has anyone used anything similar?
  • Has the software, or its competitors, ever been in the traditional media?

Absorb and synthesize large sets of information

You will probably be overwhelmed. To scale learning curves that look more like walls, you will need to be able to cope with information overload:

  • Learn to speed read.
  • Set a goal for each reading session – look for specific things.
  • Allocate some time for random facts.
  • Look for things that surprise you, that run counter to the opinions you are already forming.
  • Prioritize, but keep a list of things to come back to.
  • Look for relationships, patterns and connections.
  • Use sticky notes to cluster and rearrange facts.
  • Keep a notebook handy: you may have insights in the most unusual places.
  • Don’t just read it, model it: use mind maps, flow charts, context diagrams, decision tables etc…
  • Get others involved in the process: they will see connections you can’t.

Learn and keep learning

Every project is different, and each one will present new challenges. Rise to these challenges by building an arsenal of skills:

  • Be a magpie for every skill and technique you can find.
  • Develop project management skills; it’ll help you relate to PMs.
  • You need to understand the interplay between scope, time, cost and quality.
  • You need to understand how to use assumptions instead of simply making them.
  • Learn some business analysis, coding and application architecture.
  • Become as technical as you need to be.
  • Learn to present and facilitate.
  • Become comfortable at a whiteboard.
  • Learn to read statistics, and to know when you’re being misled. Better yet, learn to manipulate data for yourself.
  • Pick up lots of analysis techniques: FMEA, SWOT, force field, the list goes on.

To be concluded.

The Untempered Schism

“He stood in front of the Untempered Schism. It’s a gap in the fabric of reality through which could be seen the whole of the vortex. We stand there, eight years old, staring at the raw power of time and space, just a child. Some would be inspired. Some would run away. And some would go mad.”  Doctor Who, The Sound of Drums (2007).

That is how I felt when I first understood context.

Not when I read the words “The value of any practice depends on its context”, and thought “Well, obviously.”

Nor when I started taking context into consideration by tweaking my approach, and thought “Hey, I’m context driven.”

No, it was when I first took in the variety and beautiful complexity of context, when the world turned under my feet and I realized that the rabbit hole keeps on going, when context became the motivation for my testing.

How do you deal with that? The brain-numbing diversity of factors you need to take into account when assessing context? The incalculable number of approaches, techniques and skills that might apply? Beat a retreat? Fall back to the comfort zone? To familiar practices and the words of a favorite textbook? To default behavior and the way we do it around here?

Once you appreciate context, these are not options.

Yet for those who haven’t been exposed to context driven testing, there is frequently ONE big idea, ONE option, ONE approach that has been considered. More often than not, this is the FIRST idea that came to mind. This isn’t context driven, it is laziness driven. At best it is driven by a lack of imagination. Some of the time this one idea hasn’t been thought through to a point where it is even remotely defensible. In the BBST Test Design lectures, Cem Kaner identifies this as being a case of “premature closure: adopting the first solution that looks likely to work, instead of continuing to ask whether there are better solutions”.

As service providers, we have an obligation to find better ways to provide value through our testing. To test deeper, faster, cheaper, whatever our projects demand. To find solutions to testing problems. A single idea doesn’t cut it.

Exploring context isn’t easy. Nor is imagining and evaluating alternatives, nor adjusting to meet changing demands. There’s no algorithm for this, no simple set of repetitive steps that will lead us inexorably to a solution. There is only skill and judgment.

To be continued.

Even in Context

1. The value of any practice depends on its context.

2. There are good practices in context, but there are no best practices.

So say the first two principles of context driven testing. But are there best practices in context? Are there practices that represent the best or only choice in any given situation?

First, we need to discuss context. What is context? That’s a pretty complex question, if only because context is not a single thing. Context is not indivisible; it consists of many factors.

When exploring context, I ask a lot of questions. The following list is far from exhaustive, and is only intended to illustrate the range and diversity of factors that make up context:

  • Who are the participants of the project?
  • Who are the sponsors of the project?
  • Who is paying for the project?
  • Who are the customers of the project?
  • Who are the customers of the testing?
  • Who will use the software being developed?
  • Who else might be affected by the software being developed?
  • Where are the various stakeholders located?
  • What different organizations or organizational units do they belong to?
  • What contractual factors are involved?
  • What political factors are involved?
  • How effectively do the various participants interact?
  • What expectations do the stakeholders have for software quality?
  • What quality characteristics matter to them?
  • What other project constraints (scope, time, cost) matter to them when weighed against quality?
  • What reporting expectations do they have?
  • What methodological, standards or regulatory expectations do they have?
  • How do expectations vary between stakeholders?
  • Which stakeholders really matter?
  • What development methodology are they using?
  • What experiences do the project team have with this methodology?
  • What is the project delivering?
  • When is it expected to deliver?
  • When do people really think it will deliver?
  • What is the release strategy? Is this a one-off or will there be multiple releases?
  • What bits of the solution will be given to testing, and when?
  • Will those bits change often?
  • What information is available?
  • How current is it?
  • How likely is it to change?
  • How accurate is it perceived to be? Will it help or hinder you?
  • What might fail and how?
  • How likely are such failures?
  • Why would a failure matter?
  • What impact would a failure have on the stakeholders?
  • How risk averse are the stakeholders?
  • What keeps them up at night?
  • How complex is the solution?
  • What technology is being used?
  • How established is this technology?
  • How experienced are the developers in using it?
  • How testable is the solution?
  • How likely are the stakeholders to listen if you need to ask for testability enhancements?
  • Who will be doing the testing?
  • Who will be testing which bits?
  • What tools do testers have at their disposal? What do they need?
  • What skills do the testers have?
  • What weaknesses do they have?
  • What influence do the testers have? Are they credible or simply ignored?
  • Why are YOU involved?

It is highly unlikely that any practice will be a perfect fit for the context you are testing in; there are always trade-offs involved. For example:

  • A practice might look like a fit in terms of the quality goals of project stakeholders, yet perform poorly in terms of time and cost constraints.
  • A practice might look like a fit in terms of time, cost and quality, but be unworkable when it comes to the capabilities of the test team.
  • A practice might fit perfectly with the capabilities of the testers, yet be wholly inappropriate for revealing useful information about the solution itself.

So the value of any practice in context cannot be measured on a single scale: it will be a blend of different strengths and weaknesses that play to a multitude of different contextual factors. As if this weren’t challenging enough, different stakeholders will place different values on different factors:

  • One stakeholder might value time and cost over quality.
  • One might be terrified of a lawsuit and want a LOT of testing.
  • One might have an expectation that testing standards be used, and believe that time and cost should be adjusted accordingly.

In the same way that “Quality is value to some person” (Weinberg), the value of a practice in context is its value to some person. Quality is subjective, and no less so the quality of testing practices. An individual stakeholder might perceive that a given practice is best – for them – in context, but there are no absolutes that will apply to all stakeholders.

There are no best practices, even in context.

 

Checking Is Not Evil

A few days ago, over at http://context-driven-testing.com, Cem Kaner expressed the view that manual exploratory testing had become the new Best Practice. Cem’s comments resonate strongly with me, as I have recently recognized, and sought to correct, the same tendency in my own thinking.

Whilst I was taking BBST Test Design last year, Cem called me out on one question where I’d answered from context driven dogma rather than any kind of well-thought-out position. This was an invaluable lesson that made me begin to question my prejudices when selecting a test approach.

I recognized that I had begun to show a strong preference for manual exploration over automated testing, and for disconfirmatory testing over confirmatory checking. I resolved to challenge myself and, for every approach I considered, to think of at least two others so as to compare and contrast their relative merits.
 
Then came the moment of truth: a new project. And as luck would have it, one that would go to the very heart of this issue.
 
The project in question is all about data. There are many thousands of conditions specified in detail, and the data combinations expand these out to potentially billions of individual checks. I would be ignoring a significant part of my testing mission were I to disregard either checking or automation in favor of manual ET. In short, the context demands that large-scale mechanized checking form a big part of the test approach.
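
To give a flavour of what that kind of mechanized checking might look like, here is a minimal sketch in Python. Everything in it is hypothetical: the CONDITIONS table, the expected_fee oracle and the system_under_test stand-in are illustrative names I have invented, not the project’s actual data or code. The point is only that once conditions are captured as data, the combinations can be generated and checked by a machine rather than by hand.

    import itertools

    # Hypothetical condition table: each field lists the values the
    # (imaginary) specification says must be covered.
    CONDITIONS = {
        "account_type": ["personal", "business"],
        "currency": ["CAD", "USD", "EUR"],
        "balance": [-1, 0, 1, 999999],
    }

    def expected_fee(account_type, currency, balance):
        # Hypothetical oracle derived from the specification.
        if balance < 0:
            return "overdraft_fee"
        if account_type == "business":
            return "flat_fee"
        return "no_fee"

    def system_under_test(account_type, currency, balance):
        # Placeholder: in practice this would call the real system
        # (an API, a batch job, a calculation engine...).
        return expected_fee(account_type, currency, balance)

    def run_checks():
        failures = []
        keys = list(CONDITIONS)
        # Expand the condition table into every combination of values
        # and compare the system's answer with the oracle's.
        for values in itertools.product(*(CONDITIONS[k] for k in keys)):
            kwargs = dict(zip(keys, values))
            actual = system_under_test(**kwargs)
            expected = expected_fee(**kwargs)
            if actual != expected:
                failures.append((kwargs, expected, actual))
        return failures

    if __name__ == "__main__":
        for kwargs, expected, actual in run_checks():
            print(f"FAIL {kwargs}: expected {expected!r}, got {actual!r}")

In this sketch the oracle and the stand-in coincide, so every check passes; on a real project the system under test would be a separate call to the actual software, and the oracle would be derived independently from the specification.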
 
Does this mean that we will rely solely on checking and disregard exploration?
Does a check-heavy strategy make this any less context driven?
 
No, and no. As I wrote here, checking may be necessary but it is not sufficient. This project is not unique: there is plenty of opportunity for unfulfilled needs, undesired or unexpected behaviour. We will test the specifications before a single line of executable code is available. We will challenge our own understanding of the design, and seek to align the perspectives and understanding of others. We will use risk based test design and seek to reveal failures. We will accept that the specification is not perfect, and that we will need to adjust our thinking as the project evolves. We may check, we may not test manually, but the essence of what we do will remain both exploratory and context driven.
 
Checking is not evil, nor (as far as I am aware) did anyone say that it is.