Choose, Reinvent, Create

On first being exposed to context-driven testing, my initial reaction was indifference: I saw it as a simple statement of the obvious and couldn’t believe that anyone would approach testing in any other way. In some ways I had been lucky in my upbringing as a tester: most of my early years were spent testing solo or leading small teams. In each case I had a mandate to test, yet the means of doing so were left to me. Whenever I faced a new project I would first seek to understand the testing problem and then figure out how to solve it. In short, I learned about testing by figuring it out for myself. I’ll return to that thought later.

A few years ago, I moved into a different role. What I encountered took me by surprise: processes proceeding inexorably from activity to activity with scant regard for need; a test manager so set in his ways that he refused even to talk about change; testers evaluated by counting how many test cases they could execute each day. I had encountered factory testing for the first time, and with it came best practices: the term thrown about as marketing-speak devoid of any real meaning, used to justify lazy decisions where no thought had been applied, even uttered as if it were an invocation of the Divine; an attempt to win arguments by appealing to something packaged as self-evident and inarguable. Confronted with this insanity, I became committed to context-driven testing and hardened my views on best practice.

On the subject of best practices, there is an interesting debate going on over on Huib Schoots’ blog contrasting TMap NEXT with Context-Driven Testing. I’m not going to spend time on the debate itself: go read Huib’s blog. Rather, I’ll address the central claim of TMap: that it is “an approach that can be applied in all test situations” (my emphasis). I disagree. It is a fundamental error to believe that any single approach can encapsulate the entirety of testing. Testing is about discovery, and discovery requires creativity and imagination. These things simply cannot be captured in a single methodology: there is no one process for discovery.

Let’s consider some examples from an earlier age, the Age of Sail. Captain Cook was, for his second voyage, commissioned to find evidence of Terra Australis Incognita, the counterweight continent thought to occupy much of the Southern Hemisphere. Cook would have made a fantastic tester. His test strategy? Circumnavigate the Southern Ocean at high latitudes and hope to hit land. How did he decide that? Do you think that he had a process model? Or that he was following some approved methodology? No, this test was inspiration, pure and simple. Imagine that you are an explorer seeking new lands. What would your search strategy be? Would you sail a methodical zigzag, hoping to hit land? Such an approach might work when playing Civilization, but in the real world time and trade winds would not be in your favor. No, the great explorers played hunches; they followed conjecture, myth and fable. As they travelled they took cues from what they found, changing course to investigate distant flocks of birds, driftwood or other signs of land. Even failure was the source of great discovery: Columbus was spectacularly wrong in his estimate of the circumference of the Earth, yet his failure to reach the East Indies resulted in the discovery of an entire continent.

There are many sources of discovery, many triggers for our imagination. And imagination is best served by freeing the mind rather than corralling it with process. It may well be that some TMap testers can test effectively in any test situation, but not because of any methodology: it is because they are skilled and creative human beings who have sufficient common sense to break the rules.

Now, when I say that there is no one process for discovery, I do not mean to imply that process doesn’t matter. The great explorers had plenty of processes: for rigging the sails, taking a bearing, computing latitude. Captain Cook was notorious for running a tight ship, with strict diets for his crew and a cleanliness regime for crew and ship alike. Similarly, testers can apply many processes as and when the need arises. Nor is dismissing best practice the same as dismissing the practices themselves. Yes, some such practices are absurd, but others are useful in the right context. My reaction to standards and methodologies is simple: I ask myself, is there something here that I can steal? Testers would be well served by imitating the magpie and developing a habit of acquiring and hoarding every shiny thing they can find. Nor should they limit their scavenging to testing lore. The greater the range of tools and tricks we have at our disposal, the better we are able to serve the needs of the contexts that we test within. As Daniel Kahneman (Thinking, Fast and Slow) puts it: “expertise in a domain is not a single skill but rather a large collection of mini-skills”. But is having a large toolbox enough? Even with the skill to wield each tool? No. What if you encounter a context for which you have no tool?

This is my other concern with packaged methodologies: by definition they contain only a finite range of tools. If the tester is constrained to using tools from a given toolbox, what happens when a new and different context is encountered? One in which no existing method is a precise fit? I have routinely encountered such situations, but because I learned about testing by figuring it out for myself, I have rarely been constrained in my choices. In some cases I reinvented existing methods: such as blending UI automation, high-volume test automation and random data generation to flush out troublesome reliability problems. In other cases I invented methods from scratch: such as a means of diagramming and deriving tests from the logic of retail price changes. One of my current teams is receiving significant praise because we invented an automated means of rapidly checking large volumes of data transformations. Embarrassingly, at one point I was also convinced that I’d invented domain testing, but that’s another story.
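To make that last idea a little more concrete, here is a minimal sketch of what bulk-checking a data transformation can look like: recompute the expected output for every source record and diff it against what the system under test actually produced. This is purely illustrative and is not the tool my team built; the file names, column names and the transformation rule are all hypothetical.

# A purely illustrative sketch, not the tool described above: bulk-check a data
# transformation by recomputing the expected output for every source record and
# diffing it against what the system under test actually produced.
# File names, column names and the transformation rule are all hypothetical.
import csv

def expected_output(source_row):
    # Hypothetical rule: trim and uppercase the name, convert the price to cents.
    return {
        "id": source_row["id"],
        "name": source_row["name"].strip().upper(),
        "price_cents": str(int(round(float(source_row["price"]) * 100))),
    }

def check_transformation(source_file, output_file):
    # Load both files keyed by record id so every record can be compared.
    with open(source_file, newline="") as f:
        sources = {row["id"]: row for row in csv.DictReader(f)}
    with open(output_file, newline="") as f:
        outputs = {row["id"]: row for row in csv.DictReader(f)}

    mismatches = []
    for record_id, source_row in sources.items():
        actual = outputs.get(record_id)
        expected = expected_output(source_row)
        if actual is None:
            mismatches.append((record_id, "missing from output"))
        elif actual != expected:
            mismatches.append((record_id, f"expected {expected}, got {actual}"))
    return mismatches

if __name__ == "__main__":
    for record_id, problem in check_transformation("source.csv", "transformed.csv"):
        print(record_id, problem)

The point is not the code itself but the scale it buys you: once the expected result can be computed mechanically, checking ten million records costs little more effort than checking ten.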

If we are to discover useful information on our projects, why limit ourselves to the test ideas suggested by a narrow set of techniques? Many projects develop myths about how the software operates: why not test those? Why not play a hunch? Even a test that doesn’t identify a problem can suggest further ideas that lead to a major discovery. Failure is a powerful source of test design! If we are to excel as testers, why limit ourselves to the reach afforded us by existing methods? Why not invent new ways of testing: methods that leverage our human gifts or enhance our senses, tools that simulate difficult conditions or simply allow us to do more?

To serve our projects effectively, we must place context first. Only by doing so can we understand need. Only through understanding need can we determine method. If we approach the testing problem from the other direction, and tie ourselves to particular methods, then we choose to place artificial limits on the value of our testing. Testers need the freedom not only to choose methods, but to reinvent or create methods as context demands.

2 thoughts on “Choose, Reinvent, Create”

  1. As is commonly known, Albert Einstein, one of the world’s most celebrated thinkers, always maintained that imagination was the key to discovery, and that knowledge was secondary. In fact, during some of his most trying moments, when he felt the constraints of the analytical and conscious mind, he would withdraw to his music in search of a different type of thinking: a mode that dipped into a flowing river of ideas and possibilities. As one of his sons testified later, he often found the answers he was pursuing through this seemingly non-mathematical means.
