Devil in the Detail I

Previously…

As a test service provider, I find that the most frequent arguments I hear in favour of detailed scripts concern review and approval. I’ll start with those.

Argument 1: “You need scripts so that we can review what you intend to test”.

This argument expresses two things, a desire and an assumption:

  • DESIRE: We want to understand what you plan to test.
  • ASSUMPTION: We can only do that if there are scripts.

As far as the desire goes, I love it when my stakeholders take an interest in testing. This is a GOOD thing. I want their input and I’m all for review. The more stakeholders I can get involved in discussing testing, the better: I want to know how they intend to use the software and what keeps them up at night.

Often, this desire stems simply from a need to understand what the testers will be doing, or from a need to gain confidence that the testers are looking at the right things. Yet sometimes there can be a dangerous side to this desire. It is human nature to crave certainty in an uncertain world, and this can manifest as unrealistic expectations about what can be planned in advance. I’m sure you’ve seen this: huge and elaborate planning efforts that deliver thousand-line plans that are broken the moment the project starts. The reality of testing is that we discover and uncover as we go. To ignore what you learn, and the new tests that this might suggest, is to introduce unnecessary constraints on your testing. There’s value in planning, but none in attempting to hold to a plan when it is clear that it no longer reflects the facts on the ground. If you encounter this expectation, and your mission involves more than simply performing a set of predefined checks, then you need to reset it.

Now, to the assumption: that scripts are necessary for reviews. I’d argue the opposite: detailed scripts are the death of a review. If you hit your reviewer with a thousand-page dump of step-by-step test scripts from your test management tool (I call this a QC barf), expect narcolepsy rather than any meaningful input. A simple list of test cases is much more digestible and far more conducive to engaging stakeholders in a useful conversation. A mind map that captures test ideas (see Christin Wiedemann’s or Darren McMillan’s blogs) might be even better when it comes to finding gaps and thinking up new ideas. Either of these approaches comes with a significantly lower price tag, and far greater flexibility, than detailed scripts.

More to follow…

 

2 thoughts on “Devil in the Detail I”

  1. Pingback: Five Blogs – 28 April 2012 « 5blogs

  2. In addition to the suggestions you’ve mentioned, there are some other alternatives. One of them is a risk-to-task list: a one-to-many table that outlines risks in the left column and the testing tasks for investigating each risk in the right column.

    —Michael B.
