Devil in the Detail: Prologue

On my first real testing job I hadn’t a clue what I was doing, and no one around me had any testing experience either. This was when the web was in its infancy (I was still browsing with Lynx and a 2400 baud modem) and it didn’t even really occur to me that there might be books on the subject. 

I did the only sensible thing: I made it up as I went along. I can only assume that I’d subconsciously overheard people talking about testing at some point, because I did know one thing: testing needed scripts. I dutifully set about talking to the team, reviewing the design and writing a set of step-by-step scripts that covered the requirements. The results were not exactly outstanding, but I muddled through.

Over the next few years, I relied on scripts less: I found that they slowed me down, that I could test more productively without them. Much of the time I’d define lists of tests, or in some cases matrices describing combinations of data. These served me well. I was documenting to a level that someone with knowledge of the software could execute. Whenever I had to hand over test cases, I’d add some supplementary information and sit down and train the new testers on using the software.

On one project, one of my team told me “I prefer exploratory testing to scripts”. I wasn’t really sure what she meant. It wouldn’t be long before I found out: this was shortly after a deeply embarrassing interview had convinced me to start learning more about testing, and I was spending every spare moment reading anything I could find. 

My reaction when I first read about exploratory testing? I couldn’t understand what the fuss was about. This wasn’t anything different or special, this was just testing! Every time I’d ever played with an application to figure out how to test it, I’d been doing ET. Every time I’d prodded and poked at a program to figure out why it wasn’t quite doing what I’d expected, I’d been doing ET. Every time I’d noticed something new and tried out new tests, I’d been doing ET. I found it almost inconceivable that anyone in testing didn’t do this.

As I started managing larger testing teams, I grew to appreciate ET more. I found a surprising degree of script dependence and inflexibility amongst many testers. Encouraging the use of exploratory testing helped to correct that. 

What constantly surprised me was the range of arguments I came across in favour of “idiot scripts”: scripts detailed enough that any idiot could use them. This series of posts will take a look at some of those arguments.

7 thoughts on “Devil in the Detail: Prologue”

  1. I’m looking forward to seeing the rest of the series.

    I’ve often said that test scripts are for idiots, but I only recently noticed that test scripts are just part of a whole category of “Documents for Dummies.” Just last week, the project boss, who is more “organizationally-oriented” than “technically-oriented”, instructed one of the developers to write a deployment document. That in and of itself doesn’t seem unreasonable to me, but the instruction included that it needed to be detailed and simple enough that she could comprehend the process and do it herself.

    Don’t get me wrong, she’s incredibly intelligent, but that just doesn’t make sense to me. The deployment process for the website in question is not a simple one! It uses a CMS and a database, and it integrates intimately with two third-party applications. The process should only be completed by someone who knows what they’re doing, and if said person is not available, it should be delayed. The document isn’t a fail-safe. Rather, it’s a reference for someone who understands the process but lacks specific knowledge of its execution.

    Testing is the same. I’m happy to provide reference about what to test, things to watch out for, and how things are supposed to work but I really can’t tell you how to test because I can’t do the thinking for you.

    • Thanks. Nothing that involves brain power and skill can readily be substituted with process. This is particularly the case in IT, where our products are, in essence, ideas. Even in manufacturing such substitution can be tenuous: I don’t eat sausages (non meat-eater), but if I did, I’d probably go for the gourmet butcher variety rather than the mass-produced variety. The butcher may have little snippets of repeatable routine, but I’m pretty sure he doesn’t have a manual.

  2. Yes, as Trevor has said… looking forward to this series.

    What you have described is similar to how it went for me, although in the early days the BAs wrote the scripts and ‘threw them over the fence’ to us monkey testers. After I got to a comfy level of domain knowledge I tended to just look at the test purpose and go from there. The step-by-step process was sooo painful. I could generally perform many tests in the same time it would have taken me to follow the steps and do just the one! Yes, I’m a slow reader… but seriously! Some of these bad boys were 15+ steps, each with a related (sometimes more than one) ‘expected result’.

    I would go as far as to say that more of my time was spent reading the steps (and trying to understand them) than actually testing.

    There is another aspect to this, on which I may ask for your advice. We had multiple business users come in and help us test. They were not there for UAT (or similar); they were there as numbers. Numbers to help us do all the testing we needed to. So the most common argument for the steps was that we needed them so the business testers could do it without too many questions. Now, I for one love questions (just not the same one over and over again). But there was just ‘no time’ for questions. I haven’t thought about this much before (I think I will though – thank you for sparking it), but what would your argument against this be? It appears to be a valid point. One thing is for sure: if we didn’t have such long scripts then we would have more time to test, therefore not needing as much ‘help’.

    Anyway, I’m rambling. You may cover my question in future posts. I’ll be reading them (albeit slowly). ;0)

    • Would you have needed extra bodies if you didn’t have the overhead of scripts?
      Would your testing have been better without scripts?
      Made to walk without crutches, might the users have tested better without scripts?
      Might not asking questions have revealed some interesting and useful answers?
      What was the mission, coverage or uncoverage?
