ET: Why We Do It, an article by Petter Mattson

What follows is an article by my colleague Petter Mattson.

Petter and I recently made each other’s acquaintance after our organizations, Logica and CGI, merged. An experienced test manager and an advocate for exploratory testing, Petter wrote this article for internal publication within Logica. Unfortunately, its contents were sufficiently divergent from the official testing methodology that it was never published. Many of the points in this piece resonated with me, and I was determined that it see the light of day.

I’d like to thank Petter, and his management at CGI in Sweden, for allowing me to publish it on Exploring Uncertainty.

-Iain


Devil in the Detail: Prologue

On my first real testing job I hadn’t a clue what I was doing, and no one around me had any testing experience either. This was when the web was in its infancy (I was still browsing with Lynx and a 2400 baud modem) and it didn’t even really occur to me that there might be books on the subject. 

I did the only sensible thing: I made it up as I went along. I can only assume that I’d subconsciously overheard people talking about testing at some point, because I did know one thing: testing needed scripts. I dutifully set about talking to the team, reviewing the design and writing a set of step-by-step scripts that covered the requirements. The results were not exactly outstanding, but I muddled through.

Over the next few years, I relied on scripts less: I found that they slowed me down, that I could test more productively without them. Much of the time I’d define lists of tests, or in some cases matrices describing combinations of data. These served me well. I was documenting at a level of detail from which someone with knowledge of the software could execute the tests. Whenever I had to hand over test cases, I’d add some supplementary information and sit down and train the new testers on using the software.

On one project, one of my team told me “I prefer exploratory testing to scripts”. I wasn’t really sure what she meant. It wouldn’t be long before I found out: this was shortly after a deeply embarrassing interview had convinced me to start learning more about testing, and I was spending every spare moment reading anything I could find. 

My reaction when I first read about exploratory testing? I couldn’t understand what the fuss was about. This wasn’t anything different or special, this was just testing! Every time I’d ever played with an application to figure out how to test it, I’d been doing ET. Every time I’d prodded and poked at a program to figure out why it wasn’t quite doing what I’d expected, I’d been doing ET. Every time I’d noticed something new and tried out new tests, I’d been doing ET. I found it almost inconceivable that anyone in testing didn’t do this.

As I started managing larger testing teams, I grew to appreciate ET more. I found a surprising degree of script dependence and inflexibility amongst many testers. Encouraging the use of exploratory testing helped to correct that. 

What constantly surprises me is the range of arguments that I come across in favour of detailed “idiot scripts”, scripts that are detailed enough that any idiot could use them. This series of posts will take a look at some of those arguments.

A Trio of WTFs

I’ve recently been taken aback by some of the things that people have said or written about exploratory testing.

First there was this blog post that says of prep time: “And you would see the exploratory test team sitting idle, because there is no software to explore.” The post continues: “In a less dishonest comparison, I’m sure the exploratory team would find something useful to do”. Damn right, and I’m glad the author added that: there is a persistent myth that exploratory testing is unplanned testing. Whilst ET can perform admirably with little or no advance planning, it can also benefit when given the opportunity to plan. Every time that I’ve mobilized ET on a project where I’ve had test prep time, I’ve used that time to maximum effect: exploring the context, learning from my stakeholders just what they hoped to get out of the software, absorbing every scrap of information I could get hold of about what the software was meant to do, creating models to describe its behavior, figuring out lists of what to test and how, all with the goal of having a rich set of test ideas for when something executable arrived. Exploratory testing is not unplanned testing; it simply implies that you accept that what you learn will change the plan.

Next, there was my rereading of Copeland’s A Practitioner’s Guide to Software Test Design. Copeland claims that ET has no power to prevent bugs whereas scripted testing does; this is the notion of testing the test basis through test design. The former is true of any testing that has the testers showing up on the same day the code does, but, again, given the opportunity of prep time there is absolutely no reason why such prevention cannot be part of an exploratory approach. Asking questions, seeking clarity and building understanding all have a remarkable power to expose ambiguity and contradiction. ET does not preclude this. An exploratory approach can provide prevention.

Finally there was Rex Black’s recent webinar on test strategies (which I found interesting, but that’s a post for another day). Without explicitly referring to it as exploratory testing, Black positions ET as being one of several “reactive” test strategies. Now, the term reactive could be taken to have negative connotations, but let’s disregard the realm of management speak that defines the opposite of reactive as proactive and think in terms of plain English. And you know what? The opposites of reactive include unreactive and unresponsive. I can live with that: ET is reactive in exactly the same way that a corpse isn’t. Anyone else getting an image of moldy dead test scripts? Then there’s the idea that I will know when I am employing a “reactive strategy” because I will be applying my experience rather than a testing technique such as domain testing. When did techniques and experience become mutually exclusive? It seems that whenever I’ve applied formal testing techniques during a test session I wasn’t doing ET at all…silly me. An exploratory approach may be reactive (and that is a good thing), but it does not preclude any test technique.

Testing Backwards

One of my favourite projects started off by testing backwards.

The project in question involved taking software used by one customer and customizing it for use by another. First we would define which of the existing features would be preserved, which would be removed, and which would be modified. Unfortunately, none of the original development team was available, nor were there any existing models, requirements or design documents. Our starting point: the source code and a little bit of domain knowledge. This was hardly a basis for having a meaningful conversation with the customer: we needed to reverse engineer the software before we could start to change it.

Testing proved to be a big part of the solution to this problem. As strange as it might seem, this project didn’t just end with testing, it started with testing.

When you test forwards you use a model. This might be a set of requirements, it might be a design, or it might be your expectations based on experience or conversations with stakeholders. This model allows you to make predictions as to how the software will behave under certain conditions. You then execute a test with those conditions and verify that it behaves as predicted.
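Purely by way of illustration, a forwards test might look like the little Python sketch below. The discount rule, the discount_for function and the pytest-style assertion are all hypothetical stand-ins, not anything from the project described here:

    # Testing forwards: a model (here, an imagined requirement that orders of
    # 100 units or more get a 10% discount) predicts the outcome, and the test
    # verifies that prediction against the software's actual behaviour.

    def discount_for(quantity: int) -> float:
        """Hypothetical stand-in for the system under test."""
        return 0.10 if quantity >= 100 else 0.0

    def test_bulk_orders_get_ten_percent_discount():
        expected = 0.10                          # prediction derived from the model
        assert discount_for(150) == expected     # execution and verification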

In contrast, testing backwards is concerned with deriving such a model. You investigate how the software behaves under a range of conditions, gradually building an understanding of why it behaves the way it does. This is reverse engineering, determining rules from an existing system.
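Again only as a sketch, testing backwards looks less like asserting predictions and more like probing: sweep a range of inputs, record what the software actually does, and note where its behaviour changes. The legacy_discount stand-in and the boundaries it happens to contain are invented for illustration:

    # Testing backwards: no predicted values, only observation. We sweep a range
    # of inputs, record the behaviour, and report the points where it changes:
    # raw material for a model of the rules the software actually implements.

    def legacy_discount(quantity: int) -> float:
        """Hypothetical stand-in for an undocumented system under test."""
        if quantity >= 500:
            return 0.15
        if quantity >= 100:
            return 0.10
        return 0.0

    def probe(system_under_test, inputs):
        observations = {}
        previous = object()   # sentinel so the first observation is always reported
        for value in inputs:
            result = system_under_test(value)
            observations[value] = result
            if result != previous:
                print(f"behaviour changes at quantity={value}: discount={result}")
                previous = result
        return observations

    if __name__ == "__main__":
        probe(legacy_discount, range(0, 601, 10))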

You might be forgiven for assuming that testing backwards is only concerned with determining how the software works rather than with assessing it and finding bugs; after all, you need some kind of model of how it should behave in order to determine whether it fails to do so. This is not the case: the model of the software’s behaviour is not the only model in play. When you test, you bring many models to bear (a short sketch after the list below shows each of these expressed as a check):

  • Models that describe generally undesirable behaviour: for example, unmanaged exceptions shouldn’t bubble up to the UI as user-unfriendly stack traces.
  • Models based on general expectations: for example, calculations should comply with mathematical rules and things that are summed should add up.
  • Models based on domain experience: for example, an order should not be processed if payment is refused.
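As a rough illustration, each of these models can be expressed as a check even when there is no specification to test against. The process_order function, its return shape and the use of pytest are assumptions made purely for the sake of the sketch:

    # Three checks, one per kind of model above, applied to a hypothetical
    # process_order function with no specification in sight.
    import pytest

    def process_order(items, payment_authorised=True):
        """Hypothetical stand-in for the system under test."""
        if not payment_authorised:
            return {"processed": False, "total": 0.0}
        return {"processed": True, "total": sum(price for _, price in items)}

    def test_no_unhandled_exception_reaches_the_caller():
        # Generally undesirable behaviour: exceptions shouldn't escape to the user.
        try:
            process_order([])
        except Exception as exc:
            pytest.fail(f"unhandled exception leaked to the caller: {exc!r}")

    def test_things_that_are_summed_add_up():
        # General expectation: totals should obey arithmetic.
        items = [("widget", 10.0), ("gadget", 2.5)]
        assert process_order(items)["total"] == 12.5

    def test_refused_payment_blocks_processing():
        # Domain experience: no processing when payment is refused.
        result = process_order([("widget", 10.0)], payment_authorised=False)
        assert result["processed"] is False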

When I first started on this project, I imagined that by testing backwards I was actually doing something unusual, but it slowly dawned on me that I had been doing this on every project I’d ever tested on:

  • Every time that I had started a new project and played with the software to figure out what it did, I’d been testing backwards.
  • Every time I’d refined tests to account for implementation details not apparent from the specification, I’d been testing backwards.
  • Every time I’d found a bug and prodded and poked so as to better understand what the software was doing, I’d been testing backwards.

I was struck by the power of testing backwards: by seeking to understand what the software did rather than simply measuring its conformance with expected results, we are better able to learn about the software. By developing the skills required to test backwards, we are better able to investigate possible issues. By freeing ourselves of the restrictions of a single model, a blinkered view that conformance to requirements alone equates to quality, we are better able to evaluate software in terms of value.

Would testing backwards serve your mission?