Tester Interdependence

“Send three and four pence, we’re going to a dance.” - apocryphal message received from the front lines during WWI.

Tester independence is commonly cited as an essential principle in software testing. It serves two purposes:

  • Avoid conflicts of interest. We all have our own interests. In business, that can equate to a lot of money. If I have software developed for me by a vendor, perhaps I don’t want to rely on them telling me that it works just fine. Perhaps I want it independently tested. 
  • Avoid author bias. When you develop something, you are down in the detail. It can be difficult to see the wood for the trees. You can stare blindly at a problem that is flashing neon red right in front of you. Development is hard, and everyone can benefit from an extra set of eyes.

The former requires some degree of management separation, so that the focus of testing is not biased against the interests of the customer and so that the findings of testing have an appropriate forum. The latter can be served simply by having someone other than the developer do the testing. Different contexts will therefore require different degrees of independence.

This principle is often applied with little regard for context. For example, when joining a team as the new test manager, I set about meeting all the testers, trying to form a picture of the team. One conversation stood out:

Me: “So how regularly do you speak to the developers?”

Tester: “Er, well, hmm, we’re not actually allowed to talk to the developers…”

This was tester independence run riot. Our context was supporting in-house development projects, yet someone had taken the independence mantra to the extreme and set up the team as a black box:

  • Inputs: requirements, executable code
  • Outputs: test reports, bug reports

This kind of set-up might be appropriate for contract development or for accepting third-party software, but for this team it was just plain nuts. Why? Because there was no conflict of interest; there was a common interest: build working software. The degree of independence practiced was overkill.

What’s the problem? Surely if independence is good, then more independence is better? Like anything, tester independence involves trade-offs. In many industries, a common approach to avoiding conflicts of interest is to erect “Chinese walls”: information barriers between parties. Instead of flowing freely, information is directed through formalized channels. This creates the opportunity for error and ambiguity, and removes the informal channels that might allow for clarification.

Rather than slavish adherence to the principle of tester independence, I like to use a balancing principle: tester interdependence. This principle is simple: Projects need the information testers can provide. To provide the right information, testers need information from the project. 

Why is this important? As testers, we trade information; it’s our bread and butter. Starve us of information and we’ll test the wrong things, in the wrong ways, at the wrong times: the information we provide will be at best irrelevant, at worst damaging.

This topic came up last year whilst I was discussing Agile with a colleague. He put forward the view that the “principle” of tester independence is violated when the developer and tester work too closely together: that this eliminates tester objectivity and thereby reduces the tester’s value in combating author bias. This argument is based on the belief that a tester is objective. But testers are subjects, not objects. When did our job titles bestow on us the superhuman power of objectivity? I prefer to acknowledge that all the stakeholders on a project are subjective, each with their own perceptions of what is important and their own understanding of what is being implemented.

We can model these different perspectives in a Venn diagram. Let’s keep it simple and consider only three stakeholders: a user, a developer and a tester.

[Venn diagram: the overlapping but differing perspectives of the user, the developer and the tester]

Whilst the three parties agree on some things (the intersection of all three sets), there are significant differences in perspective. This gives rise to the possibility of bugs, false positives, missing features, unneeded features and irrelevant testing. Practicing excessive tester independence helps to perpetuate this situation.

In contrast, if we apply the principle of tester interdependence (and by extension the interdependence of all stakeholders), then we apply practices that narrow the gap. Reviews are a good example: I’m not talking about sterile document review and sign-off here, but interactive conversations aimed at resolving misunderstanding and ambiguity. The result might look something like this:

[Venn diagram: the same three perspectives after review, with substantially greater overlap]

Through applying the principle of interdependence we can converge the perspectives of project stakeholders and, by doing so, reduce the opportunity for error.

By all means give thought to how independent you need your testers to be, but balance that with interdependence instead of simply locking them away. Chinese walls breed Chinese whispers: “Send reinforcements, we’re going to advance.”

2 thoughts on “Tester Interdependence”

  1. I worked in a commercial software team where all communications between dev, test, product management, and support had to first be sent up to the VP level, where the respective VPs (at the remote corporate HQ) would discuss them before sending them back down to the people in our building they were intended for. We weren’t allowed to talk to sales at all. As you might imagine, very little got communicated this way, and I was constantly criticized for actually going to someone’s cubicle to ask a question. They are not there any more.
