A few months ago, I met a colleague from a distant part of the organization.
“My biggest problem,” he said, “is a lack of consistency in test analysis. No two of my testers do it the same way.”
“Really?” I enquired, “How do you mean?”
“Well, they document things differently, but that isn’t the problem…given the same specification they’ll come up with different tests. They approach things differently on each project. I need them to be doing the same things every time: I need consistency, I need consistent processes and methods.”
“Well, consistency can be important,” I replied. “Will consistently crap work for you?”
“No, of course not. We need to be doing good testing.”
“So a consistent approach might be less important than consistent results?”
“Er, maybe. Doesn’t one imply the other?”
And therein lies the problem. With simple systems it’s reasonable to assume that you will always get the same output for a given set of inputs. Think of a domestic light: you flick the switch and the light comes on. Flick it again and the light goes off. Even adding a little complexity by means of another staircase switch doesn’t add much in the way of complication. Now we don’t know whether switch-up or switch-down equates to on or off, but flicking the switch still toggles the state of the light.
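The two-switch circuit really is that simple: staircase wiring behaves like exclusive-or, so flipping either switch always toggles the light. A minimal sketch (the function name is mine, for illustration):

```python
def light_on(switch_a: bool, switch_b: bool) -> bool:
    """Two-way (staircase) wiring: the light is on when the two
    switches disagree, so flipping either one toggles the state."""
    return switch_a != switch_b

# Same inputs, same output, every time -- and flipping either
# switch toggles the light regardless of the other's position.
print(light_on(True, False))  # True
print(light_on(True, True))   # False
```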
If only software development were like this! Such simplicity is beguiling, and many of our mental models are rooted in the assumption that this is how the world works. It doesn’t, of course.
Unfortunately, software projects are nothing like this. Imagine a lighting circuit with many, many switches. Imagine you don’t know how many switches there are, or indeed where they are. Imagine that you have no knowledge of the starting state of the circuit, and therefore which switches need flicking to get the lights on. Imagine that some of the switches are analogue dimmers rather than their binary cousins. Imagine that there are other actors who are also flicking switches or changing the configuration of the circuit. Imagine that those involved have different ideas as to which light should come on, or how bright it should be: the streetlight folk and the reading light people just can’t agree, and nobody’s talking to team ambience. Now imagine that you’ve been asked to assess the quality of the lighting once someone has figured out how to turn the damned thing on.
Now subject this scenario to a consistent set of inputs. Try to form some repeatable algorithm to reliably “solve” this problem. You can’t: the problem is intractable. This feels a little more like the testing I know, though still a hopeless simplification.
“That,” I explained, “is why we need to emphasize skill over Method. Software projects are complex systems, and repeating the same inputs doesn’t work very well. Mechanical, repeatable processes are no guarantee of success. We need testers who are able to navigate that maze and figure out what needs doing.”
I made a few notes, wrote a few tweets, and got back to business. Thoughts about consistency, however, never strayed far away. Then, earlier this week, they paid me another visit.
I’ve been moonlighting as a lecturer at Dalhousie University. At the start of the course I set the students an assignment (based on one of Kaner’s from BBST Test Design) to analyze a specification and identify a list of test ideas. This week’s lecture was on test design: for the first half of the lecture we discussed some of the better known, and more mechanistic, test design techniques. Then I asked the class to break into groups, compare and contrast their test ideas, and present on any similarities and differences.
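To see why such techniques promise consistency, consider boundary-value analysis, one mechanistic technique of the kind we covered. A minimal sketch (the age range is an invented example): given the same specified range, it derives the same inputs for every tester, no imagination required.

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Boundary-value analysis for a valid range [lo, hi]:
    test just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Given the same specification ("age must be between 18 and 65"),
# every tester applying the technique mechanically derives the
# same list of test inputs.
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```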
“We had a few ideas in common,” said the first student, “and a load of differences.” He went on to list them.
“You came up with different ideas?” I asked, feigning horror.
“How can that be? I’ve just spent a good part of the last 3 hours describing a set of techniques that should guarantee the same set of tests. What happened?”
“Well, um, we came up with a lot of ideas that we wouldn’t have using those techniques.”
“Good, I was hoping as much.”
“Tell me why you think there were differences. Why wouldn’t these techniques have suggested those ideas?” I asked.
“Well, I guess we used our imaginations. And we’ve all got different backgrounds, so we all thought of different things.”
“Exactly: the techniques we’ve discussed are mechanistic. They might give you consistency, but they remove imagination, and your own experiences, from the test design. Now, tell me, which would you prefer: testing based on your original list of ideas, or based on the ideas of the group as a whole? Which do you think would provide better information to the project?”
“The group’s ideas” he said without hesitation.
“Well, some of us focused on specific things, so the overall list is better…better-rounded. We’d miss lots of stuff with our own.”
“So this inconsistency between individuals, is it a good thing or a bad thing?”
“I think it’s good. Those techniques might be useful for some problems, but by sharing our different ideas, I think we can test better.”
“Thank you,” I said: mission accomplished.