
L’Aquila

On April 6th 2009, an earthquake devastated L’Aquila, a town in central Italy. It killed more than three hundred people, injured thousands and displaced tens of thousands. This tragedy continues to have repercussions: last month, six scientists and a government official were convicted of manslaughter on the grounds that they failed to properly represent the risks of such an event.

These convictions have drawn considerable media attention, much of it contradictory, some of it misleading. One might be forgiven for believing the absurd: that these men have been found guilty of failing to predict an earthquake. The reality seems to be more complex. However, my intention with this post is simply to draw attention to lessons that this case may hold for the tester. This is neither a detailed account of the facts of the case, nor is it an opinion piece concerning the rights and wrongs of the Italian legal system. If you are seeking such things, then please Google your heart away: you’ll find plenty of accounts and opinion online.

The first thing to strike me about this case is the differing opinions as to the types of information scientists should be providing about earthquake risks. In the week prior to the earthquake, these scientists met to assess the probability of an earthquake taking place, given an unusually high level of seismic activity in the preceding weeks. The scientists may have believed that providing such an assessment was the extent of their role: according to an official press release from Italy’s Department of Civil Protection, the purpose of this meeting was to provide local citizens “with all the information available to the scientific community about the seismic activity of recent weeks”. However, a key argument in the prosecution’s case was that the scientists had a legal obligation to provide a risk assessment that took into consideration factors such as population density, the age and structure of local housing, and so on. There is a world of difference between assessing the probability of an event and conducting a risk assessment.

Does this sound familiar? Have YOU ever run into a situation where testers and their customers have conflicting views as to the role of the tester? I see these things playing out pretty much every day. Testers providing “sign off” vs. “assuring quality” vs. “providing information”: these are well known and well publicized debates that are not going to go away any time soon. Ignoring these issues and allowing such expectation gaps to persist is to court disaster: they erode and destroy relationships, and it is the tester who loses when unable to live up to the expectations of those they serve. Nor is toeing the line and trying to keep everyone happy a solution. Often testers must deliver difficult messages, and it is impossible to do so whilst playing popularity games. Imagine being the tester who has been cowed into “signing off” on a product that ends up killing someone or causing a financial loss. If you do this, then you deserve what is coming to you.

Now, whilst I strongly subscribe to the view that our role is as information providers, I have noticed something disturbing lately: testers who seem to feel that their responsibility ends with adding smiling/frowning faces to a dashboard or filing bug reports (“we reported the bug, so now it’s not our problem”). This is wholly inadequate. If our role is to provide information, then handing over a report is not enough. Providing information implies not only acquiring information, but communicating it effectively. A test plan is not communication. A bug report is not communication. Even a conversation is not communication. Only when the information borne by such artifacts is absorbed and understood has communication taken place. This is neurons, not paperwork. I recently had a conversation with a tester who objected to articulating the need to address automation-related technical debt in a way that would be understood by project executives. Perhaps he thought this was simply self-evident. Perhaps the requests of testers should simply be accepted? Perhaps the language of testers is easily understood by executives? I disagree on all counts: testers need to be master communicators. We need to learn how to adapt our information to different media, but most importantly we need to learn to tailor our message to our many different audiences. Facts rarely speak for themselves; we need to give them a voice.

Another aspect of this case that I find interesting is that Bernardo De Bernardinis, the government official who has been convicted, may have had a different agenda from the scientists: to reassure the public. He had motivation to do so: not only had a local resident been making widespread, unofficial earthquake predictions, but in 1985 Giuseppe Zamberletti, a previous head of the Department of Civil Protection, had been investigated for causing panic after ordering several evacuations in Tuscany that proved, in hindsight, unnecessary. Before the above meeting took place, he told journalists that everything was “normal” and that there was “no danger”. This advice had fatal results: many of the local inhabitants abandoned their traditional earthquake precautions in the belief that they were safe.

This is the kind of reassurance that science cannot, and that scientists should not, give. It is the same with testing. Have you ever worked for a project manager, product owner or customer who simply wanted to know that everything would be okay? Of course you have; this is human nature: we crave certainty. Unfortunately, certainty is not a service we can provide. Not if we want to avoid misleading our customers. Not if we value integrity. We are not in the confidence business: we are no more Quality Reassurance than we are Quality Assurance.

Science and testing are often misunderstood, and the customers of both have needs and expectations that cannot be fulfilled by either. Scientists need to do a better job of communicating the limits of what they can do. Testers need to do the same. In the L’Aquila case, the prosecution stated that the accused provided “incomplete, imprecise, and contradictory information”. Often information IS incomplete, imprecise and contradictory. Scientists and testers alike would be well advised not to hide that fact, but to frequently draw attention to it.

“Done”

“Done” causes me no end of trouble. I’ve been struggling with this for years, with questions that seem to make little sense, questions that may be familiar to you: “when will testing be done?”, “is testing done yet?”

I recently attended a presentation in which it was asserted that “testing is the culprit of many schedule delays and cost overruns” (and here are the best practices that you must apply in order to keep testing in check). This is another manifestation of the “done” problem, and it is utterly dysfunctional. I’ve worked on a number of projects that took this view, projects where done was the only measure of success and testing had to be done by a given date or else was considered a failure. Of course testers were not alone in this: developers had similar dates to meet but somehow never seemed to have any problem, well, as long as you don’t count the huge swathes of missing functionality and the bugs that Mr. Magoo would have had a hard time missing. Somehow, this always seemed to reflect badly on testing’s ability to be done. One of Jerry Weinberg’s many wonderful phrases (and one that permeates the Quality Software Management series) is beautifully applicable: “it’s not a crisis; it’s the end of an illusion.”

For some time, I tried to counter this with the argument that testing would never be done (in the sense of having tested everything), but had mixed results. Latterly, I’ve come to the conclusion that the “done” problem turns on a fundamental misunderstanding as to the nature of testing. For a project manager, who sees the world through a lens of Gantt charts and tasks to be completed, it makes complete sense to be concerned with when testing will be done. Unfortunately, testing is not like that. Unlike developers, we are not creating something tangible: in this way the nature of our work is far more akin to that of the project manager, and asking when testing will be done makes about as much sense as asking the PM when their management of the project will be done. Testing simply isn’t a set of tasks that need to be completed before the end of the project. It provides information that helps others to make decisions. Period.

When asked when our testing will be done, perhaps we should answer with a question of our own: when would you like us to stop investigating quality? The stopping heuristics suggested by Michael Bolton (et al.) provide a great starting point for the conversation that would inevitably follow.