Exploring Uncertainty

This evening I read an interesting post on QAHiccupps: A Gradual Decline Into Disorder. In addition to introducing the idea of entropy in testing, it links to a number of other posts on the subject. I’ll touch on two of those here.

In information theory, entropy is a measure of uncertainty. As the inspiration for the name of this blog, it is a subject I’ve been meaning to write about.
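To make that concrete (a minimal sketch of my own, not something from the posts above): Shannon entropy is greatest when every outcome is equally likely, which is when we know the least, and it falls to zero when the outcome is certain. The small helper below, a hypothetical shannon_entropy function, just illustrates that relationship.

```python
import math

def shannon_entropy(probabilities):
    """Entropy in bits of a discrete distribution: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Maximum uncertainty: four equally likely outcomes -> 2.0 bits
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))

# Less uncertainty: one outcome dominates -> ~0.29 bits
print(shannon_entropy([0.95, 0.05]))

# No uncertainty: the outcome is certain -> 0.0 bits
print(shannon_entropy([1.0]))
```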

In most instances, the primary role of testing is to reduce uncertainty. When software is initially created, little is known about how it will behave: there is uncertainty. To address this, we create models that describe how we believe the software will work, and how it might fail. We then conduct tests that either support or refute the assertions of these models. In the latter case we have obtained information that can be used to improve our models, leading us to further experimentation. Iterations of this process give us better models, better approximations of the behavior of the software being tested.

In the first of those posts, Morley suggests that such modeling increases uncertainty because we have reduced (or abstracted) the system under test to a model, throwing away information in the process. I disagree: when we start out with virgin code, we have little information to throw away. Further, all we ever really have are models, not perfect information. Even after many iterations of testing, our ideas about how the system functions are not absolute truth (courtesy of the impossibility of complete testing); they are just better models than we started out with.

These are the same principles that dominate the philosophy of science; indeed, the process of refining models (hypotheses) through testing (experimentation) is something that science and testing have in common.

There is one important difference, however: in science, the observations of a scientist do not change the laws of nature, the ways in which the universe behaves. In software testing, the observations of a tester frequently change the way in which the software behaves – through fixes to bugs. As Whittaker points out, testers increase uncertainty through this mechanism: any change shifts the software into a new and unexplored state. The old models may no longer apply, and bugs beget bugs.

If it is our role to reduce uncertainty, then we need to consider both sides of this equation: how can we approach testing so that we reduce uncertainty more than we increase it?

Many thanks to James at QAHiccupps for triggering this post.