I’ve written previously about the role of the tester in reducing uncertainty on software development projects: how we model and observe, building our knowledge and providing information.
It might be tempting to imagine the tester as a perfect observer, standing aloof, measuring and judging. Sadly, such an image is delusional: uncertainty permeates everything we do, not just the software we seek to understand. We don’t stand outside the room looking in.
What are the sources of uncertainty in testing?
1) Much uncertainty is inherent in the testing challenge: the impossibility of complete testing guarantees that we can never have full knowledge of the software we test, nor is it conceivable that any set of test techniques will ever predict with certainty where all the bugs will be found.
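To make the impossibility concrete, here is a minimal back-of-the-envelope sketch (the function and figures are illustrative, not from any real project): even a trivial two-argument function has an input space far too large to test exhaustively.

```python
# Illustrative only: sizing the input space of a toy function.

def add32(a: int, b: int) -> int:
    """A toy function taking two 32-bit unsigned integers."""
    return (a + b) & 0xFFFFFFFF

# Distinct input pairs for two 32-bit arguments: 2^32 * 2^32 = 2^64.
input_space = (2 ** 32) ** 2

# At an (optimistic, assumed) rate of a billion tests per second:
seconds = input_space / 1e9
years = seconds / (60 * 60 * 24 * 365)

print(f"{input_space} input pairs, roughly {years:.0f} years to cover")
```

And this counts only the input values: it says nothing of timing, state, environment, or the sequencing concerns discussed below.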
2) We are subject to model uncertainty. As testers we construct models of how the software should work, and of how it could fail. These models are invariably flawed:
- Consider oracles, our models of how the software should behave: every oracle is heuristic, that is to say useful but imperfect. If that were not the case, if we had complete and true oracles, then those oracles would be indistinguishable from the desired state of the software under test: why would we need that software? Further, quality is subjective, relating to the needs and values of people: as such there can be no absolute and objective oracle.1
- Consider bug hypotheses, our models that describe potential failures: if these were not flawed, then we could perfectly predict each bug without running a single test.
- Consider tests themselves: each test is a model that describes how the software will behave under certain conditions. Unfortunately the range of conditions is sufficiently vast that it is easy to miss conditions that prove to be critical. The range of resulting behaviours presents a similar challenge.2
- Even models that describe testing itself are flawed, some more than others.
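The heuristic nature of oracles can be sketched in code. The example below (names are mine, purely illustrative) is a partial oracle for a sort routine: it checks properties any correct output must have, without fully specifying that output, and so can pass results that are still wrong in ways it does not model.

```python
from collections import Counter

def sorted_oracle(inputs, outputs):
    """A heuristic (partial) oracle for a sort routine.

    Checks two necessary properties of a correct sort:
      1. the output is in non-decreasing order;
      2. the output is a permutation of the input.

    Passing both does not prove correctness in every respect that
    might matter (stability, performance, ...): useful but imperfect.
    """
    ordered = all(a <= b for a, b in zip(outputs, outputs[1:]))
    permutation = Counter(inputs) == Counter(outputs)
    return ordered and permutation

print(sorted_oracle([3, 1, 2], [1, 2, 3]))  # True: consistent with a correct sort
print(sorted_oracle([3, 1, 2], [1, 2, 2]))  # False: not a permutation of the input
```

Note that a *complete* oracle here would amount to a trusted sort implementation, which is exactly the point made above: a perfect oracle would make the software under test redundant.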
3) Our observations are subject to measurement uncertainty: our interactions with the software influence how it behaves. This is not limited to our selection of conditions, nor even to Heisenbugs and the probe effect of resource monitors: the very rate, frequency and sequence of our actions can drive different behaviours in software (consider race conditions and resource leaks).
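The claim that the sequence of our actions can drive different behaviours is easy to demonstrate deterministically. The sketch below (entirely illustrative) replays two unsynchronised increments of a shared counter under two different interleavings of their read and write steps: the serial schedule gives the expected result, while the interleaved one loses an update.

```python
def run(schedule):
    """Replay two unsynchronised increments of a shared counter.

    `schedule` lists which thread (0 or 1) takes its next step;
    each thread performs a read, then writes back (read value + 1).
    """
    counter = 0
    local = [None, None]   # each thread's value as read
    step = [0, 0]          # per-thread progress: 0 = read next, 1 = write next
    for t in schedule:
        if step[t] == 0:
            local[t] = counter       # read the shared counter
        else:
            counter = local[t] + 1   # write back the stale value + 1
        step[t] += 1
    return counter

print(run([0, 0, 1, 1]))  # serial schedule: 2, no interference
print(run([0, 1, 0, 1]))  # interleaved schedule: 1, a lost update
```

A real race is the same mechanism with the schedule chosen by the machine rather than by us, which is precisely why the rate and timing of a tester's actions can surface, or hide, such bugs.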
4) We are subject to human error. Testers are humans too: our perceptions are limited, we can only reliably focus on so many things at once, and we have unavoidable psychological biases that influence our choice of tests, the behaviours we observe, and how we interpret our observations.
Much of this uncertainty is epistemic, that is to say it stems from a lack of knowledge and can therefore be reduced. If we are to reduce the uncertainty associated with software, we would be wise to understand the role that uncertainty plays in our own work, and seek ways in which we can reduce that too.