Spec Checking and Bug Blindness

Testing is often reduced to the act of modelling specified behaviours as expected results and checking actual software behaviours against the model.

This reduction trivializes the complexity of the testing problem, and reliance on such an approach is both flawed and dangerous.

In “Software Testing: A Craftsman’s Approach”, Paul Jorgensen discusses some questions about testing through the use of Venn diagrams. This post will use a modified version of those diagrams to explore the kinds of issues that we miss if we rely solely on checking against specifications.

Rather than depicting, as Jorgensen does, the relationship between specified, implemented and tested behaviours, the diagrams used in this post recognize a distinction between the behaviours that are desired and those that are specified. Given a universe of all possible system behaviours, and three overlapping sets containing those behaviours that are needed, specified or implemented, we might conceive of the testing problem like this:

Those behaviours that lie within the intersection of all three sets (region 5 on the above diagram) represent those behaviours that are needed, specified and implemented.

From a bug detection viewpoint, the remaining regions of these sets (i.e. 1 to 4, 6 and 7) are more interesting in that they can be grouped in such a way as to describe four possible classes of bug. Let’s take a look at each class in turn:
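Before looking at each class, it may help to see the three sets expressed directly in code. This is a minimal sketch using Python sets; the behaviour names are invented purely for illustration and stand in for whatever behaviours a real system might have:

```python
# Three hypothetical sets of behaviours. The labels are made up
# for illustration -- any real system would have its own.
needed = {"login", "search", "export", "undo"}
specified = {"login", "search", "export", "autosave"}
implemented = {"login", "search", "autosave", "easter_egg"}

# Region 5: behaviours that are needed, specified AND implemented --
# the intersection of all three sets.
region_5 = needed & specified & implemented
print(region_5)  # {'login', 'search'}
```

Everything outside that intersection, but still inside one or more of the sets, belongs to one of the bug classes discussed below.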

Unimplemented Specifications

The highlighted section of this diagram equates to behaviours that were specified but not implemented: either the required features were not implemented or they were implemented incorrectly such that the intended behaviour does not occur.

It is this kind of bug that specification-based checking is geared towards.

Unfulfilled Needs

In this case, the highlighted section of this diagram equates to behaviours that are needed but have not been implemented. Note that these bug classes are not mutually exclusive: a bug in region 2 can be categorized as both an unfulfilled need and an unimplemented specification bug.

This kind of bug is far more insidious than those belonging to the previous class: whilst we may catch some such bugs (those in region 2) by checking against specifications, we will miss those that relate to needs that were either not articulated or not captured when the software was specified. Finding this kind of bug requires testing that is sensitive not just to specified behaviour, but to the underlying needs of the customer.

Important: whilst some will argue that such behaviours are “out of scope” for a given project by virtue of not having been specified, building software that does not fulfill the needs of its customer is a fast route to failure.

Unexpected Behaviour

With this class, the tester’s life starts to get interesting. The highlighted section of this diagram equates to behaviours that have been implemented but were not specified: this is the realm of the unexpected bug. Occasionally, unexpected behaviour may turn into an unexpected boon (region 4): behaviour that was not specified but is actually desired (perhaps the developer had an insight into real needs and implemented something without it being in the spec).

Other than through the intervention of dumb luck, specification-based checking will miss many bugs in this class. Some will be apparent where an unspecified behaviour is substituted for a specified one; however, this class also includes pretty much anything that could fail. Testing for this kind of bug requires the creativity to imagine possible failures and the skill to craft tests that will determine whether or not they can occur.

Undesired Behaviour

The highlighted section of this diagram equates to behaviours that have been implemented but were neither needed nor desired: for example, “gold-plated” features or behaviours that were specified incorrectly.

Much like the previous class, specification-based checking will miss many of these bugs. It will also be completely blind to behaviours that were specified but are not desired. Like the previous class, testing for this kind of bug requires imagination and skill. It also requires an understanding of customer needs that is sufficient to identify potential issues, regardless of whether those issues relate to specified behaviour.


Specification-based checking is a good fit for only one of the four classes of bug discussed here. In the other three cases, the power of such an approach is seriously limited. Whilst such an approach may be necessary, it is insufficient if the testing mission is the discovery of bugs: an excessive reliance on it will inevitably result in important bugs being missed.
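The four classes map cleanly onto set differences, which makes the asymmetry easy to see: only the first class is reachable from the specification alone. A minimal sketch, again with invented behaviour labels:

```python
# The same three hypothetical sets as before; labels are illustrative.
needed = {"login", "search", "export", "undo"}
specified = {"login", "search", "export", "autosave"}
implemented = {"login", "search", "autosave", "easter_egg"}

# Unimplemented specifications: specified but not implemented.
# This is the only class that checking against the spec targets directly.
unimplemented_specs = specified - implemented

# Unfulfilled needs: needed but not implemented (spec may or may not cover them).
unfulfilled_needs = needed - implemented

# Unexpected behaviour: implemented but never specified.
unexpected = implemented - specified

# Undesired behaviour: implemented but not needed -- whether specified or not.
undesired = implemented - needed

print(unimplemented_specs)  # {'export'}
print(unfulfilled_needs)    # {'export', 'undo'}
print(unexpected)           # {'easter_egg'}
print(undesired)            # {'autosave', 'easter_egg'}
```

Note that the last three expressions cannot be evaluated from the specification alone: they require knowledge of what is needed and of what the software actually does, which is exactly the knowledge an investigative tester brings.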

Testing for many types of bugs requires a more investigative approach: an approach that brings the skill, creativity and knowledge of the tester into play.

9 thoughts on “Spec Checking and Bug Blindness”

  1. Really enjoyed this post Iain, thanks for taking the time to write it.

    It has helped reframe my understanding of bugs & given me a language to communicate bugs to different members of both the Development & Customer Teams.


    1. Duncan. Thanks, and you are very welcome. My primary motivation for blogging has rapidly become modelling these things so as to improve my own understanding and ability to communicate them. That others find these posts useful is an added benefit.


  2. I’ve been struggling to articulate some of my concerns with “seat of the pants” user acceptance testing by end users, and why work flow and confirmatory tests aren’t enough.

    I’m not usually a fan of venn diagrams, but in this case I think that they helped to explain the different areas of focus for testing quite nicely. UAT seems to be really good at catching unfulfilled needs or undesired behavior in implemented needs. It often seems to miss everything else: either things aren’t tested for, or aren’t recognized as ‘issues’ that can be filed as bugs.

    This post has given me a lot to consider about my approach to UAT and how to derive value from it.

    1. Thanks, glad it was useful. UAT, or user testing in general, /can/ be good for catching unfulfilled needs or undesired behavior, though I often see it being constrained by (often unjustified) concerns about scope and time to simply looking for “unimplemented specifications” – behavior that was specified but is not present. This can be a wasted opportunity to use the insights that users can bring.

  3. This was exactly the kick in the butt that I needed. I’m off to my project manager right away to show him why this approach is no good, and that I need time for some good old-fashioned exploratory testing. Thanks for good input and inspiration!


    1. Henke, thanks, but tread softly. Grandstanding has its place, but sometimes other strategies are more effective.

