In domain testing, we divide possible tests into classes that we deem to be equivalent – i.e. groups of tests that we expect will cause the software to do the same thing, or give us a similar result.
However, equivalence is a matter of perspective:
Variable Perspective. Consider testing the login function of an application. Logging in with user “Jane” and a valid password is equivalent to logging in with user “John” and a valid password – these tests both give us the same result (the user logs in successfully). This only holds from the perspective of the variables userID, password and loginAllowed. If other variables (such as time limits on access, user role etc.) are also considered, these tests may give very different results. We need to be careful to understand what variables we are dealing with, and what results we are observing.
Failure Perspective. Not all equivalent tests are created equal; some have more power in relation to potential failures than others. Boundary values are a classic example. Imagine a numeric variable that can be set to a value from 0 to 9. Tests with values 0 to 9 all exist within an equivalence class, but values 0 and 9 have more power in relation to boundary failures than values 1 to 8. This non-equivalence in relation to failures is the essence of best representative testing, and we can leverage it to select the tests that give us the best chance of exposing the failures we are looking for.
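To make this concrete, here is a minimal Java sketch (the RangeValidator class and its bug are hypothetical, invented for illustration): a validator for the 0-to-9 range in which “less than” was used where “less than or equal” was intended. Any mid-range test passes and hides the bug; only the boundary value 9 exposes it.

```java
// Hypothetical validator for a field that accepts whole numbers 0..9.
// The bug: "<" was used instead of "<=" on the upper bound, so the
// valid boundary value 9 is wrongly rejected.
public class RangeValidator {
    static boolean isValid(int value) {
        return value >= 0 && value < 9; // BUG: should be value <= 9
    }

    public static void main(String[] args) {
        System.out.println(isValid(5)); // true  - mid-range test; bug stays hidden
        System.out.println(isValid(0)); // true  - lower boundary behaves correctly
        System.out.println(isValid(9)); // false - upper boundary exposes the bug
    }
}
```

All values 0 to 9 sit in the same equivalence class, yet only a test at the boundary reveals the failure – the “power” difference described above.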
When performing a domain analysis, it is useful to remember that tests that are equivalent from one perspective may be non-equivalent from another.
One of the techniques we often apply in testing is boundary value analysis. The reason for using this technique is simple: things often go wrong on or near boundaries, for example because an incorrect operator was used (e.g. “less than” instead of “less than or equal”).
We generally determine where and what the boundaries are from reading the spec; these are explicit boundaries, ones that were designed into the solution.
There is another class of boundary however: the implicit boundary. These exist because of the way that computers store and represent numbers.
In the “real” world, there is no limit to the value of a number; one could have any value from negative to positive infinity. But computers are limited by memory, and the number of bits used to store numbers.
These introduce implicit boundaries, minimum and maximum values that variables can hold.
Here are the boundaries that correspond with 16-bit, 32-bit and 64-bit signed integers (the short, int and long data types in Java). Each boundary is shown as a pair of test values that straddle it – the smaller value first, so for a lower boundary that is the first value outside the range followed by the minimum, and for an upper boundary the maximum followed by the first value outside the range:
- Java short, 16-bit signed
  - lower boundary: −32,769 | −32,768
  - upper boundary: 32,767 | 32,768
- Java int, 32-bit signed
  - lower boundary: −2,147,483,649 | −2,147,483,648
  - upper boundary: 2,147,483,647 | 2,147,483,648
- Java long, 64-bit signed
  - lower boundary: −9,223,372,036,854,775,809 | −9,223,372,036,854,775,808
  - upper boundary: 9,223,372,036,854,775,807 | 9,223,372,036,854,775,808
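In Java these limits are exposed as constants (Short.MAX_VALUE, Integer.MAX_VALUE, Long.MAX_VALUE), and arithmetic that crosses them does not fail – it silently wraps around. A short sketch:

```java
// Demonstrates the implicit boundaries of Java's signed integer types,
// and the silent wraparound that occurs when arithmetic crosses them.
public class ImplicitBoundaries {
    public static void main(String[] args) {
        short maxShort = Short.MAX_VALUE;         // 32767
        short wrapped  = (short) (maxShort + 1);  // wraps to -32768
        System.out.println(maxShort);             // 32767
        System.out.println(wrapped);              // -32768

        System.out.println(Integer.MAX_VALUE);     // 2147483647
        System.out.println(Integer.MAX_VALUE + 1); // wraps to -2147483648
        System.out.println(Long.MAX_VALUE);        // 9223372036854775807
    }
}
```

The wraparound is exactly why boundary-straddling test values are so effective: one step past the maximum does not produce an error, it produces a very wrong number.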
So how would you use this in testing? Much in the same way you’d use any explicit boundaries.
Let’s look at a simple example. Imagine you are testing an app that features a simple numerical input field that can accept whole numbers only. Even if no information is available about explicit boundaries, you know that at least two implicit boundaries (min and max) must exist. You can therefore try testing with the “common” boundary values listed above. If validation does not constrain the data you enter to the range supported by the underlying variable, you may well be able to force a failure.
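As an illustration of what handling such input might look like, the sketch below (the InputField class and its accept method are hypothetical) parses the field’s raw text with Integer.parseInt, which rejects any value outside the 32-bit range with a NumberFormatException – including 2,147,483,648, the first value above the upper implicit boundary.

```java
// Hypothetical input handler: parses a whole-number field into an int.
// Integer.parseInt throws NumberFormatException for values that cannot
// be represented in 32 bits, so out-of-range input is rejected here
// rather than silently corrupted.
public class InputField {
    static String accept(String raw) {
        try {
            int value = Integer.parseInt(raw);
            return "accepted: " + value;
        } catch (NumberFormatException e) {
            return "rejected: " + raw;
        }
    }

    public static void main(String[] args) {
        System.out.println(accept("2147483647")); // max int: accepted
        System.out.println(accept("2147483648")); // max int + 1: rejected
    }
}
```

An app that skips this kind of check, or parses into a smaller type than the one used downstream, is exactly the kind you can break with the boundary values listed above.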
This needn’t just apply to testing via the GUI. Imagine you are testing across multiple applications. If system A sends data to system B, and different data types have been used (e.g. a long integer in system A, and an int in system B), then by entering data on A that exceeds the top boundary for 32-bit integers you may be able to reveal unexpected behavior in the interface or on system B.
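A minimal sketch of that mismatch, assuming a hypothetical receiveOnSystemB method that narrows system A’s long to system B’s int: any value above the 32-bit upper boundary is silently corrupted by the cast.

```java
// System A stores a long; system B stores an int. A naive narrowing
// cast discards the high 32 bits, silently corrupting any value
// above Integer.MAX_VALUE.
public class InterfaceMismatch {
    static int receiveOnSystemB(long fromSystemA) {
        return (int) fromSystemA; // narrowing conversion: high bits discarded
    }

    public static void main(String[] args) {
        long inRange  = 2_147_483_647L; // fits in an int
        long tooLarge = 2_147_483_648L; // Integer.MAX_VALUE + 1

        System.out.println(receiveOnSystemB(inRange));  // 2147483647
        System.out.println(receiveOnSystemB(tooLarge)); // -2147483648
    }
}
```

Because the narrowing happens without any error, a single test value just past the 32-bit boundary is often enough to reveal the defect at the interface.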
If explicit boundaries are unknown during testing, try hitting some of the implicit ones. You might just find a bug.
Uncertainty permeates software testing. What will the software do? What should it do? What could go wrong? Which of the infinite range of tests should we select? How long will it take us? When will we know when we’re done?
As testers, our very mission is driven by uncertainty: it is our role to draw back the curtain and see what lies beyond.
Welcome to Exploring Uncertainty.