Wouldn’t it be great if projects would take a “sensible” approach to mitigating regression risks? If projects applied plenty of prevention, used automated unit-level checks for confirmatory testing, and left the testers to do what they do best: find bugs.
This is not the reality on many projects. Nor is it even appropriate on every project. Not every change is significant enough to require reviews. Not every change will necessitate refactoring code. Static analysis tools can be noisy and take time to tune: not all projects will run for long enough to justify this investment. Not every project will be delivering code with a shelf-life that warrants automated unit-level checks. Some projects may be having significant difficulties with their configuration management systems that require time to resolve.
“Sensible” therefore takes in a whole range of factors that a tester may not consider or even be aware of. Ultimately, it will not be the tester who determines what mitigation strategies are appropriate for the project: that is the province of the project manager.
What does this mean for the tester? In a word: mission.
It is often helpful to agree a clear testing mission with the relevant stakeholders. Doing so helps to avoid the unpleasant surprises (“You’re doing A? I thought you were doing B!”) that can result from misaligned expectations, and helps to keep the testing effort pulling in the same direction as the project.
The regression testing mission will be driven by a range of contextual factors that might include the scope, scale and nature of the changes being implemented, the stage within the project life-cycle, project constraints and the other mitigation strategies that the project is employing. For example:
- Project A is implementing a wide range of mitigation strategies, including configuration management and unit-level change detection. The project manager and testers agree that the testing mission should be biased towards finding bugs, with only light confirmation performed at the system level (as change detection is largely provided at the unit level).
- Project B has effective configuration management, but no automated unit-level regression checks. The project manager and testers agree that the testing mission should strike a balance between conducting confirmation around those areas that are changing, and testing for bugs.
- Project C has little regression mitigation: configuration management has proved highly unreliable and there are no automated unit-level regression checks. Based on the nature of the changes and the stage in the project, the project manager and testers agree that the testing mission should focus on broad confirmation of the software, with some time allocated to testing for bugs.
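To make the contrast concrete: the unit-level change detection that Projects A and B rely on can be as simple as a suite of automated checks that pins down current behaviour, so any change to it is flagged immediately. Here is a minimal sketch using Python's standard `unittest` module; the `parse_price` function and its expected values are purely hypothetical, standing in for whatever small units a real project would cover.

```python
import unittest

def parse_price(text):
    """Hypothetical production unit: parse a price string like '$1,234.56'."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceRegressionChecks(unittest.TestCase):
    """Confirmatory checks: they detect change to known behaviour.

    Note these checks will not find bugs the authors never thought of;
    they only raise an alarm when previously verified behaviour shifts.
    """

    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.56"), 1234.56)
```

Run with `python -m unittest` as part of the build, and a failing check signals that a change has rippled into this unit, which is exactly the cheap, fast change detection that frees system-level testers to hunt for bugs instead.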
Explicitly discussing the regression testing mission can provide the tester with an opportunity to ensure that the relevant project stakeholders are aware of the limitations of black box regression testing. However, if a project manager understands that black box regression testing is not the most cost-effective means of providing change detection and is seriously limited in its ability to find bugs – but decides to rely on it to mitigate regression risks – then that is his or her decision to make. In such a position, all that a tester can reasonably do is recognize that they are selling tobacco and provide a health warning so as to set expectations.
In summary, the regression problem is not a single problem; it is a range of different risks that are most effectively mitigated with a variety of different strategies. By educating their stakeholders about the limitations and tradeoffs involved with black box regression testing, testers can help them to make better risk mitigation decisions. Ultimately contextual factors will drive decisions as to which strategies are appropriate on any given project, and the regression testing mission needs to be defined accordingly.
Other posts in this series: