Rethinking Regression, Part 1: Hard Lessons

During one of my first test management gigs, I had an unpleasant surprise.

The testing cycle in question involved retesting a batch of bug fixes and doing regression testing of the affected modules. No other modules were affected by the changes, even indirectly.

Other than a few minor bugs, the tests passed with flying colours, and we happily pushed the build up to UAT.

About half an hour later, an irate project manager arrived at my desk: the acceptance testers had discovered a number of major problems in other parts of the application, problems that sounded hauntingly familiar.

After another hour or so of testing, we came to a frightening conclusion: version control issues had caused this build to wipe out more than a month's worth of fixes in modules that were pretty much done.

The PM’s response: “Why didn’t you test that? You’re meant to be doing regression testing.”

I learned an important lesson that day: always do a full regression.

Unfortunately, that was entirely the wrong lesson.

Regression testing, attitudes to regression testing, and common regression testing practices cause some serious issues for testers and the projects they serve. This series of posts will explore the topic further.


2 thoughts on “Rethinking Regression, Part 1: Hard Lessons”

  1. There are many stages and levels of regression. In general, coverage should be system-wide, but depth depends on risk and the expectation of failures.

    We don’t plan regression enough. We put a lot of effort into planning, and mainly documenting, first-version “atomic” test cases, which are obsolete after one use.
    But instead of well-planned regression, we just try to reuse those cases, which causes huge redundant effort and a lack of end-to-end coverage.

    1. Kobi, thanks.

      From what I see so far, I think I probably agree with your comments. I’ll be expanding on this over the next few posts in this series; I look forward to discussing it further.
