I’ve been noodling with this post for a while now, never quite finishing it off, and then yesterday Dorothy Graham posted Is it dangerous to measure ROI for test automation?, spurring my reply: Yes Dot, yes it is.
All too often I’ve seen ROI used in an attempt to either sell or justify test automation. The reasoning goes like this: replace a bunch of manual tests with automated ones, and it’ll save money. This is an argument based on a great big stinking ROLie. Automated checks are never a replacement for tests conducted by people. Tests and automated checks are not substitute goods. Machines cannot observe anything beyond what they’ve been programmed to check, their judgment is limited to the algorithms with which they have been provided, they are incapable of insight or intuitive leaps, and they can never play a hunch.
Even when used honestly, as a sanity check of investment decisions, this kind of thinking is perverse. As Dot states: “If we justify automation ONLY in terms of reduced human effort, we run the risk of implying that the tools can replace the people.” In other words, we can perpetuate the myth that testing is a commodity made up of low-skilled and easily replaceable parts.
I do not believe that ROI is entirely irredeemable. Such cost comparisons can make sense, but only if applied below and across tests, at the level of testing tasks. For example, earlier this year I needed to make some tooling decisions relating to the testing of a high-volume and highly complex ETL process. Rather than evaluating which tests could be replaced with automation, I looked at which tasks should be performed by people and which should be performed by a machine. Here’s a thumbnail sketch of the reasoning:
- Data mining, repetitive application of rules: automate.
- Data conditioning, conceptually possible to automate but insanely expensive to do so exhaustively: stick with people.
- Expected result generation, high volume repetitive application of rules: automate.
- Result reconciliation, literally billions of checks per data load: automate.
- Bug isolation and reporting, investigation and judgment required: need people.
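To make the reconciliation task concrete, here’s a minimal sketch of the kind of mechanized checking I mean. The names and shapes here are purely illustrative (the real tooling streamed rows from files and database tables, not in-memory dicts): rows are keyed by a business key and their values digested, so that huge volumes of comparisons stay cheap, and the machine’s job is reduced to flagging discrepancies for a person to investigate.

```python
import hashlib

# Hypothetical sketch: reconcile expected vs. actual rows from an ETL load.
# Rows are keyed by a business key; non-key fields are hashed so that large
# volumes can be compared cheaply.

def row_digest(row):
    """Stable digest of a row's non-key fields."""
    payload = "|".join(str(v) for v in row)
    return hashlib.sha256(payload.encode()).hexdigest()

def reconcile(expected, actual):
    """expected/actual: dicts of key -> row tuple.
    Returns (missing, unexpected, mismatched) lists of keys."""
    missing = [k for k in expected if k not in actual]
    unexpected = [k for k in actual if k not in expected]
    mismatched = [k for k in expected
                  if k in actual
                  and row_digest(expected[k]) != row_digest(actual[k])]
    return missing, unexpected, mismatched

expected = {1: ("Alice", 100), 2: ("Bob", 200), 3: ("Carol", 300)}
actual   = {1: ("Alice", 100), 2: ("Bob", 250), 4: ("Dave", 400)}
missing, unexpected, mismatched = reconcile(expected, actual)
print(missing, unexpected, mismatched)  # [3] [4] [2]
```

Note what the sketch does not do: it flags key 2 as mismatched, but it cannot tell you why Bob’s value drifted from 200 to 250. That isolation and judgment is exactly the part that stays with people.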
Of course, other things factored into the decision-making process (I’ll discuss that a little more at CAST 2012 and in a subsequent post), but having realized that I was leaning heavily towards using large-scale mechanized checking to assist with much of our testing, I wanted to check my thinking carefully. In this case, rather than seeking justification, I needed to answer one simple question: Am I making idiotic use of my customer’s money? ROI served as a test of this aspect of my strategy.
Now, this is a narrow example. The parallel nature of ETL test execution lends itself to breaking out and batching the tasks that make up individual tests. For many types of testing this is impossible, and ROI is useless. We need a different paradigm: a focus on value instead of replacement. This is fairly straightforward. There are many ways in which tooling can add value: it can enable testers to obtain information that would be inconceivable without tools, it can improve the accuracy and precision of the information they can access, and it can enable them to provide information faster or more cost-effectively. The tricky part is putting a price on that value so as to determine whether a particular automation effort is a worthwhile investment. So why not simply ask? Why not discuss the likely value with your customer, and ask what price they would put on it? For example:
- “This part of the application has a high number of permutations and we can only scratch the surface without automation. What price would you put on being able to identify some major problems in there? What price would you put on knowing that despite our best efforts we haven’t found any major problems with it?”
- “The current regression checks take about a week to complete. What price would you put on being able to complete them within a few hours of a new build?”
- “Using flags and stopwatches, we can only scale the performance tests to around 100 testers. What price would you put on a more realistic simulation?”
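The permutation point in the first conversation above is easy to demonstrate. Here’s a toy illustration (the settings and values are invented for the example): even a handful of independent options multiplies into far more combinations than anyone could exercise by hand, which is precisely the gap tooling can close.

```python
# Illustrative only: how quickly independent settings multiply.
from itertools import product

options = {
    "browser":  ["Chrome", "Firefox", "Safari", "Edge"],
    "locale":   ["en", "fr", "de", "ja", "es"],
    "currency": ["USD", "EUR", "GBP"],
    "role":     ["admin", "member", "guest"],
    "plan":     ["free", "pro", "enterprise"],
}

total = 1
for values in options.values():
    total *= len(values)
print(total)  # 4 * 5 * 3 * 3 * 3 = 540 combinations

# A scripted driver could walk every combination overnight:
combos = list(product(*options.values()))
assert len(combos) == total
```

At even a few minutes per manual check, 540 combinations is weeks of tester effort; a scripted run is a question of machine time. That difference is what the customer is being asked to price.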
This might lack the appearance of objectivity that accompanies ROI, but let’s face it: the typical ROI calculation is so speculative and so riddled with estimation error as to be laughable. What this approach provides is a quick and easy way of getting down to the business of value, and of focusing on what tooling can do for our customers rather than only on what it will cost them.