Throw it Away

Test automation is software. It is often more complex than the solution it is testing. Investments in automation should be justified by an ROI analysis. Automation should be based on design patterns that ensure maintainability, and its development should be subject to the same controls as any software development project. Targets should be set for automation, and progress reported. Blah, blah, blah, blah, blah.

Such arguments are common, and can even make a degree of sense for certain forms of automation (perhaps: I have grave concerns about script targets, automated or otherwise; see my comments on ROI here). They also represent a phenomenally constrained way of thinking about automation and how it can help testers: the view that automation is only really useful for regression testing.

Don’t get me wrong: I have often found regression automation useful. For example:

  • The SMEs, the shock troops of the testing team, were its exploratory vanguard, ripping apart each new release, iteration after iteration. As the bugs were fixed and the features stabilized, the toolsmith would build a series of automated checks on the more complex rules, or the more business-critical transactions. Our explorers had scant need to return to old battlefields and could focus on scouting out new territory.

Sadly, for each example like the one above, I’ve encountered many that are more along these lines:

  • The ROI looked great. The framework was a work of art and the automation a breeze. The UI and business rules were pretty stable, and we didn’t find many regression bugs. Then (totally unpredicted) the business decided on a significant overhaul of the UI. The tests all broke, taking our hard work with them. The automation had worked fine in the absence of any regression risk, but when regression risk was introduced, the automation was toast. What was the point, exactly?
  • Management was totally bought into automation. After months of painstaking design and development a framework was established. Scripting targets were set and met. Much backslapping ensued. The testers, their numbers having been cut “because we’re doing automation now”, seemed more stressed than ever. Critical regression bugs were being missed, and strangely no one would answer my question “exactly what has been automated and why?”

Now, let’s escape the narrow view that automation is a synonym for regression testing. Thinking back on the times when I have found automation to be the most useful, it strikes me that it had very little to do with regression testing at all. For example:

  • There were too many tests to contemplate. The explorers had been fought to a standstill, and were facing the prospect of many weeks of mind-numbing grind working through a thousand combinations of data. Our risk analysis suggested that a failure on any of the combinations would wind up in the newspapers. Enter the toolsmith. A day’s worth of development, a data driver and a spreadsheet later, the explorers could move on (a minimal sketch of such a data driver follows this list).
  • A one-off data migration, never to be used again. Any inaccuracy could cost a fortune in incorrect pricing, yet the millions of rows could not be reconciled by hand, or even using commonly available diff tools. Two days’ worth of fiddling with a consumer-grade database application, and we had a custom data reconciliation tool with which to go to town on the migration.
  • For no apparent reason, the enterprise web app kept falling over in production. Load? Nope. Any errors in the logs? Nope. Just a gradual degradation in performance followed by collapse. An admittedly inelegant framework cobbled together in Java, a handful of commonly occurring transactions driven by Selenium, random data and a few hours of execution soon revealed resource leaks in the session handler (a rough sketch of this kind of soak driver also follows the list).
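
To give a flavour of how little such a "data driver" needs to be, here is a minimal Java sketch of the first anecdote's idea, not the actual tool (which is long gone): read combinations and expected results from the spreadsheet exported to CSV, feed each row to the system under test, and report mismatches. The CSV layout, the PricingRules class and its priceFor method are all invented for illustration.

```java
import java.io.IOException;
import java.math.BigDecimal;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/**
 * Throwaway data driver: reads "region,product,quantity,expectedPrice" rows
 * from a CSV exported from the test-data spreadsheet and checks each one
 * against the system under test. PricingRules is a stand-in for whatever
 * interface the real system exposes.
 */
public class CombinationCheck {

    public static void main(String[] args) throws IOException {
        List<String> rows = Files.readAllLines(Path.of("combinations.csv"));
        PricingRules sut = new PricingRules();  // hypothetical facade over the system under test
        int failures = 0;

        for (String row : rows.subList(1, rows.size())) {  // skip the header row
            String[] cols = row.split(",");
            String region = cols[0];
            String product = cols[1];
            int quantity = Integer.parseInt(cols[2]);
            BigDecimal expected = new BigDecimal(cols[3]);

            BigDecimal actual = sut.priceFor(region, product, quantity);
            if (expected.compareTo(actual) != 0) {
                failures++;
                System.out.printf("MISMATCH %s/%s/%d: expected %s, got %s%n",
                        region, product, quantity, expected, actual);
            }
        }
        System.out.printf("%d rows checked, %d mismatches%n", rows.size() - 1, failures);
    }
}

/** Stand-in for the real pricing interface; stubbed so the sketch runs on its own. */
class PricingRules {
    BigDecimal priceFor(String region, String product, int quantity) {
        // In reality this would call the system under test.
        return BigDecimal.valueOf(quantity).multiply(new BigDecimal("9.99"));
    }
}
```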
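
And here is a rough sketch of the third anecdote's soak driver, again not the original framework: drive one commonly occurring transaction over and over with randomized data for a few hours, while someone watches the server for session or memory growth. It assumes Selenium WebDriver with a ChromeDriver on the path; the URL and element locators are placeholders, not any real application's.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Random;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * Crude soak driver: repeats one common transaction with random data for a
 * fixed period while the server is monitored for resource leaks. Inelegant
 * by design; it only has to live long enough to expose the problem.
 */
public class SoakDriver {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Random random = new Random();
        Instant stopAt = Instant.now().plus(Duration.ofHours(4));
        long iterations = 0;

        try {
            while (Instant.now().isBefore(stopAt)) {
                driver.get("https://test-env.example.com/orders/new");  // placeholder URL
                driver.findElement(By.id("customerId"))
                      .sendKeys(String.valueOf(100000 + random.nextInt(900000)));
                driver.findElement(By.id("quantity"))
                      .sendKeys(String.valueOf(1 + random.nextInt(50)));
                driver.findElement(By.id("submit")).click();
                iterations++;

                if (iterations % 100 == 0) {
                    System.out.printf("%s: %d transactions driven%n", Instant.now(), iterations);
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```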

What do these have in common? In each case, the tool helped us to TEST. These tools had nothing to do with hitting arbitrary targets. They were not driven by a desire for cheap testing or by notions of efficiency. These tools helped us test things we could not otherwise have tested, find bugs that would otherwise have remained hidden, or achieve levels of coverage that would have been inconceivable without tooling. They were built to solve a particular problem, and with the problem solved, the tools were disposable. With an investment of mere days, hours even, no ROI analysis was required. With no expectation of reuse, maintenance was a non-issue: automation patterns need not apply.

What conclusions can we draw from these examples?

  • Automation exists to solve a testing problem, and if it doesn’t do that, then it doesn’t deserve to exist. Other goals should not be allowed to interfere.
  • Automation is not about replacing human testing effort: it is about doing more, doing it faster, or doing the impossible.
  • Automation need not be big, complex, or expensive. Some of the best automation is none of these, and is pretty ugly to boot. And when a tool’s job is done, it’s okay to throw it away.


Devil in the Detail IV

Previously:

A continued discussion about the arguments for detailed test scripts…

Argument 4: “We need scripts so that we can automate someday”.

Not so very long ago, before the transit strike in Halifax, I used to take the bus to work. I knew that I’d be buying a car at some point, but was looking for the right deal. Perhaps, knowing that I’d soon be purchasing a car, I should have bought myself some wheels, and paid for them out of my transit budget. Maybe even a spare and some winter tires. Of course, I didn’t need wheels to ride the bus, but I could have pretended that this was a cost of using transit so as to make the car look cheaper…

Testing does not require detailed step-by-step instructions. Automation does. Why hide automation’s true cost? Why? Because, often, testers and managers want to automate. It is seen as a universal good, or as a way to develop marketable skills. Full disclosure as to its costs might derail the proposed automation initiative.

And in many cases, rightly so, for the argument above mistakenly assumes that automated tests are equivalent to those conducted by humans. Not so. Even some poor soul who has been enslaved to scripted checks and daily test case execution targets has some chance (remote though it might be) of doing something slightly different, of noticing something a little bit odd. Automation does not.

Yet automation stands to be a powerful extension to what testers can accomplish. Such automation cannot be achieved by simply translating human actions into automated ones: rather, it is achieved through understanding what types of actions a machine is better at than we are. Now, why would you think that a test script would tell you that?

Putting a Price on Testing, Part 3

In the preceding posts (here and here) I took a look at some of the pricing models that are commonly used for testing services. These posts were far from exhaustive: I avoided models that are fairly unique (e.g. bug-based pricing) or that have not yet made the jump to testing from other types of outsourced service (e.g. outcome-based pricing[1]).

Where does this leave us?

Many of these models place efficiency over effectiveness: a pointless exercise. Yes, in testing there is a relationship between efficiency and effectiveness, in that a loss of efficiency represents an opportunity cost in terms of coverage, but coverage is not the only aspect of effectiveness. Testing the things that matter is one. Getting inside the heads of our stakeholders so that we recognize problems when we see them is another. As is sharing the information we unearth in a manner that can be understood. The perverse incentives of pricing designed to encourage efficiency can represent a threat to these things: I can iterate VERY efficiently through entirely the wrong tests and in doing so provide you with a testing service that is anything but effective. And as we have seen, some pricing models encourage cost efficiency, not necessarily testing efficiency. There’s much to be said for NOT shooting at the wrong target.

The purest, simplest, and mercifully most common pricing model is still Time and Materials: you get paid for the effort you put in. Of all the models, this brings the least risk of incentives that distort behaviour, and if the effectiveness of your testing is important to you and your customers, then I’d encourage you to stick with this.

I’d also encourage you to get creative and consider inventing new models: but for goodness’ sake, please engage brain before doing so. Perhaps there is room to adapt “output” based pricing to make it a little less fixed-price-like. Perhaps the future is “session based pricing” or some other mechanism that hasn’t been invented yet. Perhaps this topic will be made redundant by the eventual death of outsourced testing (I suspect this will not happen, though I believe that such services will – and must – change beyond recognition[2]). I don’t know. But what I do know is that we won’t know until we’ve tested it.

Notes

[1] OK, I can’t resist a short rant. Outcome-based pricing is a model where the vendor only gets paid if the project delivers the benefits it is expected to deliver. I have seen no evidence that this has yet been applied to a testing service, but as it is becoming something of a fashion in certain circles, I’m sure it is only a matter of time before some cretin attempts to apply it to testing services. On the surface, it might appear to make sense: to those who think that testers assure the value of software. Testers, however, are not Quality Assurance. This has been covered more than adequately elsewhere: if this is news to you, please exercise your Google muscles.

[2] More, I think, to come.