Models of Automation

Why do so many people completely miss obvious opportunities to automate? This question has been bothering me for years. If you watch carefully, you’ll see it all around you: people who, by rights, have deep and expansive experience of test automation, yet are unable to see the pot of gold lying right in front of them. Instead, they charge off hither and thither, searching for nuggets or panning for ROI in the dust.

Last year I witnessed a conversation which, whilst frustrating at the time, suggested an answer to this puzzle. One of my teams had set about creating a set of tools to help us test an ETL process. We would use these tools to mine test data and evaluate it against a coverage model (as a prelude to data conditioning), to predict results, and to reconcile them. Our client asked that we hook up with their internal test framework team to investigate how we might integrate our toolset with their test management tool. The resulting conversation resembled first contact, without the benefit of an interpreter:

Framework team: So how do you run the tests?

Test team: Well, first we mine the data, and then we condition it, run the parallel implementation to predict results, execute the real ETL, do a reconciliation and evaluate the resulting mismatches for possible bugs.

Framework team: No, I mean, how do you run the automated tests?

Test team: Well, there aren’t any automated tests per se, but we have tools that we’ve automated.

Framework team: What tests do those tools run?

Test team: They don’t really. We run the tests, the tools provide information…

Framework team: So…how do you choose which tests to run, and how do you start them?

Test team: We never run one test at a time; we run thousands of tests all at the same time, and we trigger the various stages with different tools.

Framework team: Urghh?
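
To make the test team’s model concrete: below is a minimal sketch, in Python, of one such tool, the reconciliation stage. Everything in it is hypothetical, from the function name to the row format and key field; a real reconciliation tool would read from files or database tables. The shape of the idea is the point: the tool produces information for a human to evaluate, not a pass/fail verdict.

```python
# Hypothetical sketch: reconcile rows predicted by a parallel (oracle)
# implementation against rows produced by the real ETL run. The row
# format and the "id" key are invented for illustration.

def reconcile(predicted, actual, key="id"):
    """Return the mismatches between predicted and actual rows."""
    predicted_by_key = {row[key]: row for row in predicted}
    actual_by_key = {row[key]: row for row in actual}
    mismatches = []
    for k in sorted(predicted_by_key.keys() | actual_by_key.keys()):
        p, a = predicted_by_key.get(k), actual_by_key.get(k)
        if p != a:  # a row missing on either side also counts as a mismatch
            mismatches.append({"key": k, "predicted": p, "actual": a})
    return mismatches

if __name__ == "__main__":
    predicted = [{"id": 1, "total": 100}, {"id": 2, "total": 250}]
    actual = [{"id": 1, "total": 100}, {"id": 2, "total": 245}]
    for m in reconcile(predicted, actual):
        print(m)  # a tester evaluates each of these for possible bugs
```

Notice that nothing in that sketch is a “test”: there is no verdict and no scripted steps. It is a task, automated.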

Human beings are inveterate modelers: we represent the world through mental constructs that organize our beliefs, knowledge and experiences. It is from the perspective of these models that we draw our understanding of the world. And when people with disjoint mental models interact, the result is often confusion.

So, how did the framework team see automation?

  • We automate whole tests.

And the test team?

  • We automate tasks. Those tasks can be parts of tests.

What we had here wasn’t just a failure to communicate, but a mismatch in the way these teams perceived the world. Quite simply, these two teams had no conceptual framework in common; they had little basis for a conversation.

Now, such models are invaluable; we could not function without them. They provide cues as to where to look, what to take note of, what to consider important. The cognitive load of processing every detail provided by our senses would paralyze us. The price is that we do not perceive the world directly; rather, we do so through the lens of our models. And it is easy to miss things that do not fit those models. This is exactly like test design; the mechanism is the same. When we rely on a single test design technique, which is after all only a model used to enumerate specific types of tests, we will tend to find bugs of only a single class and be totally blind to other types of bug. When we use a single model to identify opportunities to automate, we will find only opportunities of a single class, and be totally blind to opportunities that do not fit that particular mold.

Let’s look at the model being used by the framework team again. This is the more traditional view of automation. It’s in the name: test automation. The automation of tests. If you’re not automating tests, it’s not test automation, right? For it to be test automation, the entire test must be automated. This is a fundamental assumption that underpins the way many people think about automation. You can see it at work behind many common practices:

  • The measurement, and targeting, of the proportion of tests that have been automated: the replacement (and I use the term loosely) of tests performed by humans with tests performed by machines. The unit of measure is the test, the whole test, and nothing but the test.
  • The selection of tests for automation using Return on Investment analysis. Which tests, when automated, offer the greatest return on the cost of automation? Which tests should we consider? The emphasis is on evaluating tests for automation, not on evaluating what could usefully be automated (see the toy example after this list).
  • Seeking to automate all tests. Not only must whole tests be automated, every last one should be.
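
To see that bias at work, consider a toy version of the ROI calculation (a sketch with invented names and figures, in hours). The model can rank only whole tests; a candidate such as “automate just the data conditioning” cannot even be expressed in it:

```python
# Toy whole-test ROI analysis. All figures are invented and in hours.
# Note what the model is able to express: only whole tests are candidates.
tests = {
    "login happy path": {"automation_cost": 8, "manual_cost_per_run": 0.5, "runs": 40},
    "report generation": {"automation_cost": 30, "manual_cost_per_run": 2.0, "runs": 10},
}

for name, t in tests.items():
    saving = t["manual_cost_per_run"] * t["runs"] - t["automation_cost"]
    print(f"{name}: net saving of {saving} hours")

# There is no way to ask "what if we automated only the data conditioning
# step?"; partial automation has no place in this model.
```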

This mental model, this belief that we automate at the level of the test, may have its uses. It may guide us to cases where we might automate simple checks of results against expectations for given inputs. But it is blind to cases that are more sophisticated: opportunities where parts of a test might be automated while other parts are not. And many tests are exactly like that! Applying a different model of what constitutes a test, in the context of testing on a specific project, can provide insight into where automation may prove useful. So, when looking for opportunities for automation, I’ve turned to this model for aid [1]:

  • Analyze. Can we use tools to analyze the testing problem? Can they help us extract meaningful themes from specifications, enumerate interesting or useful tests from a model, or understand coverage?
  • Install and Configure. Can we use tools to install or configure the software? To rapidly switch between states? To generate, condition and manage data?
  • Drive. Can we use tools to stimulate the software? To simulate input, or interactions with other components or systems?
  • Predict. Can we use tools to predict what the software might do under certain conditions?
  • Observe. Can we use tools to observe the behavior of the software? To capture data or messages? To monitor timing?
  • Reconcile. Can we use tools to reconcile our observations and predictions? To help draw attention to mismatches between reality and expectations?
  • Evaluate. Can we use tools to help us make sense of our observations? To aggregate, analyze or visualize results such that we might notice patterns? (A sketch illustrating these last two items follows this list.)
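
As an illustration of those last two items, here is a hedged sketch of an “Evaluate” helper, reusing the invented mismatch format from the reconciliation sketch earlier. The names and data shapes are my assumptions, not a prescription; the point is that the tool aggregates observations so that a human might notice patterns:

```python
# Hypothetical "Evaluate" helper: count which fields differ across a set
# of reconciliation mismatches, so a tester can spot patterns (say, every
# mismatch touching the same column).
from collections import Counter

def mismatch_patterns(mismatches):
    """Return (field, count) pairs, most frequently mismatched first."""
    field_counts = Counter()
    for m in mismatches:
        predicted = m.get("predicted") or {}
        actual = m.get("actual") or {}
        for field in set(predicted) | set(actual):
            if predicted.get(field) != actual.get(field):
                field_counts[field] += 1
    return field_counts.most_common()

if __name__ == "__main__":
    mismatches = [
        {"predicted": {"id": 2, "total": 250}, "actual": {"id": 2, "total": 245}},
        {"predicted": {"id": 7, "total": 90}, "actual": {"id": 7, "total": 85}},
    ]
    print(mismatch_patterns(mismatches))  # [('total', 2)]
```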

Of course, this model is flawed too. Just as the whole test / replacement paradigm misses certain types of automation, this model most likely has its own blind spots. Thankfully, human beings are inveterate modelers, and there is nothing to stop you from creating your own models and hopping frequently from one to another. Please do: we would all benefit from trying out a richer set of models.

[1] Derived from a model by James Bach that describes the roles of test techniques (documented in BBST Foundations).

24 thoughts on “Models of Automation”

  1. Nice post. Incidentally, around 11 years back, when I was working on my first testing project, I automated a specific task: a small shell script which would get key information about the system under test and put it in a text file. This file could be printed and act as an oracle for the information presented in the browser.

    We did not have the tools / expertise / knowledge to automate the front end, but automation around a small task (in the complete test) was an extremely important step, as it opened the door to tremendous possibilities.

    1. Ooh, I like that. I must admit, I constrained my own thinking in terms of the mnemonic by attempting to preserve a rough sense of sequence through the activities (though they need not necessarily be sequential).

      -Iain

  2. Good points. I would add that the initial model should be taken from the goal, not from the task. If your goal is to automate tests for test automation’s sake, you will miss many opportunities. If your goal is to find as many defects as you can as quickly as possible, you will find additional opportunities. If your goal is to help your team produce quality software in the first place, the approach changes significantly.

    1. Bill,

      Good point. Yes, this post focuses on the “what”. Don’t even get me started on the “why”…the goal often seems to be compliance with some notional concept of what constitutes “good” automation or a “good” way of automating, with little sense of the /value/ to the testing or the project. But I wonder: how can automation ever be effective if NOT subservient to the testing mission?

      -Iain

  3. Nice post Iain!
    Also important to consider when going through the mnemonic are the cases where automation should be avoided for a particular item. For example, when I was testing DSL performance for Alcatel-Lucent, we had some semi-automated tests (our term for automating some test steps and not the whole test) that gathered a lot of data – primarily doing the “Drive” and “Observe” portions of the test. The “Evaluation” portion would have been too difficult to automate – so NOT automating the evaluation was an intentional decision. We left the evaluation to the testers due to the complexity of determining whether or not there was an issue.

    Great read. I’m going to send it to a test manager from a different product line who is of the “automate everything” mindset.

    1. Cheers Paul…you’re reading my mind. I’m slowly working up a set of heuristics (in the form of indications and contraindications), but want to re-read The Shape of Actions first.

      -Iain

  4. The goal of finding as many bugs as possible, as fast as possible, isn’t the same as, and isn’t necessarily compatible with, the goal of producing fewer bugs in the first place. Testing has a place in both, but it is a different place. In the case of finding bugs, it makes sense to have an independent testing team that approaches its task as if it is separate from, but consistent with, the mission of writing the code. In the case of making better software, testers do not have a separate task that is independent of the development team. Developers and testers are both trying to create bug-free software, and each must work with the other to find the best way to achieve that goal.

  5. Excellent post. I’m printing this to add to my black binder of test ideas. The other problem I run into is that the test automation ‘team’ is often so understaffed and outmanned that their efforts get pushed toward automating ‘that’, while requirements roll on and they miss present, important cases that could be automated and tested right now, before they impact the customer.

    1. Thanks Timothy. I wonder if there might be any relationship between staffing levels, priorities and the mental models of the management making those decisions?

      -Iain

  6. We made a lot of small steps too, to arrive at an automated environment in the rail business. Having systems such as interlockings, we had to simulate the field elements, trigger the field elements when trains drove over them, create train dynamics that trigger those elements, and finally make those things replayable. On top, we have a list of scenarios that can be run. We were able to do all those steps because we had an idea of an architecture and of how to put those small things together.

    1. Chris, that sounds like a fascinating project. I often find that architectural awareness is critical in the work that I do.
      -Iain

  7. Excellent discussion Iain. There seem to be several “layers” of blinders when it comes to automation. One of the most significant, as you point out, is related to the differentiation between automating a test case and using automation as a test tool. I would also say that one of the biggest misconceptions about automation is that it is necessarily expensive. In this age of readily available programmable objects, a small amount of automation can be leveraged for a disproportionately large amount of information. In that case the discussion does not have to center on recouping a significant initial expenditure through long-term savings at scale (which is often how automation is modeled).

  8. Very nice blog, enjoyed reading it. The automation team’s role should be to automate anything that the project team (not only the testers) finds cumbersome to do manually, and that could help do things faster. Automation can find a home in any part of the process, from development to deployment; database migration and report generation are a few examples.

  9. Good points in this post. I believe test automation takes the form of leveraging tools, whether by automating whole tests or portions of tests, to increase the efficiency of testing per release. After all, shorter testing cycles are what management desires in most organizations. One thing I would add to your model is “Estimated Maintenance Cost” or “Tool Complexity” for the tools/scripts that are built. These are another set of questions which should be asked when determining whether something is worth automating, as the answer is directly impacted by release schedules and resource constraints.
