“Tell me and I’ll forget; show me and I may remember; involve me and I’ll understand.” – Chinese proverb.
In the previous post I introduced the problem of building and maintaining knowledge (primarily application knowledge) within a testing team. I then described some simple artefacts that could serve as cost-effective (and opportunity-cost effective) alternatives to “idiot-scripts”. Such artefacts may prove useful as a means to introduce new testers to a given piece of software, but are a starting point at best. The bottom line: much of what we learn, we learn by doing. To effectively build knowledge we need more than passive methods (reading a document, browsing a model, watching a video), we need ways in which new testers can become actively engaged in their own learning.
Over the last few years, I have employed a handful of different approaches to doing so. These have included setting aside dedicated “play time”, during which new testers freely roam the software, and secondments, where testers move into a new team and gradually build up their knowledge through testing on a live project. Whilst the results have been adequate, there is room for improvement:
- In many cases testers were able to pick up the kind of information that was expected of them, but in some cases this learning simply took too long.
- The use of “play time” had mixed results: some testers were able to pick up significant amounts of information about the software whereas others floundered in the absence of specific learning objectives.
- Assessment of progress was difficult, highly subjective, and not particularly granular (“Can Fred test that now?”).
As a result, I’ve started looking for ways in which this kind of learning can be enhanced such that testers:
- Are active participants in their own learning.
- Have specific learning objectives.
- Are provided with a structure that supports goal setting, reflection and feedback.
In order to address these points, I am looking at a blend of tours and session based test management (SBTM).
First, let’s discuss tours. Tours are a set of simple techniques for getting familiar with a software product: they are often used within exploratory testing to emphasise the “learning” aspect of the ET triumvirate (learning, design and execution). Michael Kelly provides a good overview of a number of different types of tour in Taking a Tour Through Test Country.
Using this approach, a tester explores an item of software in order to learn about it. Some example missions are listed below:
- Identify and list the software’s features.
- Identify and list the software’s variables, including any thoughts as to equivalence classes and boundaries.
- Identify and list the software’s data objects.
- For a given data object, create a state diagram that represents its life-cycle.
- Create a map describing how the user navigates the software.
- Identify and list any potential or claimed benefits of the software.
- Identify and list any decision rules implemented by the software; for complex sets of rules, represent these as decision tables or cause-effect graphs.
- Identify and list different ways in which the software can be configured, and the consequences of each configuration.
- Identify ways in which the software interacts with the systems with which it interfaces: draw a sequence diagram.
This is far from exhaustive, and Kelly’s article includes a number of other ideas.
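To give a flavour of what a deliverable from one of these missions might look like, here is a sketch of a decision table a tester might produce while touring decision rules. The feature and the rules themselves are entirely invented for illustration; the point is the shape of the artefact, not its content:

```
Conditions              | R1 | R2 | R3 | R4
------------------------+----+----+----+----
Customer is a member    | Y  | Y  | N  | N
Order exceeds threshold | Y  | N  | Y  | N
------------------------+----+----+----+----
Actions                 |    |    |    |
Apply discount          | X  |    |    |
Offer membership        |    |    | X  |
Standard pricing        |    | X  |    | X
```

Even a rough table like this gives the coach something concrete to review at debrief: missing rule combinations tend to leap out of the gaps.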
In these examples, I’ve coupled learning objectives with specific deliverables, such as the creation of models or inventories. I used a similar approach on a project where I was using ET with the goal of reverse engineering a product rather than testing it: I found that creating a model whilst I explored provided additional focus, helped me to keep track of my progress, and helped to identify additional avenues that might be worth investigating (at the time, I likened this to keeping a map whilst playing a 1980s text-based computer game). An added benefit in the context of learning is that such models can serve as an assessable deliverable (more on this below).
Now to SBTM: Jon Bach describes Session Based Test Management as a means to organize exploratory testing without obstructing its flexibility. This seems a reasonable fit: not only is it a process with which many testers are familiar, but it also enables goal setting, reflection and feedback within a structure that is flexible enough to adapt quickly to a tester’s individual learning needs.
The general idea is that the tester’s learning is broken into manageable sessions, each with a specific learning mission. This mission is agreed by the tester and his coach (another team member with more experience of this particular software).
With the mission established, the tester is free to tour the software or any associated material with the goal of fulfilling that mission. Whilst doing so, he constructs any agreed deliverables.
On completion of the session, the tester creates a short report that outlines what was achieved. This gives the tester an opportunity to reflect on what he has learned, how any new learning relates to his previous knowledge about the software, and what else could be investigated.
Finally, the coach and tester perform a debrief in which the session report and any deliverables are reviewed. This gives the tester an opportunity to further refine his thoughts so as to be able to articulate his learning to his coach, whilst allowing the coach to assess what the tester has learned and provide any feedback that she feels is appropriate. This debrief is also an opportunity for the coach and tester to agree potential follow-up sessions, allowing them to tailor the route through any application-specific curriculum to the needs of the individual tester.
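To make the session structure concrete, a learning session might be recorded on a simple sheet, loosely modelled on the session reports used in SBTM. Everything below – the mission, timings and notes – is hypothetical, and the exact fields would be agreed between coach and tester:

```
MISSION
  Tour the order-entry screens to identify and list
  the software's data objects.

START:     09:30
DURATION:  90 minutes
TESTER:    (new team member)

NOTES
  Objects identified so far: customer, order, order
  line, invoice. Unsure whether "quote" is a distinct
  object or a state of "order" - raise at debrief.

DELIVERABLES
  Draft inventory of data objects (attached).

QUESTIONS / FOLLOW-UPS
  How are archived orders handled? Possible charter
  for a future session.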
This is a work in progress, and I’ll write more on this once I’ve tested it out further. In the meantime, if you have any thoughts or ideas on the subject – or have attempted anything similar – I’d love to hear from you.
My thanks to Becky Fiedler, who helped me to coalesce some of these thoughts whilst adding to my reading list.