Learning for the Long Haul, Part 2: Touring & SBTM

“Tell me and I’ll forget; show me and I may remember; involve me and I’ll understand.” Chinese Proverb.

In the previous post I introduced the problem of building and maintaining knowledge (primarily application knowledge) within a testing team. I then described some simple artefacts that could serve as cost-effective (and opportunity-cost effective) alternatives to “idiot-scripts”. Such artefacts may prove useful as a means to introduce new testers to a given piece of software, but are a starting point at best. The bottom line: much of what we learn, we learn by doing. To effectively build knowledge we need more than passive methods (reading a document, browsing a model, watching a video), we need ways in which new testers can become actively engaged in their own learning.

Over the last few years, I have employed a handful of different approaches to doing so. This has included setting aside dedicated “play time” during which new testers freely roam the software, and secondments where testers move into a new team and gradually build up their knowledge through testing on a live project. Whilst the results have been adequate, there is room for improvement:

  • In many cases testers were able to pick up the kind of information that was expected of them, but in some cases this learning simply took too long.
  • The use of “play time” had mixed results: some testers were able to pick up significant amounts of information about the software whereas others floundered in the absence of specific learning objectives.
  • Assessment of progress was difficult, highly subjective, and not particularly granular (“Can Fred test that now?”).

As a result, I’ve started looking for ways in which this kind of learning can be enhanced such that testers:

  • Are active participants in their own learning.
  • Have specific learning objectives.
  • Are provided with a structure that supports goal setting, reflection and feedback.

In order to address these points, I am looking at a blend of tours and session based test management (SBTM).

First, let’s discuss tours. Tours are a set of simple techniques for getting familiar with a software product: they are often used within exploratory testing to emphasise the “learning” aspect of the ET triumvirate (learning, design and execution). Michael Kelly provides a good overview of a number of different types of tour in Taking a Tour Through Test Country.

Using this approach, a tester explores an item of software in order to learn about it. Some example missions are listed below:

  • Identify and list the software’s features.
  • Identify and list the software’s variables, including any thoughts as to equivalence classes and boundaries.
  • Identify and list the software’s data objects.
  • For a given data object, create a state diagram that represents its life-cycle.
  • Create a map describing how the user navigates the software.
  • Identify and list any potential or claimed benefits of the software.
  • Identify and list any decision rules implemented by the software; for complex sets of rules, represent these as decision tables or cause-effect graphs (see the sketch below).
  • Identify and list different ways in which the software can be configured, and the consequences of each configuration.
  • Identify ways in which the software interacts with the systems it interfaces with: draw a sequence diagram.

This is far from exhaustive, and Kelly’s article includes a number of other ideas.
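
As an illustration of the decision-rule mission, the sketch below enumerates a made-up pair of conditions and the action expected for each combination, which is essentially a decision table in code. The rule (a discount calculation) and every name in it are invented purely for this example; they don’t come from any particular product.

    # A minimal sketch: enumerating a made-up rule as a decision table.
    # The conditions and actions are purely illustrative.
    from itertools import product

    CONDITIONS = ("customer is a member", "order total >= 100")

    def expected_action(is_member: bool, big_order: bool) -> str:
        # Hypothetical rule: members get 10% off, large orders get 5% off,
        # and large member orders get 15% off.
        if is_member and big_order:
            return "15% discount"
        if is_member:
            return "10% discount"
        if big_order:
            return "5% discount"
        return "no discount"

    # One row per combination of condition values, decision-table style.
    print(" | ".join(f"{c:<22}" for c in CONDITIONS) + " | action")
    for is_member, big_order in product((True, False), repeat=2):
        row = " | ".join(f"{str(v):<22}" for v in (is_member, big_order))
        print(row + " | " + expected_action(is_member, big_order))

The point is not the code itself but the deliverable: an explicit, reviewable statement of the rules the tester believes the software implements.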

In these examples, I’ve coupled learning objectives with specific deliverables such as the creation of models or inventories. I used a similar approach on a project where I was using ET with the goal of reverse engineering a product rather than testing it: I found that creating a model whilst I explored provided additional focus, helped me to keep track of my progress, and helped me to identify additional avenues worth investigating (at the time, I likened this to keeping a map whilst playing a 1980s text-based computer game). An added benefit in the context of learning is that such models can serve as an assessable deliverable (more on this below).

Now to SBTM: Jon Bach describes Session Based Test Management as a means to organize exploratory testing without obstructing its flexibility. This seems a reasonable fit: not only is it a process with which many testers are familiar, but it also enables goal setting, reflection and feedback within a structure that is flexible enough to adapt quickly to a tester’s individual learning needs.

As I am using this to structure learning rather than testing, I’ve made a few tweaks. Here’s an overview:

The general idea is that the tester’s learning is broken into manageable sessions, each with a specific learning mission. This mission is agreed by the tester and his coach (another team member with more experience of this particular software).

With the mission established, the tester is free to tour the software or any associated material with the goal of fulfilling that mission. Whilst doing so, he constructs any agreed deliverables.

On completion of the session, the tester creates a short report that outlines what was achieved. This gives the tester an opportunity to reflect on what he has learned, how any new learning relates to his previous knowledge about the software, and what else could be investigated.

Finally, the coach and tester perform a debrief in which the session report and any deliverables are reviewed. This gives the tester an opportunity to further refine his thoughts so as to be able to articulate his learning to his coach, whilst allowing the coach to assess what the tester has learned and provide any feedback that she feels is appropriate. This debrief is also an opportunity for the coach and tester to agree potential follow-up sessions, allowing them to tailor the route through any application-specific curriculum to the needs of the individual tester.
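
As a rough sketch of the moving parts described above (and only a sketch: the field names are my own invention rather than any prescribed SBTM format), a learning session could be captured in a lightweight structure like this:

    # A minimal, illustrative record of a learning session; field names are
    # assumptions of mine, not part of any standard SBTM session sheet.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LearningSession:
        tester: str
        coach: str
        mission: str                            # the agreed learning objective
        deliverables: List[str] = field(default_factory=list)  # models, inventories, etc.
        notes: str = ""                         # what was learned, and how it relates to prior knowledge
        follow_ups: List[str] = field(default_factory=list)    # candidate missions for later sessions

        def debrief_summary(self) -> str:
            # A short report to support the coach/tester debrief.
            return "\n".join([
                f"Mission:      {self.mission}",
                f"Tester/coach: {self.tester} / {self.coach}",
                "Deliverables: " + (", ".join(self.deliverables) or "none"),
                f"Notes:        {self.notes}",
                "Follow-ups:   " + ("; ".join(self.follow_ups) or "to be agreed at the debrief"),
            ])

    # Example usage (names and content are hypothetical):
    print(LearningSession(
        tester="Fred",
        coach="Jane",
        mission="Identify and list the software's data objects",
        deliverables=["Data object inventory", "State diagram for one data object"],
        notes="Two of the data objects share a common life-cycle.",
        follow_ups=["Tour the configuration options that affect that life-cycle"],
    ).debrief_summary())

Anything richer than this is probably overkill; the value lies in having the mission, deliverables and follow-ups written down where the coach and tester can review them together.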

This is a work in progress, and I’ll write more on this once I’ve tested it out further. In the meantime, if you have any thoughts or ideas on the subject – or have attempted anything similar – I’d love to hear from you.

My thanks to Becky Fiedler, who helped me to coalesce some of these thoughts whilst adding to my reading list :-)

7 thoughts on “Learning for the Long Haul, Part 2: Touring & SBTM”

  1. I work at an independent software testing company. This means that every few months we have to learn to test something that we haven’t seen before. One of the things we do to make learning more rapid is what I call Assigned Pair Touring (just made that term up). It’s pretty much the normal SBTM touring idea, with an extra level of reporting in between:

    The concept is that two testers pair up, individually tour some agreed-upon part of the application (as a session) and then report what they found to each other, usually showcasing the application. This has to be more thorough than a normal debrief, because the other tester has usually not toured that part of the application. Usually lots of new test ideas are generated during such a presentation. The act of having to describe part of the app to someone who has not seen it before (in effect, teaching) also helps the tester to consolidate what he has learned himself.

    After both testers have shared their new knowledge, all the questions have been asked and all the new test ideas are noted down, they get debriefed together. As an added twist, we sometimes have the testers describe not their own session/area covered, but the session/area of their pair.

    It’s an interesting way to tour a new application; I recommend you try it :)

    • Rasmus,

      Thanks for an interesting comment. I like the sound of your approach: there are some possible efficiencies gained in splitting the touring up, and as you say, the act of teaching another tester about what you have learned helps to deepen your own knowledge (I often find that I learn more when I teach than when I am in class listening to the teacher). I might worry that the learning experience is passive for those areas that are being toured and taught by another tester, though I note that your “added twist” might help to mitigate that. What kind of approach do you take to assessment and feedback? Are you using this purely as a learning mechanism, or mixing it with testing?

      –Iain

      • Keeping in mind that what I’m about to describe works in our context for our team and only some of our clients and then only for new features/products:

        Yes, the general idea of the touring is to learn about the application and inform further testing. We try not to focus on specific bugs while touring and instead focus on getting a good overview from the tour – so usually it takes the form of a feature/testability tour. Ideally, this touring should provide charters/missions for the next 3-4 testing sessions. So to answer your last question: I guess it’s mixed with testing.

        As to the areas being toured and taught by another tester: ideally, a tester will try to keep his further testing in the general area of his tour and the other tester will try to keep to his; since they have a common understanding of their respective areas they can identify any overlapping parts and coordinate testing of those among themselves – or even pair on those areas. These details are usually discussed during the debrief.

        Assessment and feedback are still handled like in usual session based testing (if there is such a thing as usual session based testing at all). The tours are considered sessions, complete with a session report and debrief to a test lead. Reporting to/working with your pair tester is considered part of the initial session, but I guess I’m getting too detailed about specific test-time management here.

  2. As the main goal defined above is getting acquainted with the application’s main features, I would use a set of common tasks covering these areas which the trainee needs to complete.
    This way you have more control/tracking of the process, as opposed to an “open” exploratory session.
    Debriefs are very important: we normally ask the trainee to explain in his own words what he has learned, and from that we can spot weak points where he might need more explanation or practice, but the main thing is that in order to explain to someone else, he must phrase it in his mind.
    Which brings me to the next level of training – having to tutor someone else (assuming you have continuous growth), which puts the trainee in a position where he has to explain an issue to a newer trainee – if he finds the topic hard, they both approach a veteran.

    Kobi

    • Thanks Kobi. Turning the student into a teacher is an interesting addition. I find I learn the most about a subject when I must articulate it to others, so this is potentially quite powerful.

      –Iain

  3. As a new Tester (after a break of 5 years from being in IT) I really wish we’d had something like this arranged in my new company. I can see what a benefit it would have been to have got me up and running more quickly. I really like this idea and will definitely be adding it to my toolbox. Thanks.

    • Ellie,

      Thanks. That’s really not so unusual: my own first experience of testing was “go test that” with no guidance as to what to do, where to start or how to find out…and the net was a lot less developed back then! It was guesswork and trial and error all the way.

      –Iain
