Counting Experience

Once upon a time, I believed that only testers should test, and that testing experience counted for everything. This was an easy trap to fall into: after all, I had been around the block. I’d learned a lot about testing approaches, strategies, methodologies and techniques. Looking back at my earlier efforts, and my many mistakes, it was easy to think “if only I knew then what I know now!” and ascribe any improvement to experience.

Dumb. Arrogant. Mistake.

What changed my mind?

Let me briefly describe the project: we were customizing an enterprise solution owned by Client A for use by Client B. Given the complexity of the product, the absence of any models or documentation, and the significance of the changes, this would be no simple matter. Timescales were tight, and we were iterating rapidly. I chose a testing approach that relied heavily on exploratory testing (ET), with automation support for areas that carried high regression risk or were well suited to data-driven testing.

The exploratory testers were to be the vanguard: first to test new code, first to test fixes, first to investigate and qualify bugs reported by others. This was to be the most critical testing role on the project. For this I was given two subject matter experts seconded from Client A: testers whose sole testing experience was a little user acceptance testing (UAT).

Now, there’s a misconception I often hear about ET: “you need a lot of testing experience to do that”. Perhaps I could have listened to this conventional wisdom. Perhaps I could have succumbed to my prejudices. After meeting the testers in question, I played a hunch and chose not to.

We got started with some basics: discussing the impossibility of complete testing, the oracle problem, bug reporting expectations and the overall approach to testing for the project. Then we got testing. To demonstrate the mechanics of session-based test management (SBTM), I led the first couple of test sessions, after which I dropped back to chartering and reviewing session reports. Within a few weeks I pretty much left them to charter themselves, with a daily huddle to discuss and prioritize things to test.
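
For those unfamiliar with SBTM: a charter is a short mission statement for a time-boxed session of exploratory testing, and each session ends with a brief report. Formats vary from team to team; the skeleton below is purely illustrative (the charter is a made-up example), not the exact format we used:

    CHARTER:  Explore the discount calculation with boundary and invalid
              inputs to discover pricing errors.
    DURATION: 90 minutes
    TEST NOTES: what was tested, how, and what was observed
    BUGS: references to any bug reports raised
    ISSUES: open questions, obstacles, anything blocking further testing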

In the early days I monitored progress intensively:

  • When I heard them debate whether a particular behaviour was a bug, I’d interrupt with a brief breakout session to discuss the relationship between quality and value, and the variety of oracles they might consider.
  • When I heard them struggling with how to test a particular feature, I’d introduce them to a few test design techniques and heuristics that might be relevant.
  • I’d review their bug reports and provide feedback on follow-up testing they should consider.

Pretty soon they were a wonder to behold. They quickly assimilated every idea and concept thrown at them. The bugs they identified demonstrated significant insight into the product, its purpose, and the needs of its eventual users. It was fascinating to listen to animated discussions along these lines:

Tester 1: Is this a bug?

Tester 2: Hmm, I don’t think so. It’s consistent with both the requirements and the previous version.

Tester 1: But what if [blah, blah, blah]…doesn’t that really kill the value of this other feature?

Tester 2: Got you. That could be a problem with the requirements; we need to talk to the BA.

This pair had rapidly become the most impressive testers I’d ever had the pleasure of working with. How had this happened? They brought with them a blend of aptitudes and experiences that far outweighed their relative lack of testing experience:

  • They had open minds, a willingness to learn, and no preconceived notions as to “the one true way” to test.
  • They exhibited an insatiable curiosity: a desire to understand what the software was doing and why.
  • Having lived with the previous version of the product, they were dedicated to delivering quality to other users like them.
  • Their experience as users meant that they had well-refined internal oracles that gave them insight into what would diminish user value.

Their experience counted for a lot, but not their testing experience: any testing know-how they needed, they were able to pick up along the way.

I’m not claiming that testing experience counts for nothing: I’ve also worked in situations where testers needed significant experience in testing, and in specific types of testing, lest they be eaten alive by a sophisticated client. Testing experience is only one piece in a complicated puzzle that also includes domain experience, technology experience, attitude and aptitude. Different situations will demand a different blend.

None of these factors are simple. Consider testing experience: it is not a straightforward, indivisible commodity. Testing is a diverse field, and highly context-dependent: what works for you in your context might not work for me in mine. Yet the recruitment of testers often boils down to a simple count of years in testing. A tester can spend decades testing in one context, then, when moved, suffer from “that’s not how you test” syndrome. Such an individual is ill-equipped to learn, or even consider, the approaches and techniques that are relevant to a new situation. Even a testing virgin could be a better choice, if accompanied by an open mind. Diversity of experience, and a willingness to learn and adapt, count for far more than years. Counting experience is for fools.