Dear Paul

Paul,

I’d like to thank you for your kind words regarding my recent post. I agree with your assertion that there are a number of factors at work that will influence whether a tester will notice more than a machine, and I’d love to know more about your case study.

I suspect that we are closely aligned when it comes to machine checking. One of the main benefits of making the checking/testing distinction is that it serves to highlight what is lost when one emphasizes checking at the expense of testing, or when one substitutes mechanized checks for human ones. I happened to glance at my dog-eared copy of the Test Heuristics Cheat Sheet today, and one item leapt out at me: “The narrower the view, the wider the ignorance”. Human checks have a narrower view than testing, and mechanized checks are narrower still. We need to acknowledge these tradeoffs, and manage them accordingly.
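To make that narrowness concrete, here is a deliberately tiny sketch in Python. The function, names and values are invented purely for illustration – they come from no real project – but the shape is typical of a mechanized check: it encodes exactly one question and one expected answer, and it is blind to everything else a thinking tester might notice while exercising the same feature.

    import unittest

    def invoice_total(net, tax_rate):
        """Stand-in for the system under test (entirely hypothetical)."""
        return round(net * (1 + tax_rate), 2)

    class InvoiceCheck(unittest.TestCase):
        def test_total_includes_tax(self):
            # The check asks one narrow question: does 100.00 net at 20% tax come to 120.00?
            self.assertEqual(invoice_total(100.00, 0.20), 120.00)
            # A green bar says only that this single comparison held. It says nothing
            # about how the check was chosen, whether 120.00 is the right oracle, or
            # what a human would notice about the rest of the invoice.

    if __name__ == "__main__":
        unittest.main()

That narrowness is not an argument against the check – it is exactly what makes it cheap to repeat – but it is a tradeoff to acknowledge and manage.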

I think we need to be careful about the meanings that we give to the word “check”. You say the usage that you have observed most often is when “talking about unmotivated, disinterested manual testers with little domain knowledge” or when “talking about machine checking”. Checking, in and of itself, is not a bad thing: rather, checks are essential tools. Further, checking, or more accurately the testing activities that necessarily surround checking, are neither unskilled nor unintelligent. Not all checks are created equal: the invention, implementation and interpretation of some checks can require great skill. It is, in my opinion, an error to conflate checking and the work of “unmotivated, disinterested manual testers with little domain knowledge”. Testers who are making heavy use of checking are not necessarily neglecting their testing.

More generally, I worry about the tendency – conscious or otherwise – to use terms such as “checking testers” (not your words) as a pejorative and to connect checking with bad testing. I would agree that the inflexible, unthinking use of checks is bad. And I agree that many instances of bad testing are check-heavy and thought-light. But rather than labeling those who act in this way as “bad testers”, and stopping at the label, perhaps we should go deeper in our analysis. We do, after all, belong to a community that prides itself on doing just that. I like that, in your post, you do so by exploring some traits that might influence the degree to which testers will go beyond checking.

There are a multitude of reasons why testers might stop short of testing, and it seems to me that many of them are systemic. Here are a few to consider, inspired in part by Ben Kelly’s series The Testing Dead (though far less stylish); the list is neither exhaustive nor are its entries mutually exclusive:

  • The bad. Some people may actually be bad testers. It happens; I’ve met a few.
  • The uninformed. The testers who don’t know any better than to iterate through checks. It’s how they were raised as testers. Checking is all they’ve been taught: how to design checks, how to monitor the progress of checks, how to manage any mismatches that the checks might identify.
  • The oppressed. The testers who are incentivized solely on the progress of checks, or who are punished if they fail to hit their daily checking quotas. Testing is trivial after all: any idiot can do it. If you can’t hit your test case target you must be lazy.
  • The disenfranchised. Ah, the independent test group! The somebody-else’s-problem group! Lock them in the lab, or better yet in a lab thousands of miles away, where they can’t bother the developers. Fed on a diet of low-bandwidth artifacts and divorced from the life and culture of the project, is it any wonder that their testing emphasizes the explicit and that their capacity to connect observations to value is compromised?
  • The demotivated. The testers who don’t care about their work. Perhaps they are simply uninformed nine-to-fivers, perhaps not. Perhaps they know that things can be different, cared once, but have given up: that’s one way to deaden the pain of hope unrealized. Many of the oppressed and disenfranchised might find themselves in this group one day.

Do you notice something? In many cases we can help! Perhaps we can encourage the bad to seek alternate careers. Perhaps we can help the uninformed by showing them a different way (and as you are an RST instructor, I know you are doing just that!). Perhaps we can even free the oppressed and the disenfranchised by influencing the customers of testing, the decision makers who insist on practices that run counter to their own best interests. That might take care of some of the demotivated too.

I like to think there is hope. Don’t you?

Kind regards,

Iain

9 thoughts on “Dear Paul”

  1. Dear Iain,

    Good post!
    We all have to start somewhere – even if that means helping someone who doesn’t know what (exploratory) testing is – or thinks of it as something new.

    Jesper

  2. >> There are a multitude of reasons why testers might stop short of testing, and it
    seems to me that many of them are systemic.

    I don’t agree with Paul’s blog post. However, I don’t think you are completely
    correct (logically).

    Consider the case of agile development. I think there is a lack of definition of
    testing. As a result, testing can be largely based on test automation, viz., mostly
    checking. I don’t think the reasons for that match any of those proposed by Paul
    or in Ben Kelly’s “The Testing Dead”.

    I think, *for people interested in testing*, the clear distinction between checking
    and testing can make them more effective in testing.

    Not sure if it is going to change the rest of the world. Not sure if that should
    be a motivation.

    1. Hello,

      Thanks, I’m curious: where do you see a logical error?

      My argument is not that checking is a bad thing; nor do I deny that successful testing might – on the surface at least – appear to be “mostly checking”. I’m more concerned that such checking be embedded in the context of /testing/, i.e. that appropriate thought be given to the selection and design of those checks, that the testers attend to their oracles, and that the output of checks be subjected to a more thoughtful evaluation than Red-Is-Bad/Green-Is-Good.

      BTW, changing the world isn’t a motivation, though it might be a means to an end 😉

      -Iain

  3. Hi Iain,

    I am not arguing against your thoughts on testing/checking – I think your ideas are admirable. I am questioning the reasons you gave for “why testers may stop short of testing” and the plan to help.

    Consider the case of agile developers.
    I think agile allows developers/testers to focus more on checks (the bad kind). There are systemic reasons for that which are not the same as those you listed. I think similar reasons may apply to testers who are not using agile.

    Many agile developers are strong thinkers and very motivated to create high-quality software. If we can’t change their behavior by explaining tests/checks, I don’t think we can change the behavior of other groups.

    While I think your efforts on clarifying tests/checks are great, I don’t think the clarification will necessarily change testers’ behavior.

    You may want to give soup to the poor, but what if the poor don’t want your soup?

    Again, I think your efforts are commendable and very useful.

  4. Then there are the handcuffed testers.

    Those who have been chained to report only what is in the script and nothing but the script. Unfortunately I’ve seen this all too often in my relatively short testing career. Every defect/variance/bug must be linked to a test case; while it’s possible to stretch the linkage with some creative wording, the practice is all too common and at times demotivating to those who care about quality products as opposed to pass/fail percentages.
