
User Acceptance Tricks?

Some time ago, I was an SAP consultant. Between projects I configured a variety of demonstration systems for use in presales. These were pretty rough and ready, but they did the job. The trick (and I mean that literally) was to carefully define detailed step-by-step scripts, test them, and make sure that the demonstrator followed them to the letter. These provided safe pathways: routes through the application that were free of problems. A demonstrator would stray from the path at their peril; the demo would quickly fall apart if they did.

This is analogous to some User Acceptance Testing practices that I’ve observed. Do you recognize this?

The blinkered scripting scam: Acceptance tests will be scripted in advance. They will be traced to requirements. The scripts will be reviewed and pretested by the project. If all the tests pass when executed by the acceptance team, then the software will be accepted.

From a project management perspective this would seem to make sense:

  • It gives the project an opportunity to check that the acceptance team is only testing behavior that is considered to be within the project's scope.
  • It gives the project an opportunity to make sure that acceptance tests will pass before they are formally executed.
  • It helps to ensure that the acceptance team can begin execution just as soon as the software is ready for them.

This is not testing, nor is it even meaningful checking: pretesting ensures that acceptance test execution will not reveal any new information. This is demonstration, and nothing more. It has consequences:

  • Execution is often the first opportunity that acceptance testers have to get their hands on the software. With little or no opportunity to interact with the software in advance, just how insightful will their preplanned tests be?
  • Bugs don’t neatly line up along the course charted by tests. Nor do they conveniently congregate around requirements or other abstractions of system behavior just waiting to be found. Confirmatory requirements-based testing will miss all manner of problems.
  • Pretesting creates safe pathways through the software. If acceptance testing is confined to these tests it can result in an acceptance decision regardless of the hazards that may lurk beyond these paths.
  • Acceptance testers, be they customers, users, or their representatives, have the potential to bring important insights to testing. They have a different perspective, one that is often centered on the value that the software could bring. This opportunity is wasted if they are made to follow a process that blinds them.

Acceptance testing is often differentiated from other forms of testing in terms of its purpose: whilst earlier tests are focused on finding problems, acceptance testing is sometimes positioned as a confidence building exercise. The risk is that acceptance testing becomes a confidence trick.

The good news is that this risk can be mitigated, even whilst checking many of the boxes that will satisfy project management. A couple of years ago I found myself in an unusual position: working for a vendor yet managing all testing, including acceptance by the customer. This presented a potential conflict of interest that I was determined to avoid. The contract was fixed price and payment was tied to a specific delivery date, so the project manager wanted to adopt practices similar to those described above. Fortunately, he also accepted that doing so risked imposing constraints on the quality of acceptance, and he was willing to entertain alternatives. We agreed on the following:

  • Domain and application experts would provide training to the acceptance team prior to testing, and would be on hand to provide coaching throughout.
  • User Acceptance Demonstrations would serve to provide basic verification of requirements.
  • This would be supplemented by exploratory testing, which would allow the acceptance testers to kick the tires and bring their own perspectives to bear in a more meaningful way than the scripts alone would allow.
  • A transparent and visibly fair triage process would be implemented, which would allow the customer to put forth their own prioritization of bugs whilst allowing the project management to intervene should bugs be reported that were beyond the scope of the project.

Project management had the control they needed over scope. The customer was able to get a good feel for the software and the value it would provide. We were able to identify a number of important bugs that would otherwise have escaped us and become warranty issues. With a little bit of thought, we managed to put the testing back into acceptance testing. Which are you, tester or grifter?

Selling Tobacco

I recently watched a presentation that Lee Copeland gave in 2007: The Nine Forgettings, which touches on a number of things that he feels testers often forget.

One thing in particular jumped out at me: “forgetting the boundaries”. In this section, Copeland discusses the problems that arise when testers consistently compensate for unacceptable behavior by other project members – such as BAs writing poor requirements, developers handing over code that isn’t unit tested, and PMs who call for insane hours.

I can relate to this, having frequently witnessed the kind of codependent behaviour that Copeland is talking about: testers who shrug and say “that’s just the way it is” are testers who have given up thinking about how things could be better for their customer, the project. Perhaps there are some lines that testers need to draw, some things that we need to push back on.

This left me trying to square a circle.

I also support the context driven view that as testers we provide a service to the project, that we need to adapt our testing to suit the context within which we operate, and that we should do the best testing that we can with what we are given.

So how do I reconcile these seemingly conflicting views?

Here are a few heuristics that help me:

Selling tobacco: Sometimes other members of the project will ask us to do something that we disagree with, that we believe will harm the effectiveness of our testing; our customers are asking us to sell them something that we don’t feel is in their interests. However, our customers are responsible adults, and are entitled to make their own decisions. Like selling tobacco, it is appropriate to give a health warning, then make the sale.

Selling crack: Sometimes (and hopefully rarely) we are asked to do something that is simply unethical – such as suppressing information or providing dishonest reports. Just say “no” to drugs.

Selling miracle cures: Last, but by no means least, sometimes we are asked to do the impossible – “ten days’ testing by this afternoon?”. Agreeing to unrealistic expectations is a recipe for disappointment. A grown-up conversation about alternatives is called for.

So, what have you been asked to sell today?

Update: since writing this, I’ve been rereading The Seven Basic Principles of the Context-Driven School. The heuristics above map well to part of Kaner and Bach’s commentary:

Context-driven testing has no room for this advocacy. Testers get what they get, and skilled context-driven testers must know how to cope with what comes their way. Of course, we can and should explain tradeoffs to people, make it clear what makes us more efficient and more effective, but ultimately, we see testing as a service to stakeholders who make the broader project management decisions.

  • Yes, of course, some demands are unreasonable and we should refuse them, such as demands that the tester falsify records, make false claims about the product or the testing, or work unreasonable hours. But this doesn’t mean that every stakeholder request is unreasonable, even some that we don’t like.
  • And yes, of course, some demands are absurd because they call for the impossible, such as assessing conformance of a product with contractually-specified characteristics without access to the contract or its specifications. But this doesn’t mean that every stakeholder request that we don’t like is absurd, or impossible.
  • And yes, of course, if our task is to assess conformance of the product with its specification, we need a specification. But that doesn’t mean we always need specifications or that it is always appropriate (or even usually appropriate) for us to insist on receiving them.

There are always constraints. Some of them are practical, others ethical. But within those constraints, we start from the project’s needs, not from our process preferences.