
Putting a Price on Testing, Part 3

In the preceding posts (here and here) I took a look at some of the pricing models that are commonly used for testing services. These posts were far from exhaustive: I avoided models that are relatively rare (e.g. bug based pricing) or that have not yet made the jump to testing from other types of outsourced service (e.g. outcome based pricing [1]).

Where does this leave us?

Many of these models place efficiency over effectiveness: a pointless exercise. Yes, in testing there is a relationship between efficiency and effectiveness, in that a loss of efficiency represents an opportunity cost in terms of coverage, but coverage is not the only aspect of effectiveness. Testing the things that matter is one such aspect. Getting inside the heads of our stakeholders so that we recognize problems when we see them is another. So is sharing the information we unearth in a manner that can be understood. The perverse incentives of pricing to encourage efficiency can represent a threat to all of these: I can iterate VERY efficiently through entirely the wrong tests, and in doing so provide you with a testing service that is anything but effective. And as we have seen, some pricing models can encourage cost efficiency rather than testing efficiency. There’s much to be said for NOT shooting at the wrong target.

The purest, simplest, and mercifully most common pricing model is still Time and Materials: you get paid for the effort you put in. Of all the models, this brings the least risk of incentives that distort behaviour, and if the effectiveness of your testing is important to you and your customers, then I’d encourage you to stick with this.

I’d also encourage you to get creative and consider inventing new models: but for goodness’ sake, please engage brain before doing so. Perhaps there is room to adapt “output” based pricing to make it a little less fixed-price-like. Perhaps the future is “session based pricing” or some other mechanism that hasn’t been invented yet. Perhaps this topic will be made redundant by the eventual death of outsourced testing (I suspect this will not happen, though I believe that such services will – and must – change beyond recognition [2]). I don’t know. But what I do know is that we won’t know until we’ve tested it.

Notes

[1] OK, I can’t resist a short rant. Outcome based pricing is a model where the vendor only gets paid if the project delivers the benefits that it is expected to deliver. I have seen no evidence that this has yet been applied to a testing service, but as it is becoming something of a fashion in certain circles, I’m sure it is only a matter of time before some cretin attempts to apply it to testing services. On the surface, it might appear to make sense: to those who think that testers assure the value of software. Testers, however, are not Quality Assurance. This has been covered more than adequately elsewhere: if this is news to you, please exercise your Google muscles.

[2] More, I think, to come.

Putting a Price on Testing, Part 2

In the previous post I discussed two pricing models that are generally used for contracting testing services on a project by project basis. This post will shift the emphasis to pricing models that are more commonly used for ongoing (multiple project, multiple release) testing services.

At its most basic, fixed capacity pricing means that the vendor provides a given number of testers for a set price. Such contracts are often longer term than single projects and can span years. This gives the vendor a degree of predictability when it comes to revenue and enables them to provide services at a discount versus time and materials (T&M) rates.

As testing services are provided for a set price, this model is sometimes erroneously referred to as “fixed price”.  It should not be confused with the fixed price model described in Part 1: under fixed capacity there is no defined output that the vendor must deliver. Think of it as an aggregated form of time and materials pricing where testers are contracted en masse rather than by the hour.
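
If it helps to see the arithmetic, here is a minimal sketch of fixed capacity pricing as aggregated T&M. Every figure in it (rate, hours, team size, discount) is a number I have invented purely for illustration:

```python
# Fixed capacity pricing as aggregated T&M: a sketch with invented figures.

TM_HOURLY_RATE = 90       # hypothetical standard T&M rate per tester
HOURS_PER_MONTH = 160     # billable hours per tester per month
TEAM_SIZE = 10            # testers provided under the contract
CAPACITY_DISCOUNT = 0.12  # discount for a long term, predictable commitment

# What the same team would cost at straight T&M rates.
tm_equivalent = TM_HOURLY_RATE * HOURS_PER_MONTH * TEAM_SIZE

# The fixed capacity price: the same capacity, sold en masse at a discount.
fixed_capacity_price = tm_equivalent * (1 - CAPACITY_DISCOUNT)

print(f"T&M equivalent per month: {tm_equivalent:>9,}")
print(f"Fixed capacity per month: {fixed_capacity_price:>9,.0f}")
# Note: the vendor owes capacity, not any defined output or deliverable.
```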

Remember that I introduced a general rule of vendor performance in Part 1?

In the absence of other factors most people will try to do their best in the hope of getting more work.

Unfortunately, there is one factor that this pricing arrangement does share with fixed price: a perverse incentive that can conflict with the above rule. As prices are associated with the service as a whole, rather than the rates of particular individuals, the vendor has an incentive to increase margins by reducing costs. The most obvious means of doing so is “juniorization”: replacing expensive testers with cheaper ones, either by substituting less skilled testers or by moving work to a lower-cost location. The latter introduces organizational complexity, an increase in the number of handoffs, and the challenges of working across time zones and cultures. Such labor arbitrage ignores the skilled nature of testing, and can have a negative impact on the value of testing. I discussed this further here.

Now on to Output Based Pricing. First, let’s clear something up: “output based pricing” is a misnomer. Please find the marketing people who came up with this phrase and introduce them to the false advertising legislation appropriate to their jurisdiction. The output of testing is information, and I only know of one way to price that: on the open market. Dear client, imagine the scene: the CIO of your primary competitor is sitting in a smoke-filled bar enjoying a quiet drink, when a hooded stranger sidles up to her and says: “Hey, you wanna buy a bug report?” [1] Whilst I am sure there is demand for Knight Capital’s bug reports right now, I can’t see a market forming anytime soon. So-called “output based pricing” has nothing to do with the outputs of testing.

Here’s how it works: deliverables (such as test strategies, test plans, and documented test cases) and activities (such as test case executions or test cycles) are assigned a number of points (called variously “quality points” or “story points” – spot the bandwagon jumping). Each point has a value that translates into a typical amount of effort, and therefore a price. When a client requests testing for a project, or for the release of an application, the vendor estimates the total set of testing deliverables and activities that will be required, and the associated points are added up to determine the price that will be charged. In this way, output based pricing resembles fixed price: it provides a contractual mechanism for running multiple projects/releases on a fixed price basis. It therefore exposes both client and vendor to many of the risks that I described in Part 1 as being associated with fixed price contracts.
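
To make the mechanics concrete, here is a minimal sketch of how such a quote might be assembled. The catalogue, point values and rate are entirely invented for illustration; no real vendor’s price card is implied:

```python
# "Output based" (really artifact/activity based) pricing: a sketch with
# an invented points catalogue and rate.

# Points assigned to each deliverable or activity type.
POINTS = {
    "test_strategy": 40,
    "test_plan": 20,
    "documented_test_case": 1,
    "test_cycle_execution": 10,
}

RATE_PER_POINT = 75.0  # each point maps to a typical effort, and so a price

def quote(estimate):
    """Sum the points for the estimated items and convert them to a price."""
    total_points = sum(POINTS[item] * qty for item, qty in estimate.items())
    return total_points * RATE_PER_POINT

# Example: a release needing one strategy, two plans, 300 documented test
# cases and four test cycles.
release_estimate = {
    "test_strategy": 1,
    "test_plan": 2,
    "documented_test_case": 300,
    "test_cycle_execution": 4,
}
print(f"Quoted price: {quote(release_estimate):,.2f}")

# Notice what is priced: artifacts and activities. The information that
# testing actually produces appears nowhere in the calculation.
```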

Instead of calling this model output based pricing, it would be more accurate to describe it as “artifact based pricing” or “activity based pricing”. Here’s the killer: conventional wisdom has it that this pricing model drives vendor efficiency by focusing vendors on their outputs! This is a tall order (and a tall tale) given that it has nothing to do with outputs and everything to do with artifacts and activities. Worse, by focusing on the appearance of testing, it can serve to distract from the real work of investigation, discovery and communication.

To be concluded.


[1] My thanks to Michael Bolton for this line.

Putting a Price on Testing, Part 1

The business of testing can be fascinating: I’ve recently been grappling with some of the claims concerning different pricing models that one can apply to testing services. This short series of posts explores a number of common models, reflecting my current thinking on each one.

Let’s start with the simplest and most common arrangement: Time and Materials (T&M). The vendor supplies testers at an hourly or daily rate and the client pays for the amount of testing effort that they need. Conventional wisdom suggests that this is an undesirable arrangement: pricing on the basis of inputs (i.e. effort) provides no incentive for the vendor to improve efficiency, resulting in unnecessarily high costs for the client.

I disagree, and propose that the following rule applies:

In the absence of other factors most people will try to do their best in the hope of getting more work.

With this rule in mind, a vendor has plenty of incentive to make improvements: these are a relationship building investment that can help to secure repeat business. Later in this series I will argue that alternative billing arrangements can sometimes interfere with this rule.

Fixed Price contracts are a popular alternative to T&M: the vendor provides testing services to the client at a fixed price, generally set on a project-by-project basis. The client pays this price regardless of the testing effort actually expended on the project. This means that testing is not the only service that the vendor is providing: they are also accepting payment for the transfer of risk from the client. Fixed price contracts are often negotiated on the basis of T&M pricing, plus a premium to account for the risk. There’s a name for this: it’s called insurance. I’m sure that most vendors don’t realize that they’re in the insurance business, and I’ve certainly never seen any evidence of insurance expertise being applied to pricing.

Many executives will tell you that the risks associated with a portfolio of such projects “net out” at a level that will be covered by the risk premium. This assumes that project delays are normally distributed, with significant delays being highly improbable. Unfortunately this kind of catastrophic delay can, and does, occur. Ever seen one? I’m guessing that you have. I’ve seen several, including one project that overran by two years at a cost of millions to the vendor. Projects that go this far off the rails can wipe out the profitability of all but the largest of portfolios. Selling testing services in this way is betting against disaster for the promise of relatively modest returns.
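
To see why the “netting out” argument is shakier than it sounds, consider a toy simulation. Every number below (portfolio size, premium, disaster probability and cost) is an assumption of mine, chosen only to illustrate the shape of the bet:

```python
# A toy Monte Carlo sketch of a fixed price portfolio with a small chance
# of catastrophic overrun. All figures are invented for illustration.
import random

random.seed(1)

PROJECTS = 20          # fixed price projects in the portfolio
BASE_COST = 100.0      # expected T&M cost of testing per project
PREMIUM = 0.15         # risk premium charged on top of the T&M estimate
P_DISASTER = 0.05      # small probability a project goes badly off the rails
DISASTER_COST = 300.0  # a disaster costs several times the estimate

def portfolio_profit():
    profit = 0.0
    for _ in range(PROJECTS):
        price = BASE_COST * (1 + PREMIUM)
        if random.random() < P_DISASTER:
            cost = DISASTER_COST                  # catastrophic overrun
        else:
            cost = random.gauss(BASE_COST, 10.0)  # routine variation
        profit += price - cost
    return profit

runs = [portfolio_profit() for _ in range(10_000)]
print(f"Mean portfolio profit: {sum(runs) / len(runs):.1f}")
print(f"Loss-making portfolios: {sum(p < 0 for p in runs) / len(runs):.1%}")
# Modest expected returns, with a meaningful chance that a couple of
# disasters wipe out the margin on the entire portfolio.
```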

Fixed price contracts also add to the challenge of estimation. How do you estimate how long testing will take? Many approaches to test estimation may seem plausible, rational even, yet none offers anything more than false confidence. The bottom line is that testing will last as long as testing needs to last: testers have no control over the risk tolerance of clients or the quality of the code, yet these are the primary factors that determine how long a project (and therefore how long testing) will take.

The estimation problem, when combined with a competitive fixed price bid, leads to conflicting incentives for the vendor. On the one hand, the vendor wants to minimize risks, which can encourage them to build healthy contingency into their estimates. On the other hand, the vendor doesn’t want to price themselves out of the game, which can encourage them to bid low. Often, project management approaches are seen as the answer: specify in detail exactly what testing “deliverables” will be provided, document explicit assumptions, and if anything changes use change requests to cover the corresponding costs. This is the testing vendor’s equivalent of the insurer’s policy exclusions. The problem with this approach is that it can be extremely damaging to relationships. Try this exercise: name one popular insurance company! Whilst I’ve never personally witnessed a vendor who “low balls on price then rapes you with CRs”, I’ve heard plenty of horror stories from clients. If you are in a fixed price contract with a client, and you have any aspiration to develop a longer term relationship – or to preserve one – then tread carefully when dealing with change requests.

There are other related dysfunctions. Perceived risk breeds control; thus fixed price contracts often describe deliverables in detail. Unfortunately, deliverables are not the testing. Remember that in the discussion of T&M pricing I introduced this rule?

In the absence of other factors most people will try to do their best in the hope of getting more work.

The desire for control introduces another factor. The vendor must now balance competing desires: to do whatever is necessary to assist the project, and to mitigate their own risks through control and adherence to contracted deliverables. Excessive focus on the latter can erode the time available to investigate the software and reveal new information. Ironically, this opportunity cost can prevent or delay the discovery of problems, and consequently increase the probability and magnitude of overruns. This is most evident on a fixed price contract when things go wrong. When facing such a project, the vendor has an incentive to minimize their exposure by not incurring additional effort and delivering only that which they are contractually obliged to deliver. Need to explore beyond the letter of the requirements? Tough, not covered in the contract. This stick-to-the-plan-no-matter-how-screwed-it-turned-out-to-be-in-reality mentality is exactly the kind of behaviour that can result in further delays to the project, reinforcing its death spiral.

Let’s return to the notion of efficiency. Does a fixed price model encourage improved efficiency over T&M? It is often suggested that it might: with a cap on the payment they will receive, the vendor should have an incentive to work in a more efficient manner. Nonsense! This is true in only the narrowest of senses: the vendor has an incentive to fulfill contracted obligations at a minimal cost, thus maximizing profit. This is financial efficiency, not testing efficiency: they are not the same thing. A vendor can improve financial efficiency through “juniorization”, the use of less expensive testers. If cheaper testers take longer to achieve similar results, then it is entirely possible for such a gain in financial efficiency to be bought with a decline in testing efficiency, as the sketch below illustrates. This is another complicating factor! The vendor now has to balance the desire to reduce costs with a desire to perform.

Even if the vendor favors the latter and is able to make improvements in testing efficiency, this does not automatically translate into value for the client. Testing efficiency either results in cost savings, which are not generally passed on to the client on a fixed price contract (the client pays the same regardless of the actual effort), or frees capacity that can be used to test more. This is an opportunity for a vendor to invest in the success of the project and thereby their relationship with the client. However, the control culture that sometimes permeates fixed price work can introduce resistance to the suggestion that effort be devoted to anything not covered by contracted deliverables.
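
Here is the quick worked example I promised, with numbers I have made up purely for illustration, showing how the two kinds of efficiency can move in opposite directions:

```python
# Financial vs testing efficiency under "juniorization" on a fixed price
# contract. All figures are invented for illustration.

FIXED_PRICE = 100_000  # the client pays this regardless of actual effort

# Two staffing options assumed to deliver comparable contracted results:
senior = {"rate": 100, "hours": 800}   # skilled, expensive, fast
junior = {"rate": 45,  "hours": 1600}  # cheaper, but takes twice as long

for label, team in (("senior", senior), ("juniorized", junior)):
    cost = team["rate"] * team["hours"]
    margin = FIXED_PRICE - cost
    # Same results from more hours means fewer results per hour.
    testing_efficiency = senior["hours"] / team["hours"]
    print(f"{label:>10}: cost={cost:6}  vendor margin={margin:6}  "
          f"relative testing efficiency={testing_efficiency:.2f}")

# Juniorizing lifts the vendor's margin from 20,000 to 28,000 (financial
# efficiency up) while halving the results achieved per hour (testing
# efficiency down). On a fixed price contract the client sees none of the
# saving, and may well see less testing.
```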

Despite the risks of such performance distorting issues, a large number of testing clients are demanding fixed price contracts. And why not? To most, a transfer of risk will be significantly more important than the poorly understood and intangible effects on the testing itself. For their part, vendors – not wanting to lose business to their competitors – are accepting a business model that will cost many of them their shirts. Unfortunately, those that survive will probably ascribe their success to skill without recognizing the real reason: the good fortune to escape disastrous projects.

To be continued.