In the previous post I discussed two pricing models that are generally used for contracting testing services on a project-by-project basis. This post will shift the emphasis to pricing models that are more commonly used for ongoing (multiple project, multiple release) testing services.
At its most basic, fixed capacity pricing means that the vendor provides a given number of testers for a set price. Such contracts are often longer term than single projects and can span years. This gives the vendor a degree of predictability when it comes to revenue and enables them to provide services at a discount versus time and materials (T&M) rates.
As testing services are provided for a set price, this model is sometimes erroneously referred to as “fixed price”. It should not be confused with the fixed price model described in Part 1: under fixed capacity there is no defined output that the vendor must deliver. Think of it as an aggregated form of time and materials pricing where testers are contracted en masse rather than by the hour.
Remember the general rule of vendor performance that I introduced in Part 1?
In the absence of other factors most people try to do their best in the hope of getting more work.
Unfortunately, this pricing arrangement does share one factor with fixed price: a perverse incentive that can conflict with the above rule. Because prices are attached to the service as a whole, rather than to the rates of particular individuals, the vendor has an incentive to increase margins by reducing costs. The most obvious way of doing so is “juniorization”, the replacement of expensive testers with cheaper ones, either by substituting less skilled testers or by moving work to a lower cost location. The latter introduces organizational complexity, an increase in the number of handoffs, and the challenges of working across time zones and cultures. Such labor arbitrage ignores the skilled nature of testing, and can have a negative impact on the value of testing. I discussed this further here.
Now onto Output Based Pricing. First, let’s clear something up: “output based pricing” is a misnomer. Please find the marketing people who came up with this phrase and introduce them to the false advertising legislation appropriate to their jurisdiction. The output of testing is information and I only know of one way to price that: on the open market. Dear client, imagine the scene: the CIO of your primary competitor is sitting in a smoke-filled bar enjoying a quiet drink, when a hooded stranger sidles up to her and says: “Hey, you wanna buy a bug report?”1. Whilst I am sure there is demand for Knight Capital’s bug reports right now, I can’t see a market forming anytime soon. So-called “output based pricing” has nothing to do with the outputs of testing.
Here’s how it works: deliverables, such as test strategies, test plans and documented test cases, and activities, such as test case execution or test cycles, are assigned a number of points (called variously “quality points” or “story points” – spot the bandwagon jumping). Each point has a value that translates into a typical amount of effort and therefore a price. When a client requests testing for a project, or for the release of an application, the vendor estimates the total set of testing deliverables and activities that will be required, and the associated points are added up to determine the price that will be charged. In this way, output based pricing resembles fixed price: it provides a contractual mechanism for running multiple projects/releases on a fixed price basis. It therefore exposes both client and vendor to many of the risks that I described in Part 1 as being associated with fixed price contracts.
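The arithmetic behind such a scheme can be sketched in a few lines. This is purely illustrative: the artifact names, point weights and point price below are all invented for the example, not taken from any real vendor’s rate card.

```python
# Hypothetical point weights per deliverable/activity type.
POINTS_PER_ITEM = {
    "test_strategy": 20,
    "test_plan": 10,
    "documented_test_case": 1,
    "test_cycle": 15,
}

# Hypothetical price per point, in whatever currency unit applies.
PRICE_PER_POINT = 50


def quote(estimated_counts):
    """Total the points for the estimated deliverables and activities,
    then convert the total into a price."""
    total_points = sum(
        POINTS_PER_ITEM[item] * count
        for item, count in estimated_counts.items()
    )
    return total_points * PRICE_PER_POINT


# Example estimate: one strategy, one plan, 200 test cases, 3 test cycles.
estimate = {
    "test_strategy": 1,
    "test_plan": 1,
    "documented_test_case": 200,
    "test_cycle": 3,
}
print(quote(estimate))  # 20 + 10 + 200 + 45 = 275 points -> 13750
```

Note what the calculation prices: counts of artifacts and activities, not the information the testing produces. That is the gap the next paragraph picks at.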
Rather than “output based pricing”, it would be more accurate to describe this model as “artifact based pricing” or “activity based pricing”. Here’s the killer: conventional wisdom has it that this pricing model drives vendor efficiency by focusing them on their outputs! This is a tall order (and a tall tale) given that it has nothing to do with output and everything to do with artifacts and activities. Worse, by focusing on the appearance of testing it can serve to distract from the real work of investigation, discovery and communication.
To be concluded.
1 My thanks to Michael Bolton for this line.