Commoditization: Three Scenarios

In the previous post, I introduced the commoditization of testing. This trend devalues the skill of the tester, and has far-reaching consequences for software development as a whole.

How might this play out? What might the future hold for testers?

Here are three possible scenarios.

Scenario 1: Runaway Devaluation

The feedback loop driving commoditization continues at full bore, and having claimed the enterprise, the infection spreads to software vendors. Gradually, the public becomes inured to failure, with many vendors seeing low software quality as a viable business model.

Testing failures occur regularly, but few executives connect this with diminishing skills. Testing is simply another problem to offload to third parties, and if that fails, to a different testing service provider. Meanwhile, the myth that testing is checking continues to ride high. With continued developments in model-based testing and test-driven development, many conclude that “the testing problem” has finally been solved by developers.

At the same time, growth in user testing and SME testing further erodes territory that formerly belonged to the tester. Eventually, the International Institute of Business Analysis – who already claim “validation” as part of their body of knowledge – subsumes any testing not performed by the developer or end user. Testers find themselves caught in a pincer from which there is no escape. In one form or another testing might live on, but testing as a profession is dead.

Scenario 2: Islands of Hope

The commoditization effect continues unabated, largely conquering the enterprise. Whilst this also spreads to software vendors, some see quality as a means to differentiate themselves in a marketplace largely defined by buggy software and long waits in the IVR-hell of technical support.

Silver bullets come and go, each long on promise and short on delivery: none can resolve “the testing problem”. Whilst successful software development is best served by skilled people working together, that answer is too hard for most: many executives reach for standards and process compliance in the hope of a solution. This only accelerates the decline of tester skills.

However, a scattered handful of executives recognize this problem. Perhaps they were testers themselves, have been fortunate enough to work with skilled testers, or simply see things clearly. These individuals cluster, seeking the company of the like-minded, gravitating to those organizations that differentiate themselves on quality. In this way a market niche is born, one in which a small number of skilled testers not only survive, but thrive.

Scenario 3: Generational Change

The commoditization of testing continues for years, driven by a generation of IT managers who never witnessed the power of skilled testing.

The ranks of unskilled testers swell. Lacking a demand for skill, few employers are interested in their development. Yet this cannot suppress the human appetite to learn; many testers turn to the Internet in order to develop themselves, and the Internet is owned by the passionate.

Online coaching and training – much of it voluntary – flourishes. Soon, supply outgrows the diminishing demand for skill. Many testers drop out of testing rather than serve as commodity checkers. Some leave the industry altogether; many become developers or project managers. As demographics works its magic, those who believe in a better way find themselves in executive roles. They demand testers with skill; they demand that testing be at the heart of software development. The feedback loop driving commoditization slows and finally reverses as a paradigm shift begins to take hold.

I don’t have a crystal ball, and these scenarios have no real predictive power. However, through the exploration of scenarios such as these, we can gain insights as to how to navigate the future – both as individuals and as a craft. I invite you to add your own.


The Commoditization of Testing

As testers, one of the greatest challenges we face is the perception that the service we provide is a commodity. This may not be universal, but it is endemic in many parts of the IT industry.

Is software testing a commodity? According to Investopedia, “When a product becomes indistinguishable from others like it and consumers buy on price alone, it becomes a commodity”. In order for testing to qualify as a commodity, testing services must lack qualitative differentiation. Yet testing has many sources of differentiation:

  • Approach. Testing is context dependent. Some testers will bring practices and experiences that are relevant to a project, others will not. Some will attempt to bend project context to their approach, others will adapt to its specific needs. Perhaps one day this will cease to be a source of differentiation, but not today.
  • Location. Testing involves the discovery, trade and transformation of information. When communication channels are open and information is shared, remote (even very remote) testing is feasible. When information is hidden and hallway decisions are the norm, colocation is a huge advantage. Project politics and project culture can have a significant impact on the value provided by testers in different locations.
  • Skills. Testing is a vast and growing discipline, with ample room for specialization. Again, context plays a role: different projects will require testers with different skills. Further, the range of skills required is mind-boggling: modeling skills, analysis skills, design skills, communication, relationship and interpersonal skills. Indeed, testing thrives on variety – the range of insights that different people with varied skills, outlooks and experiences bring to bear. The notion that testing is undifferentiated is anathema to testing itself.

In short, not all testing services are equal, nor should they be. Testing is not a commodity.

Then where does this perception come from? It originates from the belief that testing is unskilled. The most obvious act in testing is checking, which in itself requires little skill. In contrast, skilled activities such as modeling, analysis, design and evaluation are harder to observe. This belief is as superficial as believing that the cover is all that makes up a book.

Why is this important? The view that testing is a commodity provided by the unskilled has chilling effects on collaboration, effects that can destroy the mutual interdependence between testers and the wider project.

Let’s consider one possible scenario that illustrates how this can play out. Imagine for a moment that you are a non-testing participant on a project. Imagine that you see testing as essentially unskilled, a simple matter of translating specifications into scripts and then executing them.

You’re a busy person, and like anyone else on the project you are probably juggling a dozen different things at any given moment. The testers keep calling you, or emailing you, or messaging you, or asking for meetings. Why do they keep bothering you?!? Can’t they just get on with scripting? Can’t they do their jobs?

Testing depends on information. Mere specifications are a remarkably low bandwidth form of communication and on their own are seldom adequate for effective testing. Testers need to ask questions; it’s what they do. Yet by doing so, the tester can fail in the eyes of their stakeholders.

This is getting crazy, the testers won’t give up. Now they are calling the PM, BA and developers. Don’t they know we’ve got deadlines? Don’t they know we’ve got enough of our own work to do, without having to do theirs as well? It’s time to channel this through a handful of people.

A common reaction to the “we’re too busy to answer questions from testers” problem is to enact a communication plan which restricts the flow of information to (and from) testers through a small group of people. Those people may not have direct access to the relevant information, so may in turn have to ask other members of the project, thus continuing the distraction and introducing rounds of questions, answers and clarifications bouncing back and forth. With communications bandwidth restricted, and telephone game distortion, the value of information available to the testers diminishes rapidly.

What’s with these testers anyway? Not only are they a nuisance, but now that I’ve seen their scripts, they’re missing important stuff! And that bug that got through the other week…this can’t go on. The testers are clearly rubbish, we need to manage them directly.

At this point, the relationship between the testers and the wider project has completely failed. It is almost inevitable that this will lead to micromanagement, a master/slave relationship that further inhibits the testers’ opportunities for success. The mechanisms for this are varied. Perhaps the absence of trust will lead to demands for excessive documentation. Perhaps process or standards compliance will be enforced, in word rather than in spirit. Perhaps unsound performance metrics will be imposed. Each introduces opportunity costs, degrading the volume and value of the information the testers can provide. Each further distorts tester behavior. Each will increase the real cost of testing. And satisfaction in the testers will spiral ever downwards until the project comes to its natural end, or the testers are dismissed.

A further implication of commodity testing is the notion that testing services are essentially fungible, i.e. that it does not matter who provides the service, or from where. All that matters is price; this is labor arbitrage. The result is a race to the bottom, with testing service providers in the lowest cost regions of the world the front runners. Remote testing may be successful in some situations, but imagine the scenario above compounded with multiple time zones and cultural factors such as power distance. In such circumstances, success is not an option.

How might things be different? We testers need to be aware of these issues, and to realize that things can be different. We need to continuously develop our skills, to market their importance, and to find ways to give visibility to those aspects of testing that are not checking. The alternatives to our craft are unthinkable.