Neil Barton

Friday, August 08, 2008

Still Aiming At Top Quartile?

Like London buses, I don't blog for ages, then three come along all at once.

Following Jedd Fower's article criticising the use of quartiles in benchmarking clauses, Global Services' e-magazine (page 39) has published a counter-view by the Compass benchmarker Scott Feuless. I like and respect Scott, so I talked the article over with him before posting my two cents.

Jedd's argument, in short, is that since benchmarkers use small groups of 5-10 peers, it is mathematically invalid to calculate a quartile, even if the Excel =QUARTILE() function manages to come up with an answer.
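To see what Jedd is getting at, here is a sketch (with invented prices) of the linear-interpolation method that Excel's =QUARTILE() uses. With only five peers, the "first quartile" lands exactly on the second-cheapest peer, so the whole calculation hangs on a single data point:

```python
# Hypothetical prices ($ per server per month) for a 5-peer group --
# the numbers are invented for illustration.
prices = [800, 900, 1000, 1100, 1300]

def quartile_1(values):
    """First (lower) quartile, using the same linear interpolation
    as Excel's =QUARTILE(range, 1)."""
    v = sorted(values)
    pos = (len(v) - 1) * 0.25   # Excel's zero-based position formula
    lo = int(pos)
    frac = pos - lo
    return v[lo] + frac * (v[lo + 1] - v[lo])

# With five peers, pos = 1.0 exactly: the "quartile" is simply the
# second-lowest price, whatever it happens to be.
print(quartile_1(prices))  # 900.0
```

The function will always return *a* number; Jedd's point is that with so few data points the number doesn't carry the statistical meaning a quartile is supposed to have.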

Scott's response comes in two parts. First, he suggests that no customer should settle for average. In this he is certainly aligned with every one of the last ten Procurement Directors I have met, despite the fact that Gartner, TPI, and Morgan Chambers have all recommended targeting the average, and that it is statistically impossible for all customers to be better than average.

Second, he proposes that upper-quartile benchmark clauses be implemented as a two-phase process. First, a large group of 20-100 comparable peers is chosen, and the top quartile extracted from it. Then the benchmark target is calculated as the average (mean) of those top-quartile companies, thus avoiding the use of =QUARTILE() on only 5 peers.
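The two-phase calculation Scott describes can be sketched in a few lines. The peer prices below are invented for illustration; the point is only the mechanics: rank the large group, keep the cheapest quartile, then average that subset:

```python
# Sketch of the two-phase target calculation described above.
# All prices are hypothetical, not real benchmark data.
def two_phase_target(peer_prices, quartile=0.25):
    """Phase 1: keep the cheapest quartile of a large peer group.
    Phase 2: return the mean of that top-quartile subset."""
    v = sorted(peer_prices)
    cutoff = max(1, round(len(v) * quartile))
    top_quartile = v[:cutoff]   # lowest prices = "top" performers
    return sum(top_quartile) / len(top_quartile)

# 20 hypothetical monthly server prices
peers = [850, 870, 900, 920, 950, 980, 1000, 1010, 1050, 1080,
         1100, 1120, 1150, 1180, 1200, 1250, 1300, 1350, 1400, 1500]
print(two_phase_target(peers))  # mean of the 5 cheapest: 898.0
```

Averaging five values is statistically far more defensible than interpolating a quartile from five values, which is exactly the appeal of the approach.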

This is an interesting approach, though not one that I have seen executed in practice by any benchmarker. For it to work, at least two conditions must hold:

1. It has to be explained and documented to the customer and the supplier in the benchmarker's report. And here's why:

2. The larger group of peers must be normalised before you calculate the quartile. Let's imagine a case where we want to benchmark a price of $1000 per Windows server per month for 24*7, 99.9% server availability. Let's also imagine that we find 20 other peers which (for the sake of argument) are exactly the same as the customer, except for the service level: 5 of the peers have 8*5 availability, 10 have 16*5, and 5 have 24*7. It's reasonable to assume that the five 8*5 peers will have the lowest price per server, and will therefore be the peers selected to go forward as the "top quartile" to the second phase. In that second phase, these peers will presumably be normalised to match the customer, by adjusting their prices upwards to reflect the customer's 24*7 SLA. But now something very undesirable has happened: we have left out of the peer group the five peers which most closely matched the customer, and which would have required the least normalisation, and used in their place five peers which were unlike the customer, relying on the benchmarker's normalisation to bridge the gap.
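The order-of-operations problem above is easy to demonstrate. In the sketch below, the prices and SLA uplift factors are invented for the example (assume an 8*5 service costs roughly 60% of the 24*7 price, and 16*5 about 80%):

```python
# Illustration of the normalisation-order problem. All numbers are
# assumptions made up for this example, not real benchmark data.
UPLIFT_TO_24x7 = {"8*5": 1.0 / 0.6, "16*5": 1.0 / 0.8, "24*7": 1.0}

# (service level, raw monthly price) for 20 hypothetical peers
peers = [("8*5", 700)] * 5 + [("16*5", 850)] * 10 + [("24*7", 1000)] * 5

# Wrong order: pick the cheapest quartile first, then normalise.
cheapest_raw = sorted(peers, key=lambda p: p[1])[:5]
wrong = [round(price * UPLIFT_TO_24x7[sla]) for sla, price in cheapest_raw]

# Right order: normalise everyone to 24*7 first, then pick the quartile.
normalised = sorted(price * UPLIFT_TO_24x7[sla] for sla, price in peers)
right = [round(p) for p in normalised[:5]]

print(wrong)  # five 8*5 peers, heavily adjusted: [1167, 1167, 1167, 1167, 1167]
print(right)  # the five matching 24*7 peers:     [1000, 1000, 1000, 1000, 1000]
```

Selecting before normalising throws away the five peers that needed no adjustment at all, and the resulting target depends entirely on how accurate the benchmarker's uplift factors are.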

This would also require the benchmarker to normalise all 20 peers instead of just the 5 which end up in the peer group. That's a lot of work, and benchmarkers work on fixed-price contracts. I suspect this is why I have personally never seen the technique put into practice, at least not in a transparent way.

By the way, for fans of cherry-picking (the practice of benchmarking each individually priced item in an outsourcing contract instead of the contract as a whole), Scott comes out clearly against it: "it is unrealistic and counter-productive to expect every benchmarked function to fall within the 'top quartile' or 'top decile' category".

Two Recent Benchmarking References

Since we're on the subject of Alsbridge, their Outsourcing Leadership web site also has an article on benchmark clauses by Rick Simmonds. He discusses whether it's worth having a benchmark clause at all, and comes to the conclusion that it is, despite quoting sceptics as saying "benchmarking provisions have hardly ever been successfully invoked".

Also mulling over benchmarking this week was Stephen Guth on his Vendor Management Office Blog. He discusses whether benchmarks should be used as an alternative to competitive RFPs, and comes to the conclusion "on an exceptional basis, yes". He doesn't mention the arguments which are, in my opinion, the strongest for using benchmarking instead of competitive tendering: it's much quicker, and it's much cheaper. And as long as the outsourcer is kept in touch with the process throughout, so that the results don't blind-side them, it can be very effective.

A Step Change In The Benchmarking Market

Alsbridge's announcement that they have acquired Nautilus and will be actively marketing their ProBenchmark offering is the biggest step change in the benchmarking market since Gartner acquired META Group in 2005.

It's not just that the combination of sourcing advisor and price benchmarker would seem such an obvious one. It's not even the fact that Alsbridge "plan to dramatically expand the market for benchmarking via an e-commerce initiative" and say that "the secure online delivery of pricing benchmarks at a radically different cost structure will disrupt the accepted norms". I parse this to mean that Alsbridge's benchmarks will be done over the net, much cheaper than existing offerings, and therefore likely to sell more widely.

What they have constructed in ProBenchmark is not a conventional benchmark where you compare your prices with a group of other customers. Instead, ProBenchmark is a predictive tool which can calculate what Nautilus think your price will be, based on your particular combination of price drivers. They let you play with an example for managed storage pricing. This is why it is cheaper than conventional price benchmarking. They don't have to spend time searching for comparable contracts and adjusting them to match your case. They've done it already, right there in the tool.

Nautilus was founded after that acquisition by some of the most respected and experienced META Group price benchmarkers, and has become a viable third option for benchmarking in the US, behind Gartner and Compass America. (In Europe, your third choice would be more likely to be Maturity or possibly Metri.) I think Nautilus's feel for how suppliers are likely to price is as good as anyone else's.

But the big question is: will this predictive approach be an acceptable source for executing contractual benchmark clauses, or will customers just use it as a quick and cost-effective indicator of whether their pricing is in line or not? Most benchmark clauses are quite explicit in saying that the benchmarker is expected to go out and find comparable contracts. I wonder how long it will be before we see customers proposing to drop this language and replace it with "in the judgement of the benchmarker" phrasing. I can't speak for any outsourcer, including the one I work for, but I think most outsourcers will strongly resist this. So will Gartner and Compass, since it's their business models that are being disrupted. So it will be nice for them to have something to agree on for once.