Neil Barton

Wednesday, March 24, 2010

The Emperor's New Scale Economies

When I talk to IT executives, their plans for cost savings often assume that economies of scale can be achieved from centralising a function. And often they can. But not all services scale the same way, and to calculate a Return On Investment it's necessary to understand a little more about the way the scale economies will work.

Consider first the two scale curves on the right. The X axis represents the number of units managed - it could be desktops, or servers, or network switches. The Y axis shows the cost per unit. Both A and B show that as the number of units rises, the cost per unit falls. However, the savings on line B are much greater.

The trajectory of these lines can have a huge impact on the business case for a project. The greater savings on line B will justify a much larger investment than line A will. Or, if the investment costs are the same - say $1 million each - B will deliver a payback much sooner than A. In the current financial atmosphere, an 18 month payback is much more likely to be authorised than one for 36 months.
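To make the payback comparison concrete, here is a small Python sketch. The $1 million investment comes from the example above; the monthly savings figures are hypothetical, chosen only to illustrate how a steeper savings curve halves the payback period:

```python
# Illustrative payback comparison. The monthly savings numbers are
# invented; only the $1M investment comes from the example in the text.

def payback_months(investment, monthly_savings):
    """Months until cumulative savings cover the investment."""
    return investment / monthly_savings

investment = 1_000_000   # $1M, as in the example above
savings_a = 28_000       # shallow curve A: modest monthly savings (hypothetical)
savings_b = 56_000       # steep curve B: roughly double the savings (hypothetical)

print(f"A pays back in {payback_months(investment, savings_a):.0f} months")
print(f"B pays back in {payback_months(investment, savings_b):.0f} months")
```

With these numbers, A takes about 36 months to pay back and B about 18 - exactly the gap between an easy and a hard approval.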

Curve C introduces a complication. Economies of scale do not follow a straight line, but a power or exponential curve. Therefore the savings on offer are much greater if you have fewer than 2,000 units, and tail off to almost nothing if you have 10,000 units.
Curves like this occur when there is a minimum cost to support an environment, no matter how large it is. For example, a Help Desk requires a ticket system no matter how many calls are received; the more calls there are, the more thinly the cost of that ticket system is spread across the "cost per call".
Here it's important to understand not only the shape of the curve, but also the volume at which the knee of the curve occurs. A $1 million investment may well pay off for a smaller organisation, but not for a larger one.
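The Help Desk example can be sketched in a few lines of Python. All figures here are hypothetical - a fixed ticket-system cost spread over a growing call volume, plus a per-call cost that does not shrink:

```python
# A minimal sketch of Curve C: a fixed cost (the ticket system) spread
# over more and more units, plus a constant per-unit cost.
# All figures are hypothetical, for illustration only.

FIXED_COST = 200_000   # e.g. annual cost of the ticket system (assumed)
VARIABLE_COST = 50     # per-call cost that does not shrink with volume (assumed)

def cost_per_unit(units):
    """Cost per unit = share of the fixed cost + the constant variable cost."""
    return FIXED_COST / units + VARIABLE_COST

for units in (500, 1_000, 2_000, 5_000, 10_000):
    print(f"{units:>6} units: ${cost_per_unit(units):,.0f} per unit")
```

The cost per unit falls steeply up to around 2,000 units and then flattens out - the "knee" of the curve sits wherever the fixed cost stops dominating the total.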

Finally, Curve D is more complicated again. All the potential savings are achieved between 3,000 and 7,000 units. Centralising two groups of 3,000 units each will attain substantial savings. Yet centralising two groups of 8,000 units will deliver much less, even though the investment required might well be higher.

Curves like this occur when a service is built from components which follow different scale curves. For example, the labour efficiencies in managing a service may be quite different from the efficiencies in buying software or hardware.
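A composite curve like D can be sketched by summing components with different scale behaviour. This is a hypothetical model: labour follows a fixed-cost-spread-over-units shape, while software is priced in invented licence tiers whose discounts apply only in a middle band:

```python
# Sketch of Curve D: a service whose components scale differently.
# Both cost models below are hypothetical, for illustration only.

LABOUR_FIXED = 600_000   # assumed fixed team cost, spread over units

def labour_per_unit(units):
    return LABOUR_FIXED / units

def software_per_unit(units):
    # Hypothetical licence tiers: discounts only between 3,000 and 7,000 units.
    if units < 3_000:
        return 120
    if units < 7_000:
        return 120 - 0.01 * (units - 3_000)   # price falls across the band
    return 80                                  # no further discount above 7,000

for units in (2_000, 3_000, 5_000, 7_000, 10_000):
    total = labour_per_unit(units) + software_per_unit(units)
    print(f"{units:>6} units: ${total:,.0f} per unit")
```

In this sketch most of the per-unit saving arrives between 3,000 and 7,000 units; growth beyond 7,000 buys comparatively little, which is why two groups of 8,000 gain less from consolidation than two groups of 3,000.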

Scale economies can be extremely important to understand when comparing IT service prices. An organisation managing a 20 Tbyte SAN and paying $10 per Gbyte should not be envious of one paying $5 per Gbyte for an 800 Tbyte SAN. Not all IT services follow the same curve, and the curve can be influenced by the way an organisation defines its requirements. For example, 5,000 desktops can only be managed more cheaply than 2,000 if they are standardised and built on the same image.

When considering a proposal to centralise and consolidate, then, the IT executive should be asking not just what economies of scale can be achieved but also:
  • what is the shape of the curve?
  • at what volumes do we find the knee(s) of the curve?

Tuesday, March 16, 2010

Measuring Data Centre Efficiency

I found a rich seam of practical and clear white papers on measuring Data Centre efficiency. They come from Neil Rasmussen at APC, a supplier of Data Centre cooling and power equipment.

I had noticed the PUE ("Power Usage Effectiveness") metric being mentioned more often recently, but Neil describes in detail what is being measured, and how. Essentially it measures how much electrical power is consumed by the power and cooling equipment in a Data Centre, and is therefore not used to power the computing equipment. He argues for an alternative metric, DCiE (Data Centre Infrastructure Efficiency), but this measures the same thing in a different way: DCiE is simply the reciprocal of PUE, expressed as a percentage. A follow-on paper by Victor Avelar has some practical insights on how to measure this.
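The relationship between the two metrics is easy to show. Using hypothetical meter readings (the definitions are standard; the kW figures are invented):

```python
# How PUE and DCiE relate, using hypothetical meter readings.
# PUE  = total facility power / IT equipment power
# DCiE = IT equipment power / total facility power (i.e. 1 / PUE)

total_facility_kw = 1_500   # power entering the data centre (assumed)
it_equipment_kw = 750       # power reaching the computing equipment (assumed)

pue = total_facility_kw / it_equipment_kw
dcie = it_equipment_kw / total_facility_kw

print(f"PUE  = {pue:.2f}")    # 2.00: half the power never reaches the servers
print(f"DCiE = {dcie:.0%}")   # 50%: the same fact, expressed as an efficiency
```

A PUE of 2.0 and a DCiE of 50% say the same thing: for every watt delivered to the computing equipment, another watt is spent on power distribution and cooling.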

It shows how important the whole issue of Data Centre costs has become. A few years ago, the TCO of a server could be roughly split 30% on hardware, 30% on software, and 30% on labour, with data centre hosting costs lost in the remaining 10% along with other overheads. Three things have changed:
  • As servers get more powerful, they consume more power (and therefore need more cooling, which consumes more power ...)
  • Power itself is getting a lot more expensive
  • Other elements of server TCO are falling (hardware is getting cheaper, server management is being off-shored), making data centre hosting a much larger percentage of server TCO.

More white papers from APC here and here, the latter including a discussion of Data Centre TCO. However, those I have read so far have left me hungry. They describe PUE and DCiE as measuring "Data Centre output" or "useful computing work", but what they actually measure is the output of power to the servers, not the output of computing from the servers to users. Granted, this is a notoriously difficult thing to measure: after 20 years of debate about MIPS, SPECints, TPC, and SAPS, there is still no generally accepted industry measure of server computing power. Indeed, computing capacity has proved so hard to quantify that some have semi-seriously suggested it would be easier to measure Data Centres by their storage capacity in Terabytes instead. Perhaps, even more flippantly, we can expect in future to measure Data Centre costs in terms of kW consumed per user per day?