Neil Barton

Wednesday, May 30, 2012

How to spend $700/server/month with Amazon

Those who are attracted to the apparently low costs of Amazon's Elastic Compute Cloud should read the Economist's report of a New York man who managed to clock up a bill of over $1,000 in a day and a half. The bill was caused not by ignorance or stupidity, but by an unfortunate interaction between Google and Amazon software.

If you count the time he must have spent investigating the problem, fixing it, and getting his money refunded by Amazon (shall we estimate 16 hours at $100/hour?), his true losses to the cloud are even greater.
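To put a rough number on it (the hours and hourly rate are my guesses, not reported figures), a quick back-of-the-envelope sum in Python:

    # Back-of-the-envelope estimate; all figures assumed for illustration only.
    bill = 1_000          # USD: the surprise bill, later refunded by Amazon
    hours_spent = 16      # assumed time investigating, fixing, chasing the refund
    hourly_rate = 100     # assumed value of that time, USD per hour

    time_cost = hours_spent * hourly_rate
    print(f"Refunded bill:      ${bill:,}")
    print(f"Unrecoverable time: ${time_cost:,}")   # exceeds the bill itself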

Friday, April 20, 2012

An IT benchmarking service called Compass?

What a great idea!
Wonder if it will catch on?
http://xmg-global.com/services/compass/

Thursday, March 22, 2012

Everest call the bottom of outsourcing price declines

When, in the 60s, the British politician John Profumo denied consorting with prostitutes, one of the ladies concerned famously replied "He would, wouldn't he?" So my friends in IT consultancies and procurement will roll their eyes when they see what follows.

For more than ten years, buyers of outsourcing services have come to believe that IT unit costs will decline each year. Consultancies like Gartner, TPI/Compass, and Forrester have published research showing the amount of decline - indeed, I've written some of those papers myself.

All credit, then, to Ross Tisnovsky and Rahul Gehani at Everest, who have dared to put a contrary view. They recently published "Time to Take a Hike": Why Pricing in IT Deals Should Stay The Same, which offers three reasons why customers should pay the same or even more for their services.

Firstly, they point to underlying cost increases. They correctly point out that labour inflation is running at above 10% in some of the popular locations for delivering offshore IT services. They could also have mentioned energy costs: Keith Breed at TCL has found that data centre hosting rates have risen by 10% in the last four years, driven by more powerful servers that consume more power and the rising cost of power itself. Software licence costs are rising too, as anyone who still has an IBM mainframe will confirm.

Their second argument suggests that customers should accept inflation or COLA adjustments, in recognition that labour inflation is outside a vendor's control. The counter-argument is that vendors can improve their productivity, or move offshore resources from the most expensive cities to cheaper Tier 2 cities. Both are true, but it is worth thinking through the consequences. Improving productivity is great if it genuinely means getting the same result with less effort. Improving calls/agent/day on the Service Desk, however, clearly means shorter calls - which may not necessarily improve the service.

Their third argument suggests that service providers helped out during the 2009-10 recession by offering price reductions, and are now entitled to reap some of the benefits of the economic upturn. The counter-argument is that most service providers have maintained their margins anyway, and that currency fluctuations in the rupee now favour service providers. In truth, I've rarely met a Procurement Director whose idea of "partnership" worked this way. And this argument only holds as long as all the service providers bidding for your work think the same way.

But the rationale of rising costs remains strong. It's wrong to believe that Moore's Law shows that IT operating costs must always decline. The move from on-shore to off-shore delivery is something you can only do once, so it seems reasonable to expect that the pace of price decline over the last five years should now slow.

But then, I work for a service provider. So I would say that, wouldn't I?

Tuesday, August 16, 2011

The rise of the Server IMAC

The acronyms MAC, IMAC, and MACD all refer to the Installs, Moves, Adds, Changes, and Deletes of computers in an estate.

It's been a growing trend for five years to see desktop outsourcing contracts price IMACs separately from ongoing desktop support services. Desktop IMACs are one of the few IT services which cannot be automated or off-shored. You need to visit the user's desk, and the cost of an IMAC is therefore heavily influenced by the local labour costs at the user's location. Expect it to cost much more in Geneva than in Warsaw.

Until recently it was much rarer to see server IMACs priced individually. Server IMACs used to be bundled in with the price for ongoing server management, since servers are much more likely to be located in one or two central locations, and are rarely moved after their initial installation.

Heavy virtualisation and cloud technologies are changing this. Now a virtual server instance may be added in one month and removed in the next. You can't guarantee that a server OS instance will continue in operation for a full refresh period of 48 or 60 months, so it's not possible to spread the cost of the server Add over the ongoing server management charge. Consequently it's much more common now to see the one-time costs of setting up a new server built into a 'Server IMAC' charge payable when a virtual OS instance is created. Refresh of the underlying hardware platforms, on the other hand, is still usually built into the ongoing charges, since the timing is largely at the supplier's discretion.
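A simple sketch shows why the one-time charge matters; the figures below are invented purely to illustrate the mechanics:

    # Illustrative comparison of two ways to recover server set-up costs.
    # All figures are invented for the example.
    setup_cost = 1200        # one-time cost of building a new server OS instance
    refresh_months = 48      # assumed amortisation / refresh period
    actual_lifetime = 6      # months the virtual instance actually runs

    # Option 1: amortise the set-up cost into the monthly management charge
    monthly_uplift = setup_cost / refresh_months
    recovered = monthly_uplift * actual_lifetime
    print(f"Amortised model recovers ${recovered:.0f} of ${setup_cost}")   # $150

    # Option 2: charge a one-time 'Server IMAC' fee when the instance is created
    print(f"IMAC model recovers ${setup_cost} up front")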

Friday, July 01, 2011

How do service providers construct their price models?

Interesting debate over on LinkedIn in the Financial Management Community group, started by a guy from IBM.

Wednesday, March 24, 2010

The Emperor's New Scale Economies

When I talk to IT executives, their plans for cost savings often assume that economies of scale can be achieved from centralising a function. And often they can. But not all services scale the same way, and to calculate a Return On Investment it's necessary to understand a little more about the way the scale economies will work.


Consider first the two scale curves on the right. The X axis represents the number of units managed - it could be desktops, or servers, or network switches. The Y axis shows the cost per unit. Both A and B show that as the number of units rises, the cost per unit falls. However the savings on line B are much greater.


The trajectory of these lines can have a huge impact on the business case for a project. Say that achieving either curve requires a $1 million investment: the greater savings on line B will support a much larger investment than A, or, if the investment costs are the same, deliver a payback much sooner. In the current financial climate, an 18-month payback is much more likely to be authorised than a 36-month one.
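A toy payback calculation makes the point; the volumes and per-unit savings below are invented for illustration:

    # Toy payback calculation for two consolidation projects (figures invented).
    # Both need the same investment; B's steeper scale curve yields bigger savings.
    investment = 1_000_000                       # USD, up-front cost for A or B

    units = 5_000
    saving_per_unit_a = 5                        # USD per unit per month on curve A
    saving_per_unit_b = 12                       # USD per unit per month on curve B

    def payback_months(investment, monthly_saving):
        """Months of savings needed to repay the up-front investment."""
        return investment / monthly_saving

    for name, per_unit in [("A", saving_per_unit_a), ("B", saving_per_unit_b)]:
        months = payback_months(investment, units * per_unit)
        print(f"Curve {name}: payback in {months:.0f} months")
    # Curve A: ~40 months; Curve B: ~17 months - only B clears an 18-month hurdle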



Curve C introduces a complication. Economies of scale do not follow a straight line, but a power or exponential curve. Here the savings on offer are much greater if you have fewer than 2,000 units, and tail off to almost nothing by 10,000 units.
Curves like this occur when, for example, there is a minimum cost to support an environment, no matter how small it is. A Help Desk requires a ticket system however few calls are received; but the more calls there are, the smaller the share of the ticket system's cost in the "cost per call".
Here it's important to understand not only the shape of the curve, but also the volume at which the knee of the curve occurs. A $1 million investment may well pay off for a smaller organisation, but not for a larger one.
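A minimal sketch of a curve-C economy, assuming an invented fixed cost and per-unit cost, shows how quickly the savings tail off:

    # Sketch of a curve-C style scale economy: a fixed cost (e.g. the ticket
    # system) spread over a growing volume, plus a constant cost per unit.
    # Figures are invented for illustration.
    fixed_cost = 200_000      # annual cost incurred regardless of volume
    variable_cost = 50        # annual cost per unit managed

    for units in (500, 1_000, 2_000, 5_000, 10_000):
        cost_per_unit = fixed_cost / units + variable_cost
        print(f"{units:>6} units -> ${cost_per_unit:7.2f} per unit")

    # 500 -> $450, 2,000 -> $150, 10,000 -> $70: nearly all the saving is
    # available below ~2,000 units, and the curve is almost flat beyond that.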


Finally, Curve D is more complicated again. All the potential savings are achieved between 3,000 and 7,000 units. Centralising two groups of 3,000 units each will attain substantial savings. Yet centralising two groups of 8,000 units will deliver much less, even though the investment required might well be higher.

Curves like this occur when a service is built from components which follow different scale curves. For example, the labour efficiencies in managing a service may be quite different from the efficiencies in buying software or hardware.
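Here is a sketch of how such a composite curve can arise; the figures and break points are invented for illustration:

    # Sketch of a curve-D style composite: two components with different scale
    # behaviours. Figures and break points are invented for illustration.
    def labour_cost_per_unit(n):
        # Assume labour efficiencies only start once a team can specialise
        # (~3,000 units) and are exhausted by ~7,000 units.
        if n < 3_000:
            return 120
        if n > 7_000:
            return 60
        return 120 - 60 * (n - 3_000) / 4_000   # linear improvement in between

    def software_cost_per_unit(n):
        return 40                                # flat per-unit licence, no scale effect

    for n in (2_000, 3_000, 5_000, 7_000, 10_000):
        total = labour_cost_per_unit(n) + software_cost_per_unit(n)
        print(f"{n:>6} units -> ${total:.0f} per unit")
    # All the saving appears between 3,000 and 7,000 units; none before or after.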


Scale economies can be extremely important to understand when comparing IT service prices. An organisation paying $10 per Gbyte should not be jealous of one paying $5 per Gbyte, if the first is managing a 20 Tbyte SAN and the second an 800 Tbyte SAN. Not all IT services follow the same curve, and the curve can be influenced by the way an organisation defines its requirements. For example, 5,000 desktops can only be managed more cheaply than 2,000 if they are standardised and built on the same image.


When considering a proposal to centralise and consolidate, then, the IT executive should be asking not just what economies of scale can be achieved but also:
  • what is the shape of the curve?
  • at what volumes do we find the knee(s) of the curve?

Tuesday, March 16, 2010

Measuring Data Centre Efficiency

I found a rich seam of practical and clear white papers on measuring Data Centre efficiency. They come from Neil Rasmussen at APC, a supplier of Data Centre cooling and power equipment.

I had noticed the PUE ("Power Usage Effectiveness") metric being mentioned more often recently, but Neil describes in detail what is being measured, and how. Essentially it measures how much of a Data Centre's electrical power is consumed by the power and cooling equipment, and is therefore not used to power the computing equipment. He argues for an alternative metric, DCiE (Data Centre infrastructure Efficiency), but this is measuring the same thing in a different way: DCiE is simply the reciprocal of PUE, expressed as a percentage. A follow-on paper by Victor Avelar has some practical insights on how to measure this.
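Both metrics come from the same two measurements; the figures below are invented for illustration:

    # PUE and DCiE from the same two measurements (example figures invented).
    total_facility_kw = 1_500    # everything the Data Centre draws from the grid
    it_equipment_kw = 900        # power delivered to servers, storage and network

    pue = total_facility_kw / it_equipment_kw            # 1.67: lower is better
    dcie = it_equipment_kw / total_facility_kw * 100     # 60%: higher is better

    print(f"PUE  = {pue:.2f}")
    print(f"DCiE = {dcie:.0f}%")
    # 600 kW of the 1,500 kW is consumed by power and cooling equipment,
    # not by the computing equipment.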

It shows how important the whole issue of Data Centre costs has become. A few years ago, the TCO of a server could be roughly split 30% on hardware, 30% on software, and 30% on labour, with data centre hosting costs lost in the remaining 10% along with other overheads. Three things have changed:
  • As servers get more powerful, they consume more power (and therefore need more cooling, which consumes more power ...)
  • Power itself is getting a lot more expensive
  • Other elements of server TCO are falling (hardware is getting cheaper, server management is being off-shored), making data centre hosting a much larger percentage of server TCO.
More white papers from APC here and here, the latter including a discussion of Data Centre TCO.

However, those I have read so far have left me hungry. They describe PUE and DCiE as measuring "Data Centre output" or "useful computing work", but what they actually measure is the output of power to the servers, not the output of computing from the servers to users. Granted, this is a notoriously difficult thing to measure: after 20 years of debate about MIPS, SPECints, TPC, and SAPS, there is still no generally accepted industry measure of server computing power. Indeed, server output has been so hard to count that some have semi-seriously suggested it would be easier to measure Data Centres by their storage capacity in Terabytes, rather than their computing capacity. Perhaps, even more flippantly, we can expect in the future to measure Data Centre costs in terms of kW consumed per user per day?