Digital Shmigital – What’s All The Fuss?

Okay, I get it, we live in a world dominated by technology and information. If your organisation isn’t digital, you’re either way behind or very, very unusual, e.g. a transcendental yak-farming monk. But why in the last year or so has this word Digital been hyped as the must-have new thing, as in “What, don’t tell me you don’t have a Chief Digital Officer and you’re not spending all your time and money on becoming digital?”

So just what is all the hype about? Is there something new that you might have missed? Something that all of your competitors are doing and if you don’t get on board you’ll be bought out a week on Tuesday by some gazillionaire cloudy millennial with 3 employees and a hot app? I don’t think so, because unless you really are one of those people who still own a black and white TV, chances are you’ve been doing ‘Digital’ for a long time now.

In a business context, ‘Digital’ is really about four things:

  1. Using computers to automate as much of your business as possible (and appropriate, because let’s face it, not everything can be done with a computer, like yak-farming), to save money, improve quality, reduce errors, and/or do stuff faster;
  2. Doing business with your customers electronically, e.g. over the Inter-web thingy on PCs, tablets, mobile phones, VR headsets and whatever new user interfaces the future has in store;
  3. Exploiting the data that you have as a result of running your business, plus what is available to you from other sources – to glean insights, create value, tailor interactions, build better offerings, do better marketing, and so on; and
  4. Getting data from and interacting with more and more peripheral devices, e.g. sports watches, motion sensors, CCTV cameras, heart-rate monitors, GPS sensors, weather sensors, and so on. The data from this ever-increasing list mostly feeds back into 3. – Exploiting data. And of course, some of the interactions with this ever-growing list of devices feed back into 2. – Doing business with your customers electronically.

So is any of this so earth-shatteringly new that if you don’t treble your IT budget and hire lots of really expensive Digital Consultants you’ll be out of business before you can say “Internet-of-Things?”

My contention is that it is not new at all.

Automating Business Processes. Notwithstanding the abacus, which is thousands of years old, there are real examples of early calculating machines, such as Pascal’s calculator of 1642, which could add and subtract two numbers directly (and multiply and divide by repeated operations). Or the Arithmomètre, the first mass-produced mechanical calculator, invented by Charles Xavier Thomas de Colmar around 1820 and manufactured until 1915. However, the first good example of a calculating machine that fits my first Digital theme from above – automating a business process – has to be Herman Hollerith’s tabulating machine, invented in the 1880s and used to speed up the 1890 US Census. Okay, the 1880 census took 8 years to process and the 1890 census, with a larger population and more data gathered, took 6, but it was still a pretty good efficiency improvement. On a more personal note, my mother learned to program in the 1950s and went on to automate a lot of processes, for example the optimisation of street lights to improve the flow of rush hour traffic.

When it comes to doing business with your customers online, many buzzwords have come and gone, such as E-commerce and multi-channel, and now it seems we have omni-channel, as though that is somehow a better prefix than multi. But aside from the introduction of new descriptors and buzzwords, doing business electronically with your customers and partners isn’t really a new concept, is it? In their efforts to reduce cost, improve service and attract more customers, organisations have been using new channels and new technology for quite a while. Think department store catalogues in the 1800s, or drive-through restaurants in the 1950s. In the 90s we had E-commerce: I remember a special “E-” division being set up at my company where top (read expensive and high maintenance) people were recruited from inside and outside the firm. If you’re wondering how successful this was – hint: no one talked about that division after a couple of years. Why? Because although everyone was doing it, it wasn’t that special. Sort of like creating a new division and a lot of hype to market shoes.

In the 80s and 90s I worked with a number of banks and trading firms who were introducing new channels. Some of the banks were reluctantly deploying ATMs, not because they saw them as a way of diverting high volume, low value customer interactions away from the more expensive branch channel, or because they felt they had to provide this level of convenience to their customers, but because everyone else was doing it. No kidding – ATMs were originally seen by most banks as a necessary evil and additional cost.

So in terms of Digital being about the use of technology to interact with your customers, there’s not really anything new here, other than lots of apps. It’s the app that has revolutionised the consumer experience, and it’s only a matter of time before this way of working hits the business world too. (But that’s a different story.) If you listen to the hype surrounding this digital thing, however, it’s about more than just bolting on a new user interface or channel for direct customer access; it includes automation and straight-through processing across all parts of the business (see Automating Business Processes above).

Big Data / Analytics / Artificial Intelligence. This must be new, because Google Trends shows that before 2011 it wasn’t a thing. But hold on for just a minute… SAS, the world’s largest privately held software company, started as a project at North Carolina State University to create a statistical analysis system, originally used by agricultural departments at universities in the late 1960s. It became an independent, private business in 1976. SAS and many other analytics packages are being used, and have been used for decades, by virtually every Fortune 500 company, whether they manufacture legacy widgets or sell newfangled smart devices. What is new is the volume of data that can be analysed, and some new sources of data to feed into the analytics, for example feeds from social media. And it’s arguable that the sophistication of the analytics has progressed to the point where we can call it ‘artificial intelligence’. But my contention here is that this is not a quantum leap of newness, but rather a continuation of a trend that has been happening since organisations started keeping records and exploiting the data they had collected in new and interesting ways.

For example, in the 90s a bank I was working for ran a study of their client base to determine who their best customers were. We were doing Big Data back then and we didn’t even know it! It turns out that it’s the middle-income couple in their 40s with a mortgage, car loan, credit cards and overdraft who bring home the bacon for the retail banks, and not always the high net worth individuals and families. Today that’s common sense but it wasn’t back in the 20th century.

In the securities industry, the so-called discount brokers were more sophisticated than the larger retail banks that sometimes owned them. They evolved from call centre trading in the late 60s, to IVR (press 1 for Sell, 2 for Buy), to PC-based dial-up access, to browser-based and then mobile-based trading. I believe the low cost discount brokers such as Schwab in the US (1975) and GreenLine in Canada (1984) really pushed the envelope in their use of technology to do business with their customers. They saw technology as serving two primary needs: cost reduction and customer satisfaction, and were usually on the cutting edge of bringing these technologies to the mass consumer.

And that brings me to the so-called Internet-of-Things. Did you know that the X10 protocol, which is used to communicate with household appliances such as lighting and heating, was introduced in the mid-1970s? In 1988 a friend of mine had a hot tub at her holiday home that she could call from her carphone so that it would be warm when she got there. Apparently a Coke machine at Carnegie Mellon University could report its inventory and temperature data over the early Internet in 1982. There are countless examples of connected devices enhancing our lives from the last hundred years, so you can’t tell me this is new, even though it has a cool TLA (three-letter acronym). What is new is the sheer volume of devices that are going to be network-addressable, which will drive a shift to the new IPv6 addressing scheme (because we’re running out of IPv4 xxx.xxx.xxx.xxx addresses – who knew back in the 70s that 4.3 billion wasn’t going to be enough?), not to mention a whole new emphasis on security. Right now there are bad guys out there who have amassed large pools of compromised IoT devices – security cameras, printers, routers, etc. – and can direct them to overwhelm websites in denial-of-service attacks and carry out other malicious activities. IoT – not new except for exponential growth – still early days in terms of how it might change our lives.
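For a sense of the numbers behind that addressing shift, here is a quick back-of-the-envelope comparison of the two address spaces (illustrative Python, nothing more):

```python
# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
ipv4_space = 2 ** 32   # roughly 4.3 billion possible addresses
ipv6_space = 2 ** 128  # roughly 3.4 x 10^38 possible addresses

print(f"IPv4 address space: {ipv4_space:,}")
print(f"IPv6 address space: {ipv6_space:.2e}")
print(f"IPv6 is {ipv6_space // ipv4_space:,} times larger")
```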

 


Cloud Computing – Hype or Here to Stay?

Summary: Cloud computing is over-hyped but it is important and businesses will increasingly be run on cloud resources, both because new businesses will start up on the cloud and because existing businesses will slowly migrate onto the cloud.

The other day I read a review of an inkjet printer – the Lomond Evojet 2. Standard inkjet printers have about 4,600 nozzles on a print-head that moves back and forth across the paper. The Evojet has about 70,000 nozzles on a stationary print-head under which the paper travels, allowing for crisper graphics and faster print speeds, but at a higher cost of construction. It turns out, though, that over time the cost per printed page is quite economical and in many cases actually lower than competitive inkjets that may have lower up-front purchase prices. So what has Lomond done about this and, in general, what’s a company to do when its product has a high purchase price but a lower total cost of ownership? For a hint, let me digress even further, and then I’ll get to the fluffy white stuff.

A company once made drill-bits that lasted 5 times as long as their competitors’ drill-bits, but cost twice as much to make. Market research and real-life experience taught them that the average consumer, standing in the tools section of the hardware store, 9 times out of 10 would choose the lower cost drill-bit. This company should have stopped selling drill-bits and started selling holes! It’s a hypothetical example to make a point, but consider the very real example of Rolls Royce. You may think they sell jet engines, but they actually sell thrust, or hot air ejected out the back of jet engines, what they call and have trade-marked as ‘power by the hour’. Rolls Royce provides jet propulsion as a service, a remarkable and successful business model. They take on all of the up-front costs and risks associated with development and manufacturing, as well as ongoing costs for maintenance and spare parts. Clients avoid high costs for purchase and maintenance programmes, and can more accurately predict their operating costs. It’s a pay-per-use model where clients pay for the amount of thrust they consume. And if you’re still interested in the printer example, Lomond will provide you with a printer and charge you per printed page.

In business computing – not PCs, but data centre computing where organisations run their core business applications – there have been previous attempts to adopt this model. Some have been successful, like salesforce.com. But most have fallen by the wayside, for many reasons that we’ll explore later in this article. In the meantime, does anyone remember ASP? And no, I don’t mean Microsoft’s Active Server Pages. I mean “Application Service Provider” which, for those of us who were in IT back then, was widely touted as the next big thing in computing at the tail end of the 90s. Although the Internet actually dates back to ARPANET in the late 60s, it didn’t really pick up and become mainstream until the mid 90s with the development of the World Wide Web and the decommissioning of NSFNET, which removed the last restrictions on using the net to carry commercial traffic.

I can’t find the original quotes on-line any more, after all it was 14 years ago, but I remember headlines like “ASP will dramatically change the way companies do business” and “I can’t imagine why any company would ever buy their own hardware or software when they can purchase business applications on-line for a fraction of the cost and hassle”. Well, I for one can imagine why they would continue to do so. Security and legacy integration are huge issues that need to be dealt with before a company will feel comfortable running their business on shared infrastructure.

[Chart: Google Trends – ‘ASP’]

I love Google Trends. Since 2004 they’ve been collecting and indexing data about what people search for on the Internet, and now you can perform trending searches against a whole plethora of topics. This chart tells us what’s happened to ASP. Quite likely more of the hits against ASP are to do with ASP.NET than they are with Application Service Providers, but either way it’s dropped almost completely out of the public consciousness.

Now take a look at ‘cloud computing’:

[Chart: Google Trends – ‘cloud computing’]

Basically nothing before 2008, peaking at the beginning of 2011, and now appears to be heading fairly rapidly in the same direction as ASP. So is that it for cloud computing? Probably not. Google Trends is just a tool that tells us what people are searching for and doesn’t necessarily correlate to what businesses are buying.
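If you’d rather pull the underlying numbers than eyeball the charts, something like the snippet below works; it uses the unofficial pytrends library, and the search terms and timeframe are just examples:

```python
# Fetch Google Trends interest-over-time data for a couple of terms.
# pytrends is an unofficial wrapper around the Google Trends endpoints,
# so treat it as illustrative rather than a supported API.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["ASP", "cloud computing"], timeframe="all")

interest = pytrends.interest_over_time()  # pandas DataFrame, one column per term
print(interest.tail())
```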

Amazon Web Services was on track to earn $1.5bn in 2012, now apparently accounts for 1% of global business computing, and has 70% market share in infrastructure-as-a-service. Their closest competitor is Rackspace at 10%, with everyone else making up the difference. Everyone else is Microsoft Azure, HP Cloud Compute, Google, Oracle, IBM, Dell and numerous others. It seems that hardware vendors are recognising that more and more of their customers want to buy holes as well as drill-bits!

Amazon pretty much invented the market back in 2004, and their cloud services now power hundreds of thousands of companies, including Netflix, Pinterest and Dropbox. Even organisations like NASA, Samsung and Unilever run some of their business on AWS. Thirteen employees built the photo-sharing and social networking service Instagram on AWS and sold it to Facebook two years later for $1 billion – not bad for a company that didn’t have to buy any hardware or software.

But in terms of the global spend on data centre infrastructure to run their businesses, $1.5 billion is a drop in the ocean. So are we any closer to answering the question – is cloud computing hype or is it the next big thing?

Let’s consider for a moment the nature of business over time, and who buys compute infrastructure. New businesses are starting up all the time; certainly more and more new businesses are starting up directly onto cloud computing with no consideration for owning hardware or operating system software and all of the associated management bits and pieces around it. Existing businesses are doing what they do: changing existing systems, building new ones and rolling new ones in the door. More and more, however, the enterprise architects and solution designers in existing IT departments are considering their options for deploying at least some of the new stuff onto cloud resources. So I believe that over time we’ll see an increase in cloud adoption by existing organisations, but I don’t think many companies will actually sponsor the migration of existing applications. This costs money and the business case to change things that already work is often hard to prove. But new stuff, whether it’s new build or new COTS integration, is definitely a candidate for cloud deployment.

Then there’s the consideration of so-called private cloud. To me this is a sort of cloud lite – a different commercial model for hardware manufacturers to sell you their kit. But if it comes bundled with the same tools that public cloud has – allowing users to commission and decommission compute, storage and network services and only pay for what they use – and if it manages to increase utilisation because multiple departments and business units are sharing infrastructure, and if it actually costs less than buying the hardware outright, then okay, it’s a good thing. And I recognise that many organisations will struggle to trust the security of publicly shared infrastructure, and many will also struggle with the ability to connect cloud applications to legacy applications and data, because their legacy infrastructure is in different data centres and possibly different countries to the cloud resources. But I’d be wary of things like lock-in, minimum commitments, scalability, time to commission and de-commission, and interoperability with other cloud services, and whether this was going to give me a real cloud or just a different way to pay. But to give credit where it’s due, if I built servers or storage, I’d be building and selling public and private cloud offerings too. Or I’d be going out of business.

[Diagram: IT stack – as-is]

Companies will not only look to deploy their business applications on cloud resources. They will increasingly look to run their businesses on-line using commercial applications-as-a-service. I already mentioned salesforce.com, one of the original ASPs and still the most successful on-line provider of sales and customer management solutions, which will turn over $3bn in revenue this year. Did you know that in the airline industry most of the Star Alliance airlines run key operating functions like sales, reservations, inventory management and departure control on a shared applications-as-a-service model? Or that many Swiss banks run their entire businesses on a shared platform where they do not own any of the hardware or software? It’s taking a lot longer than anyone in the late 90s anticipated, but applications-as-a-service is here to stay and will continue to grow. Already we see more and more migration onto standard business tools like email as a service or collaboration as a service (e.g. SharePoint). This trend will only add to the erosion of traditional hardware and software markets.

[Diagram: IT stack – to-be]

If you asked me to look into my crystal ball, then I’d say I have no doubt that in ten years’ time a majority of organisations will be sourcing at least half of their business applications as a service, where they don’t own any hardware or software; and for the applications they do want to own, they’ll be running at least half of these on hardware they don’t own. If I’m right, that arithmetic means that in ten years’ time only a fraction – perhaps a quarter or less – of the server and storage infrastructure out there will be owned by companies for their own use. This will have major implications for hardware and software companies, and if they cannot re-invent themselves, they’ll go the way of DEC, Sun Microsystems and many others.

Fluffy white computing is a big topic with many interpretations and players. I hope the above discussion has given you some insights and perhaps sparked some questions. If you would like help with cloud strategy or IT strategy in general, please get in touch.

Windows Azure – First Glimpse

Well, I finally got around to signing up for a three month free trial of Microsoft’s Azure. In case you’ve been living under a rock these past few years, Azure is Microsoft’s cloud offering of virtual servers, virtual storage, web servers and associated services such as networking (between the virtual resources), Active Directory, SQL Server and something they call Mobile Services. This last one makes it easier to set up back-end services to support mobile apps for iOS devices and Windows Phones, although from reading the documentation I can’t see how this is any different from setting up a virtual server.

AWS has a bunch of pre-configured AMIs (Amazon Machine Images) you can choose from to get up and running more quickly than you would by starting with a bare server and downloading and installing web servers, databases, applications, etc. I like, though, that Azure distinguishes between Virtual Machines and Web Sites. Under Virtual Machines you can choose from a handful of Microsoft servers (SQL Server, BizTalk, Windows) or a very small collection of Linux servers (CentOS, Red Hat and Ubuntu 12). Web Sites lets you choose from a collection of pre-baked application environments, including standards like Joomla, WordPress and Drupal but also some others for photo galleries, blogging and commercial web sites. AWS has an awesome array of choices, but they don’t make it easy for you to browse them. Azure has a much more limited selection, but it’s easy to browse through the list and see what’s on offer, almost like an app store. The breadth and depth of what’s on offer can only get better over time.

My first impression is that the user interface is very slick, intuitive and functional. Slightly better than AWS and way better than HP Public Cloud, which is a bit old school and dated, even though it’s the newest kid on the block. I was able to set up an Ubuntu virtual machine quite easily, even though it took me three tries. The first two times I tried uploading a .pem file to use for authentication (the same file I use to authenticate on my AWS servers), but this was rejected as not being X.509 compliant. There is nothing readily visible on the site about how to generate a compliant key pair, so I’m left with standard username and password authentication. No worries, I’m only using this for a bit of mucking about to see how it works. But they really shouldn’t let users skip this important security step without explaining the implications and offering an option to do it right.
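For what it’s worth, generating a self-signed X.509 certificate and matching private key isn’t hard once you know that’s what is being asked for. The sketch below uses Python’s cryptography library; the file names and subject are made up, and whether Azure accepts the result depends on details their own documentation should confirm – this is just to show the shape of the thing:

```python
# Generate a self-signed X.509 certificate and RSA private key,
# the sort of thing the Azure upload appears to want instead of a bare .pem key.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"myserver.cloudapp.net")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed, so subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# Certificate to upload to the cloud provider.
with open("mycert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))

# Private key to keep locally for management access.
with open("mykey.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```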

It took a lot longer for my server to be set up than on HP Public Cloud or AWS, about 5 minutes vs 2. And then, frustratingly, once it was finally created, it was in STOPPED mode so I had to figure out how to start it. I really like, however, that Azure lets you choose something meaningful for the first part of your server’s URL, e.g. myserver.cloudapp.net. AWS dictates something unmemorable like ec2-46-xx-yyy-zzz.ap-northeast-1.compute.amazonaws.com. And with AWS you can get a public IP address to assign to the long URL, but you’ll pay an extra $0.01 per hour for the pleasure. HP Public Cloud gives you a public IP address but doesn’t charge you any extra. I think Azure gets it right here, although I’d also like to see the option of a fixed public IP address.

When it comes to global regions, HP Public Cloud gives you three US-based options, while AWS gives you the most, with three in the US plus Ireland, Brazil, Tokyo, Singapore and Sydney. Azure comes in the middle with US East, US West, Southeast Asia, East Asia, North Europe and West Europe. They don’t provide much of a clue as to where these servers are physically located. This might matter to some companies for data protection or other regulatory reasons, though companies with those types of concerns probably shouldn’t be putting their applications and data on the cloud in the first place. The thing that slightly irked me, however, was that I set up a virtual Ubuntu machine in North Europe, but when I started up a remote desktop session and surfed to whatismyipaddress.com, it showed me as Microsoft Corporation in Wichita, Kansas! This matters for companies who want to situate their services as close as possible to their users for cost and performance reasons. Maybe there’s a way to specify connection points to the public Internet, but it wasn’t immediately obvious.

Lastly, and it’s just as much of a show-stopper for me on Azure as it is on HP Public Cloud, there is no easy way to suspend a server and stop paying the hourly charge. Here is a direct quote from the Azure help pages: “You are billed for a virtual machine that exists in Windows Azure whether it is running or stopped. You must delete the virtual machine to stop being billed for it.” To my simple and cloud-indoctrinated mind this just doesn’t compute. Virtualization and cloud are about pay for use, like an electricity model: pay for as much as you use. Each time you turn off all of your electrical devices and stop drawing electricity from the grid, you don’t have to tear out all the wiring. No, you just carry on, and when you switch the lights back on, you start paying again. On AWS they give me some choices: STOP, which equals suspend, and means I don’t pay the server hourly charge, just some nominal, minuscule fee to store the virtual image, and which also means I can later restart the server from exactly where I left off; or TERMINATE, which stops and completely deletes the virtual server. But it seems Microsoft wants to charge you for the virtual server even though you’re not using it. Which means either they haven’t figured out all the automated provisioning, scripting, clean-ups, billing, etc.; they want to discourage this type of behaviour; or they’re hoping to gouge their users by charging them for services they may not be using.
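To illustrate how simple the AWS behaviour is, here is roughly what suspend and resume look like when scripted with the boto3 library; the region and instance ID are placeholders, and the same thing can of course be done with two mouse clicks in the console:

```python
# Suspend and later resume an EC2 instance without losing its state.
# While stopped you pay only for the stored volumes, not the hourly server charge.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"  # placeholder

# STOP = suspend: the instance definition and its EBS volumes are preserved.
ec2.stop_instances(InstanceIds=[instance_id])

# ...hours, days or weeks later...
ec2.start_instances(InstanceIds=[instance_id])

# TERMINATE, by contrast, deletes the instance entirely
# (the behaviour Azure forces on you if you want the billing to stop):
# ec2.terminate_instances(InstanceIds=[instance_id])
```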

I’ll carry on playing with Azure until my three month free trial runs out, but I won’t become a paying customer until Azure offers the ability to suspend virtual images without paying the run-time charges.

App-ifying Business

Between the Apple, Google, BlackBerry and Microsoft stores there are over 1 million apps that you can download to a handheld device to do anything from playing games and watching movies to managing your finances and booking travel. Consumers can perform thousands of tasks using their smart devices, to the point that PC sales are declining relative to historical trends while sales of tablets and smartphones are going through the roof. Consumers are doing more on smart devices and less on traditional form factor PCs. But so far, with limited exceptions, business users continue to perform the majority of their day-to-day work-related tasks on desktop and laptop PCs. There are a number of reasons for this, including usability, security and suitability, as well as less tangible constraints such as cultural inertia and the inability of IT departments to react quickly enough in retrofitting new end-user technologies onto legacy business systems. Technically it can be done. But IT departments are notorious for getting stuck in their ways.

I have no doubt that in five years’ time a majority of work functions will be initiated, performed and managed on smart devices. These devices will be a mix of tablets, phablets, phones and a new breed of laptops and PCs. This new breed of PCs will be more like tablets than traditional PCs in the way you buy them, the way you put applications onto them, the security model and the way software is updated. The big difference will be in how applications are developed, distributed and accessed by business users.

Today, the part of a corporate system that users see is usually a laptop or desktop PC with a proprietary and standardized configuration, or build, of Windows with a collection of specific office and productivity tools, email client, browser, anti-virus software, third party and custom-built fat client applications, etc. These are usually doled out and supported by an in-house or outsourced IT department. Change is slow, and it seems that once you’ve upgraded from XP to Windows 7, it’s time to contemplate Windows 8, fearing that once that update is done, there will be a new version to roll out.

But what if the business user experience more closely matched the consumer experience? What if you could go to the Apple App Store or Google Play store, search for and download your company’s app? Once it’s loaded onto your device, you authenticate and voila – your corporate workplace is available and you can perform all of the tasks you are authorized to perform. On any compatible device, phone to PC. All of the hassles associated with keeping everyone’s desktop up to date have just vanished. To a certain extent it’s already happening. Users – the bane of some IT departments’ existence – are out there buying all manner of the latest devices and figuring out how to access corporate email, collaboration and other services. So it’s users who will drive this.

How much can companies save with this approach? It’s hard to quantify because of the wide number of variables. Application development and support costs will temporarily go up but should stabilize and return to where they are today once IT departments figure out how to do this. Infrastructure costs should go down. If the average annual cost per seat for a desktop or laptop PC, including hardware, software and support, is $1,000, then after the migration to the app model has happened there is no reason why this number shouldn’t be halved. If you have 20,000 seats then that’s a cool ten million a year. But cost savings won’t be the only driver. Most companies will do this because it’ll be easier, and they can get staff to buy their own equipment – BYOC! And staff can work from anywhere.
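For what it’s worth, the back-of-the-envelope maths with those assumed figures looks like this:

```python
# Rough annual saving from halving the cost per seat (illustrative figures only).
seats = 20_000
cost_per_seat_today = 1_000       # dollars per year, all-in (assumed)
cost_per_seat_app_model = 500     # assume the app model halves it

annual_saving = seats * (cost_per_seat_today - cost_per_seat_app_model)
print(f"Annual saving: ${annual_saving:,}")  # $10,000,000
```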

So what does this mean? It means that the next big thing in IT is going to be the ‘app-ification’ of business. Once it starts it will be bigger than Y2K, bigger than Cloud. Companies will scramble for expertise, resources, quick wins. Careers will be launched and made. New companies purporting to have the magic answer to app-ifying your business will come out of nowhere. The big IT companies will reassuringly tell you that they’ve been working on this for years. Who should you trust? Who should you go to? It’s too early to tell. Certainly there is a lot of expertise in India and China in building apps for smart devices, so a lot of the work will be done there. But the business apps will need to integrate with legacy systems, which implies that existing application support teams will need to be involved.

The Transformation IS the Solution!

Very few transformation programmes execute on time. Put another way, the vast majority of transformation programmes fail to deliver the benefits that were sold into the deal, even 3 or 4 years down the line, despite most programmes promising benefits within 18 months. Why is this?

A common cause of failed outsource transformation programmes is that the assumptions made about time, cost and dependencies are bad assumptions. But in order to appear to have the best price and the fastest time to benefits, most suppliers will make outlandish assumptions. And the standard RFP processes that buyers follow actually encourage this type of behaviour!

Most IT sales people sell the end-game, the FMO (future mode of operation). “Look at how good our flexible, scalable, resilient, virtualised, consolidated, cloud-based shiny new thing is. And here’s a price list for our portfolio of services.” The problem is that organizations are not starting from scratch and are not in a position to buy a portfolio of offerings off the shelf; they have a going concern that must be taken on and transformed into something that is more flexible, more performant, more scalable, more secure, more resilient and lower cost. Whether or not that was ever a reasonable expectation to begin with, it will take years and there will be roadblocks.

Imagine a 5-year cost stack. If we assume that transformation takes two years, then we should be able to predict costs, given a stable baseline, in years 3 through 5 with some level of precision. But that’s only if we get to the predicted operational model with the major transformations completed. So why do we continually fail in transformation? The big issues seem to be:

  1. There is a lot of complexity in years 1 and 2. We tend to under-estimate the difficulty in making the necessary changes happen.
  2. Conflict with other changes happening in the business.
  3. Cultural inertia. People resist change.
  4. Transformation costs money. Before the delivery account in an outsource relationship can spend money, it needs to overcome internal obstacles and produce a business case. But the justification for making the investment in the sales phase rarely matches the inherited situation in the real world: original assumptions around costs were probably understated and those around benefits were overstated.
  5. The level of precision achievable in mapping out the projects, timelines, costs and dependencies – in other words the transformation – is directly proportional to how well year minus-1 (the current situation) is understood, not to mention the skill and experience of the planning team.
  6. Account management in the start-up phase of an outsourcing relationship is focused on stability and on setting up governance, reporting and communications channels – there isn’t a priority on making major changes that might upset the sought-after stability.

IT suppliers can differentiate by convincing customers that the real solution is not what lies at the end of the yellow brick road, but the journey to get there. This presents challenges. To differentiate, the IT supplier has to be good at transformation in the first place! Suppliers have lots of experience but not a great track record in delivery. The good news is that they can get good at it, but not without more corporate focus, leadership, training and reassignment of expertise currently focused on the FMO.

IT Service Management – What’s Important?

IT Service Management is a collection of processes and disciplines used to manage the delivery of services covering infrastructure, applications, and business processes. When I say manage I mean ‘oversee the delivery of day to day services and ongoing change to ensure delivery of agreed services at agreed pricing and to agreed service levels’.

The most important aspects of Service Management are requirements and measurement. You need to know what you want to achieve from service management, i.e. the success criteria, and then you need to measure whether or not you’re achieving these. Of course you also need other things like the flexibility to change requirements, but if you know what you want and how to measure it you’re on the right track.

From my many years of experience in designing, selling, delivering and troubleshooting the management of outsourced services, I have arrived at a simple but powerful model that can be used to define most IT Service Management regimes.

[Diagram: hierarchy of what is important]

Starting with the end users, we see that what is important to them is that it just plain and simply works. And when it doesn’t work, it gets fixed. Moving up the food chain to IT management, and further still to top level management, we can arrive at a short list of what’s important and therefore what we should be measuring in any service management regime. A question-based approach appears to work, and basing a service level regime on these criteria is a good place to start. There are implications, however. Most IT service providers measure component availability in the data centre, but not application availability from the user’s perspective. Tools do exist to do this and many companies deploy them, but it is often culturally difficult to base a commercial service level contract on them. Suppliers aren’t always in control of the end-to-end infrastructure, e.g. the Wide Area Network, and clients often find it difficult to let go of detailed component availability metrics. But if the critical services are available to end users for the agreed times at the agreed performance, should you really care whether every server was available 99.99% of the time?
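To give a flavour of what ‘application availability from the user’s perspective’ can mean in practice, here is a minimal synthetic-transaction probe. Real tools do far more (scripted logins, multi-step journeys, probes from multiple locations), and the URL and response-time target below are placeholders:

```python
# Minimal end-user availability probe: is the critical service reachable,
# and does it respond within an agreed time?
import time
import requests

URL = "https://example.com/critical-service"   # placeholder
TIMEOUT_SECONDS = 5
TARGET_RESPONSE_SECONDS = 2.0                  # agreed performance target (assumed)

def probe(url: str) -> dict:
    started = time.time()
    try:
        response = requests.get(url, timeout=TIMEOUT_SECONDS)
        elapsed = time.time() - started
        return {
            "available": response.status_code == 200,
            "within_target": elapsed <= TARGET_RESPONSE_SECONDS,
            "elapsed_seconds": round(elapsed, 3),
        }
    except requests.RequestException:
        return {"available": False, "within_target": False, "elapsed_seconds": None}

if __name__ == "__main__":
    print(probe(URL))
```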

 

Considerations for Data Centre Migration

The vast majority of applications, once implemented into production, will never move. This is for two simple reasons: 1) It’s difficult and therefore costly and risky, and 2) it’s hard to build a business case to fix something that isn’t broken.

Migrating business systems from one or more locations is a complex undertaking that involves a number of issues, including connectivity, application compatibility, shared data, inter-system communications, OS compatibility, handover of support mechanisms, and more. The level of complexity is usually aligned to the business criticality of the associated systems. If the business can be without the system for a few days, then by all means unplug it, throw it in the back of a truck, drive it to its new home, plug it in and fiddle with the network connections until it’s back on-line. But for systems where the business cannot tolerate more than a few hours of downtime, or none at all, you’re looking at a whole new level of complexity, time, cost and risk.

At a high level, data centre migration is more often an exercise in replicating application code, data and connections, rather than in migrating physical hardware. There is no one-size-fits-all approach; a number of migration strategies exist. In some cases where the business could survive without the system for two or three days, physical ‘lift and shift’ may be the recommended approach. However, for most systems the approach will be more complex, cost more, take longer but be less disruptive and less risky. These various approaches are outlined later in this article.

Any system migration does introduce risk. The objective of a chosen approach is to balance the cost, time and effort involved against the allowable risk to the business. It is therefore necessary to understand the potential impact to the business should the migrating system be unavailable for a period of time, and to involve the business stakeholders in deciding which approach is appropriate for each business system or grouping of business systems. This implies that a critical objective in the analysis and planning phase of a migration programme is to facilitate the decision making process with clear and business-appropriate inputs. These might include:

  • Per system or system grouping, the assessed criticality and impact should the system be unavailable for longer than agreed during the migration, expressed in financial and reputational terms;
  • Analysis of the migration approach options, with a recommendation, for each system or system grouping, in terms of time, cost, dependencies, assumptions and risks;
  • A high level timeline for the migration, overlaid with the larger ongoing change programmes that may be in-flight or contemplated;
  • A business case for the programme that proves quantitative and qualitative benefits;
  • An overall programme risk plan covering commercial, financial, resourcing, third party and other foreseeable dependencies.

Some of the issues that need to be addressed when designing a data centre migration:

System groupings – There is usually a strong correlation between the time a business system has been in production and the number of connections it supports and is dependent upon. Many systems do not stand alone and therefore cannot be migrated individually. There may be inter-process and inter-system connections, real-time and batch, that work well on the data centre LAN but which will run too slowly over a WAN connection during the migration programme. There may also be shared data or gateway dependencies between systems, meaning that both have to move at the same time. For example, a mainframe and a number of midrange systems may share the same channel-attached storage arrays or SAN. If you move the mainframe and SAN, you probably also have to move many of the midrange systems because they need local SAN speeds for data access. There will be many reasons why one system cannot migrate independently of others, and so they must all be migrated on the same weekend. The analysis to uncover this complexity and create these logical system groupings is perhaps the most difficult piece of work in the data centre migration analysis and planning phase. The success of the overall programme depends upon getting this right: target platform configuration, migration phasing, resourcing, cost estimating, and more.

IP addressing – Older systems, and yes, even more modern ones where there was a lack of architectural discipline, may have hard-coded IP addressing, socket connections, ODBC connections and other direct access methods embedded into code. Also, many data centre architectures use private address ranges, meaning that if more than one data centre is involved in the ‘as-is’ world, a new scheme is needed in the ‘to-be’. These issues just add to the overall complexity. A strategy is needed: clean things up before, during or after the migration?
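A first pass at sizing the hard-coded addressing problem can be as crude as scanning the code base for literal IPv4 addresses, as in the sketch below. The directory and file extensions are placeholders, and it obviously won’t catch addresses built up at runtime or buried in binaries and configuration stores:

```python
# Crude scan for hard-coded IPv4 addresses in source and config files.
import re
from pathlib import Path

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
EXTENSIONS = {".c", ".java", ".py", ".sh", ".cfg", ".ini", ".xml", ".properties"}

def find_hardcoded_ips(root: str):
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in EXTENSIONS:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for line_no, line in enumerate(text.splitlines(), start=1):
            for ip in IP_PATTERN.findall(line):
                yield path, line_no, ip

if __name__ == "__main__":
    for path, line_no, ip in find_hardcoded_ips("./legacy-apps"):  # placeholder path
        print(f"{path}:{line_no}: {ip}")
```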

OS compatibility – Many system environments contain a mix of various flavours and release versions of Unix, Linux, Microsoft Server, z/OS, iSeries, pSeries, Tandem, and so on. Since the chosen migration approach for many of these systems may be to replicate them in the target location, you may not want to source, install and support old operating systems, some of which may no longer be supported by the vendor. And if you do need to install older operating systems, these may not even run on the newer hardware you want in your shiny new data centre! Replicating onto up-to-date operating systems will almost certainly introduce application compatibility risks that could be avoided by not doing an OS upgrade in conjunction with the migration. In some cases it may be advantageous to perform an OS upgrade prior to migration, in other cases it may be better to move onto an exact replica of the older operating environment and upgrade at a later date. Or you might get lucky and be able to migrate directly onto an updated platform. Discussions with applications providers, testing and trade-off analysis between cost and risk are all needed.

Application Remediation – The chances are that applications and databases as they are currently implemented will not port straight away onto the target operating environment. This is for many reasons: for example, the application may not be compatible with the newer hardware and up-to-date operating system version in the target environment, or it runs on an old release of a database or 4GL framework. Or you may be migrating the application from a physical to a virtual server. You may even be planning to completely change the underlying architecture of the application to leverage cloud compute, storage, database and communications constructs. For example, today the application may attach directly to an Oracle database but you want to migrate onto a cloud platform and leverage something like Amazon’s RDS (Relational Database Service). Or you may want to consolidate several databases and database servers onto a database appliance. Whatever the reason, and even if you’re migrating to a like-for-like environment, you will have to involve in-house and third party applications providers and support teams, and conduct testing to determine how much, if any, remediation is necessary. Remediation can range from a simple re-compile all the way to a re-write or even replacement. Only thorough analysis and testing can help you decide which is required on an application-by-application basis. For high level planning purposes, a rule of thumb is that 10% of applications and databases will port with no remediation necessary, 30% will require minor remediation, 20% medium and 20% high levels of remediation, and the remaining 20% (in some portfolios as much as 50%) will be put on the ‘too difficult’ pile and be migrated more or less as-is onto the legacy platforms you had hoped not to be filling your shiny new data centre with!
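Applied to a hypothetical portfolio, that rule of thumb gives a quick first-cut workload estimate. The effort-per-application figures below are invented purely for illustration; only the percentage split comes from the rule of thumb above:

```python
# First-cut remediation estimate using the rule-of-thumb percentages above.
# Effort-per-application figures are purely illustrative assumptions.
portfolio_size = 200  # hypothetical number of applications

rule_of_thumb = {                      # (share of portfolio, person-days per app)
    "no remediation":             (0.10, 1),
    "minor remediation":          (0.30, 5),
    "medium remediation":         (0.20, 15),
    "high remediation":           (0.20, 40),
    "too difficult (move as-is)": (0.20, 3),
}

total_days = 0
for category, (share, days_per_app) in rule_of_thumb.items():
    apps = round(portfolio_size * share)
    effort = apps * days_per_app
    total_days += effort
    print(f"{category:28s} {apps:4d} apps  ~{effort:5d} person-days")

print(f"{'TOTAL':28s} {portfolio_size:4d} apps  ~{total_days:5d} person-days")
```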

Latency – Systems migration may introduce additional round-trip communications time, depending upon where the new data centre is, where the users are, and other factors such as network quality, link bandwidth, router quality, hops, firewalls, etc. Therefore, an important step in the detailed planning phase is testing! Perform proof-of-concept testing between users and the new site. Hire a network specialist to do an in-depth analysis of your existing network situation and the implications of migration to the target site. In most cases this will be a non-issue, or certainly nothing that cannot be addressed through proper design or the use of WAN optimisation technology.

Time and resources – The limiting factor in data centre migration is not usually hardware or software, but the amount of change the business can sponsor and absorb at any given time. Migrating a system involves the input and participation of business staff, internal and external application support teams, network and security experts, hardware specialists, software vendors, network providers, project managers and so on. Much of the work can be done by dedicated project staff, but there will be dependencies upon people with day jobs who may not see data centre migration as strategically important to the business. The success of the migration will depend upon proper resource planning, dependency management and governance to handle issue resolution and prioritisation. And it will take longer than you think! Rule of thumb: 2 to 4 systems or system groupings per month, following 3 to 6 months of planning. So if you’re moving 20 systems, the whole project may take anywhere from 8 to 16 months.
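That rule of thumb is easy to sanity-check against your own portfolio; the little helper below simply encodes the 2-to-4-groupings-per-month and 3-to-6-months-of-planning figures quoted above:

```python
# Rough migration timeline from the rule of thumb:
# 3-6 months of planning, then 2-4 system groupings migrated per month.
def timeline_months(groupings):
    best = 3 + -(-groupings // 4)   # fast planning, 4 groupings/month (ceiling division)
    worst = 6 + -(-groupings // 2)  # slow planning, 2 groupings/month
    return best, worst

for n in (10, 20, 40):
    lo, hi = timeline_months(n)
    print(f"{n} system groupings: roughly {lo} to {hi} months end to end")
```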

Migration Strategies

When migrating systems from one data centre to another several migration strategies are available. Choosing the right strategy is influenced by an understanding of the following factors:

  • Criticality of the system or system grouping
  • Financial risk should the system be unavailable
  • Reputation risk should the system be unavailable
  • Number of systems involved in the grouping
  • Allowable downtime
  • OS version and compatibility of application code with newer version
  • Database version and compatibility issues with newer OS, newer hardware
  • Size of the data sets and databases
  • Number and complexity of inter-process and inter-system communications, real-time and batch
  • Nature of inter-system communications, i.e. remote procedure call, socket, asynchronous messaging, message queuing, broadcast, multicast, xDBC, the use of message brokers and enterprise service buses
  • Amount of remediation necessary to move the application or database onto the target platform
  • Security domain considerations, the use of firewalls, and the associated constraints
  • Availability of spare hardware
  • Existing backup and restore architecture and capability
  • Data replication architecture
  • Deployment topology: standalone vs clustering, single site vs multiple sites
  • Distance between old and new site
  • DR strategy and existing capabilities; are you trying to fix this during migration?
  • Rollback / back-out strategies available
  • Life-cycle for software stack components
  • Software licensing cost implications
  • Life-cycle of the existing hardware
  • Network bandwidth
  • Opportunity windows
  • Protocols being used between different systems
  • Service levels
  • Storage architectures: direct-attached vs. network-attached vs. SAN
  • User community locations and connection methods

As you can see, this is a fairly lengthy list of things to consider during the analysis and planning phase! Certainly gathering as much of this data as possible and immersing yourself in it will help in analysis and choosing the most appropriate approach for each system or system grouping. These approaches are discussed below:

Lift-and-shift: Physical migration is the simplest form of moving a system to a new environment. Switch it off, move it, plug it in and hope it works. Since the system has to be powered down for the move, no data synchronization issues will arise, because no new updates can be made while the system is unavailable. This strategy can only be used when there is sufficient time available for the whole process. Where high availability is required with no allowable downtime, then clearly this strategy will not work. It is also a highly risky approach: if for any reason the shipment of the system fails, or the system cannot be restarted at the new site, or there is a network connection issue, no rollback / back-out option is available other than to ship it back and hope it works at the old site!

Re-host on new hardware: Re-hosting on new hardware mitigates most of the risks associated with the previous ‘lift and shift’ strategy; however, the cost is much higher, since you’ll need to buy all new hardware. It may be possible to buy some new hardware and re-purpose the older hardware for the next wave of migration, but this will depend upon how old and fit-for-purpose the hardware is. Installing new hardware usually requires installation of the latest OS and other software, since the old OS often will not run on the newer hardware. This may lead to extra licensing requirements for upgrading other software components in the management stack. Porting to new hardware can have the advantage that the hardware is usually faster and can support more applications, potentially reducing the number of boxes needed and the overall software portfolio required, thus reducing licensing costs. The risks involved in this strategy are lower than with the first; a rollback / back-out solution should be built into the design and tested thoroughly. Compatibility between the application and the new software stack on the new hardware can be fully tested before cut-over is done. Data synchronization can be an issue, since the data needs to be moved from the old to the new environment while the old system is still processing updates. There are various ways to solve this synchronization issue, such as asynchronous replication, log-shipping, or simply cutting off any further updates on the older system, performing the data migration and cutover, and resuming on the newer system.

Swing kit: Using temporary hardware is similar to the re-hosting strategy except it involves double the effort. This is because, once the application and data are moved onto the temporary hardware, the original hardware is moved and the swing kit is swapped out. This strategy can use pre-production testing or development hardware, or hardware borrowed or leased from suppliers. It doesn’t matter as long as it is suitable for the migrating system or system cluster. Wherever possible you should avoid having to migrate the applications and data twice. The associated time and cost, as well as the additional risk to the business, are likely higher than purchasing new hardware in the first place. But there will be scenarios where borrowed kit is a viable option, usually where the box is quite large like a mainframe or Superdome.

Move DR first: This strategy involves moving the disaster recovery hardware first and then migrating onto it. This strategy will work if there is a fully configured and available DR system, and if the business can tolerate the risk of downtime and lost data during the time the DR system is being moved and tested.

Half cluster migration: This strategy works where the migrating system is currently deployed in a high availability cluster such that it will continue to support 100% availability if one half has failed. Take the redundant half down, move it, bring it up in the new site and re-attach it to the old site, then take down and move the other half. There are a number of dependencies and potential issues associated with this strategy, mostly to do with the fact that many high availability architectures have a live-live configuration for application servers but the database is in live-backup mode, meaning the application servers at the new site would have to access the database at the old site. This may work but usually going from SAN-attached storage to WAN-attached storage is too much of a penalty to pay and application performance degrades unacceptably.

Many to one: This is a derivation of the re-hosting on new hardware strategy. Quite often the new hardware is bigger and faster, and so through hardware sharing or virtualization you may be able to re-host multiple applications that used to run on individual servers onto a shared physical environment, reducing cost and complexity.

Virtualised Image Movement: If you already have a number of virtualised server images, then you may be able to migrate these fairly easily. Before you set up the new virtual server environment and start moving images, however, you’re going to have to consider what the applications on the server are doing, who or what is accessing them, what other systems are called upon by the application, how the application accesses data and whether this data needs to migrate before, during or afterwards. If you move the virtual image but not the database, because other applications need to access the database, then the migrating application will need to access the database over a wide area connection. Will this work? Hmmm, not as easy as it seems then!

Cloud: there, I said it, the C word. The state of the art in private and public cloud offerings has advanced tremendously in the last few years, to the point where most organisations should seriously consider migration of suitable workloads to cloud computing. This will involve some work in the applications space as the applications will likely have to be rebuilt. But if your objective is to get out of an existing data centre, then moving to a new data centre may not always be the only option.

If you have a data centre migration project and would like some help in planning it or running it, please get in touch.

Innovation in Outsourcing – What Clients Want

There is no shortage of innovative thought in outsourcing or most business endeavours. Lots of sales, technology and consulting folk can talk the hind legs off a donkey when it comes to sounding innovative! But companies that buy outsourcing want more than lip service or to see whizzy stuff – they want to know what it means for them:

  • How is the service provider innovative? Do they have an R&D budget? Do they have continuous improvement programmes for general products and services?
  • So what if the service provider can demonstrate ‘being innovative’? What are the real world benefits a client will see? How do any generic product and service improvements filter down from the supplier’s general portfolio into the specific day-to-day operational environment for clients?
  • What formal and informal mechanisms will be in place to ensure these benefits are realised? Will the supplier be measured on innovation, and are they willing to put it in the service level agreement? Are there appropriate triggers and mechanisms to bring about changes that may result from innovation? Are there contractual elements such as innovation forums, technology roadmaps, client satisfaction surveys with increasing year on year improvement targets?

There are many stages in the outsourcing lifecycle where clients want to see evidence of innovation. In the early days of courtship, they like to meet and hear from innovators, futurists, generally people passionate about the future who can put on a good show. They like to be taken on site visits to see flashing lights, eye-on-glass, development labs and so on.

During the solution development phase, clients would like to understand what is innovative in the base solution. Is there anything new and exciting that will happen in the first year, or is it just walk-in-take-over and stabilise? Next, from a general perspective, how will they benefit from corporate investments made by the supplier, and how will these filter down to their specific situation? And then, specific to their operational solution, what mechanisms will be put in place to bring continuous improvement and foster innovation within the teams, tools and processes serving them? Will there be obligations on the supplier to hold forums and bring forward improvement plans? Will there be a joint innovation team? An incubator fund?

A good example in many outsourcing situations is the Services Catalogue. You’d think that the UK’s largest and most prestigious engineering firm would want its employees to have fairly modern technology on their desktops. This is important for productivity, and also because younger recruits might actually want to work elsewhere if they can’t have the latest toys. But as late as 2010 they were still mostly running Windows 2000. This isn’t because their IT supplier didn’t have the technical capability to upgrade them to XP or Windows 7, but because the contract was too inflexible. There were no mechanisms in place to do something as simple as add new configuration items to the services catalogue while figuring out how to pay for migration and applications remediation. Finger-pointing and arguments about who should pay arose from day one and somehow, ten years later, the problem still existed! So it’s a good idea not to underestimate the importance of negotiating workable mechanisms for change into outsource contracts!

An outsource provider I worked for used to have a tool in place to measure client satisfaction along a number of key dimensions. Every three months, senior people in the client organisation were interviewed and asked their opinion, using a four point scale, on the following key criteria:

  • Are we innovative?
  • May we use you as a reference for other clients?
  • Would you renew with us?

If the answers weren’t positive for any of these, a so-called go-to-green plan was launched and tracked at a very high level in the organisation. These are non-binding, subjective criteria, and you also need more objective KPIs and service levels on which to base a commercial agreement; but in terms of motivating the right behaviours when the relationship started to go south, this was very much a good thing.

How many times have I come across an existing long term outsourcing relationship in the final year, leading up to re-tendering, where the client complains that the supplier hasn’t shown any innovation, never delivered on even half of the transformation benefits that were sold, and only now, with the renewal date in sight, has anything stirred in the innovation camp? The answer is lots; indeed, almost all the time. This is a real shame because with proper foresight, by including certain obligations, measurements and change mechanisms up-front, clients will be happier and suppliers will increase their chances of renewal.

Why Outsource?

Why would an organization want to get another company to run their IT? Here are some obvious as well as less obvious considerations:

  1. Reduce costs. In theory, an IT services company with low-cost factory models in the standard IT disciplines – server and storage hosting, data centre management, network management, end user computing, service desk, apps maintenance and development – can use leverage, shared service centres in lower-cost countries, automation and standard processes to deliver IT services that are more consistent and less expensive than most companies can provide themselves.
  2. Strategic vs. Tactical. By outsourcing the provision of tactical, increasingly commoditised services, the IT department will have more time to focus on helping the business achieve its strategic objectives. This requires a sometimes painful change in focus in the retained IT group – letting go of hands-on technology management and focusing more on relationship management, benefits realization, value-based governance and supplier management.
  3. Re-balance the run / change ratio. While not necessarily spending less on IT, an organization can spend less on running the business and more on changing the business.
  4. Consistency and coverage. Many inhouse IT groups do a good job in head office and large locations, but less well in smaller, remote or regional locations.
  5. Risk management. Outsourcing contracts have well defined services with service levels, service penalties, indemnities and liabilities. IT service providers also usually have breadth and depth to cover serious incidents that might otherwise not be covered by the inhouse IT group.

Less obvious reasons:

  1. Stir things up. Create the impetus for change. Historical, legacy relationships between IT and business are stuck in old patterns. An outsourcing breaks up the scar tissue, brings in new thinking and ways of working, and provides opportunities (creates excuses) to change relationships and ways of working to get big politically difficult things done.
  2. Re-shape internal IT. Rather than being the go-between, internal IT can become a professional services provider themselves. Outsourcing brings more visibility, transparency and control over services and assets, allowing IT departments to provide services, service levels, a services catalogue, reporting and monthly billing to their business customers. It can help remove grey areas where work seems to get done but isn’t accounted for, or where business units feel they are paying for services that they aren’t getting, or aren’t receiving to a high enough standard.

Thoughts on the new HP Public Cloud

As a user of Amazon Web Services (AWS) I thought I’d give the new HP Public Cloud a try. First of all, my use case is pretty simple: I’m a casual user who wants to be able to fire up a server now and then in various regions around the world to host my son’s Minecraft world, to run a remote desktop, or to run a remote VPN server. I want to be able to turn them on when I need them and turn them off when I don’t, without losing the server, its software, configuration or data.

AWS lets me fire up a server in about 5 countries: Brazil, USA, Ireland, Singapore, Tokyo. I can choose a Linux or Windows operating system, with a vast array of pre-configured server and software images available. Once a server is up and running, I can suspend it and not incur the hourly charge. When I need the server again, I can start it up within seconds. It’s a great service, very reliable and very cheap. I also use Amazon Simple Storage Service (S3) and Simple Email Service (SES) which are also excellent and cheap.

So when HP announced their Public Cloud Beta, I was excited to dip my toe in the water. Here is what I found:

  1. It’s pretty simple to start up a server, although at this stage they only have a few regions in the USA.
  2. They give you a public IP address with your server, which is a good thing. Amazon gives you a long string they call ‘Public DNS’ which works fine, except when connecting from the Windows Minecraft client, but if you want a standard public IP address they charge you 1 cent per hour. So far so good for HP.
  3. Pricing is very competitive with Amazon. You’d think they would try to undercut AWS pricing to gain some market share, especially when AWS has so many more features and global regions, but hey, it’s still pretty cheap.
  4. Here’s the deal-breaker, at least for me: HP Public Cloud offers no easy way to suspend a server when you don’t need it, then come back later in a few hours, days, weeks or even months, restart the server and pick up where you left off without having incurred any charges except a very minimal charge for storing the virtual image. On AWS I can do this with two mouse clicks. On HP it’s still possible, but you have to be a command line guru and run scripts to take a snapshot of your server and store it for later use (something along the lines of the sketch below). Not easy and not user-friendly. Maybe this won’t matter to users who have no need to suspend servers and later restart them. But for me it’s a big deal and means I won’t be using this service.
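For the record, HP’s service is OpenStack-based, so in principle something along these lines will do it using the python-novaclient library: snapshot the server to an image, then delete the server so the hourly charge stops, and later boot a new server from the saved image. The credentials, endpoint and server name below are all placeholders, and older novaclient releases are assumed (they accept username/password arguments like this):

```python
# Snapshot an OpenStack (HP Cloud style) server to an image, then delete the
# server so the hourly charge stops. Restoring = boot a new server from the image.
# Credentials, endpoint and names are placeholders.
from novaclient import client

nova = client.Client(
    "2",                                        # compute API version
    "my-username",                              # placeholder
    "my-password",                              # placeholder
    "my-project",                               # placeholder tenant/project
    "https://identity.example.com:35357/v2.0/", # placeholder auth URL
)

server = nova.servers.find(name="minecraft-server")        # placeholder name
nova.servers.create_image(server, "minecraft-server-snapshot")

# Once the snapshot shows as ACTIVE, the running server can be deleted:
# nova.servers.delete(server)
```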