Storage Informer

Barriers To Private Cloud Adoption

on Jun.12, 2009, under Storage



I must have had the private cloud discussion well over a hundred times with customers.

Begin to fully virtualize your environment

Learn to manage it more like a cloud, and less like a data center from 1989.

Start to consider the strategic implications of the new choices you now have — everything from dynamic federation of resources to a bring-your-own-laptop program.

Mostly, it’s gone pretty well.  Far better, in fact, than any other discussion I’ve had with customers — ever.

And I’m starting to see more and more consistency regarding the structural barriers to private cloud adoption by enterprise IT groups.

Nothing Good Is Easy

Otherwise we’d already be doing it today, right?

The surprising fact is that the pushback isn’t coming from places you might normally expect.

People generally agree that virtualization — specifically VMware — will become more widespread, and eventually encompass server applications, desktop environments, and even “supporting cast” infrastructure applications like backup, archiving, security, etc.

People generally accept that the days of vendor-specific RISC processors are coming to an end — as well as the proprietary variants of UNIX that they support. 

All of those workloads are going to have to go somewhere, aren’t they?

People generally accept that virtualization can do far more in their environment than it’s doing today.

And — generally speaking — they want to move in the shared direction that we’re all describing — VMware, Cisco and EMC.

So where’s the real pushback?

My Infrastructure Isn’t Ready

Just now, we’re starting to see newer technologies in the marketplace that were designed for virtualization at scale. 

With VMware’s vSphere leading the parade, I’d add Cisco’s UCS and Nexus, EMC’s V-Max, as well as newer information management, resource management and security frameworks.

The problem is that these products aren’t on the data center floor today — they need to be evaluated, justified, acquired, deployed and integrated.  We’re working on it.

Sure, we can work on other things in parallel, but if we’re going to virtualize at scale, we’re going to need infrastructure in production that’s designed to do the same. 

And that’s going to take some time.  I’d encourage you to get started :-)

We’re Not Comfortable Virtualizing Tier 1 Applications

My definition of a “tier 1 application” is “the ones that really, really matter to the business”.  It’s no longer just about performance, scalability and reliability.

So, to that end, we’ve created an ever-expanding suite of EMC Proven Solutions that show exactly how you use these newer technologies to design, build and operate a tier 1 application environment for Exchange, SAP, Oracle, etc. 

More are coming, but the design principles and characterizations found in these docs can be applied elsewhere as well.

Our Software Vendors Think This Is A Really Bad Idea

Many software vendors are ambivalent about the idea of running in a virtual machine.  They’re not comfortable with it, are concerned about how they’re going to provide support, etc. 

We’ve seen this movie before.  It happened with UNIX, then Windows, then Linux, and now it’s happening with VMware. 

Broad adoption of hypervisors generically and VMware specifically is inevitable for two reasons: first, widespread customer demand, and second, it makes the ISV’s job far, far easier once you think about it.

Certain vendors (Oracle, for example) have decided to fight this, declaring that the only hypervisor they will ever support is the one they sell (OVM), but I and others see this as misguided machismo that will crumble in the face of widespread demand.

More to the point is software licensing.  The vast majority of software pricing schemes have some sort of physical entity as the basis of their value scheme: cores, sockets, servers, users, etc.  In a virtual world, all of this disappears.

My favorite example is electrical power: on the side of the house, I’ve got a power meter.  Sometimes it spins fast, sometimes it spins slow.  No one asked me to buy my fair share of the power plant, did they?  Nobody came into my house and counted all the wall plugs, did they?  You get the idea — you pay for what you use.
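The contrast between today’s capacity-based licensing and metered, power-meter-style pricing can be sketched in a few lines. The functions and every rate below are hypothetical, purely to illustrate the arithmetic of the two models:

```python
def socket_based_cost(sockets, price_per_socket):
    """Pay up front for physical capacity, whether or not it's used."""
    return sockets * price_per_socket

def metered_cost(cpu_hours_used, price_per_cpu_hour):
    """Pay only for what actually runs -- like the power meter."""
    return cpu_hours_used * price_per_cpu_hour

# A 4-socket server licensed at a (made-up) $2,000 per socket,
# whose workload only consumes 1,000 CPU-hours this year at a
# (made-up) metered rate of $1 per CPU-hour:
upfront = socket_based_cost(4, 2000)    # 8000
metered = metered_cost(1000, 1.0)       # 1000.0

print(f"socket-based: ${upfront}, metered: ${metered}")
```

The point isn’t the specific numbers — it’s that in a virtual world the “sockets” the first model counts no longer map to anything the customer actually consumes.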

Getting software vendors to change how they charge for their value is going to take a very long time.  Indeed, I see these discussions playing out inside of EMC (we sell lots of software), and it’s pretty clear that there’s no easy answer.

Regardless, there are still big savings — resource and operational — to be had by fully virtualizing your data center.  The infrastructure choices and strategies are still there. 

And we can only hope that — over time — the software industry gets smarter about changing their pricing strategies.  I know we have work to do in our own house as well.

My People and Process Aren’t Ready

Here’s the really scary part — it won’t be a private cloud unless we learn to manage it like a private cloud.  Sure, efficient use of resources is nice.  Efficient, flexible, dynamic zero-touch processes — that’s the real payoff in most people’s minds.

And this is *not* going to be easy.

We have to unlearn over two decades of traditional (physical) IT thinking.  This is H-A-R-D.

But I know it can be done.  When I walk into a true cloud, telco or service provider environment, that’s where I see it being done today.  And their operational stats are orders-of-magnitude better than even the best run physical environments.

So, the question comes up — how do organizations make the transition?

Lots of different approaches are possible, but the one that seems to resonate is to stand up The New Thing.  Call it your baby private cloud, call it whatever — but it’s built entirely with new infrastructure, uses the new management tools, and uses exclusively new process and procedure.

It’s designed to operate more like a cloud, and less like a collection of physical IT.

Define a boundary of whatever workloads you’re comfortable putting on it.  Maybe it’s a bit of test-and-dev, maybe it’s a bit of decision support — whatever.  The important thing is to allow your confidence to grow in the new environment, and *not* compromise the operational procedures to accommodate traditional thinking or workloads.

Over time, the confidence boundary will grow to incorporate more and more of your environment.  The tools and procedures will mature.  And at least part of your environment will be running more like a service provider, and less like a traditional data center.

Contrast this with other approaches, and it has a unique appeal that’s starting to catch on.

Final Thoughts

Some of us have seen many IT architectural transitions before. 

In my case, this would include mainframes to minicomputers to desktop computing to open systems to client/server to web to mobile — the great part of our industry is that things change.

This is a transition to fully virtualized technology environments and private cloud operational environments. 

Sooner or later, I think most IT organizations will get very serious about how they plan to make the transition.

It’s that compelling.
