Storage Informer

The Future Is Here

Jun. 03, 2009, under Storage


Those of us who speak on IT topics often find ourselves waxing poetic about the future of technology, or the future of the data center, or something similar.

We wave our hands, and talk about what could be.

What happens when the future is here?

Do we immediately acknowledge that entirely new things are possible and set off in new directions, or do we as human beings take a considerable period to adapt to our new circumstances?

For the last few months, I’ve been focused almost entirely on private cloud concepts, and the alliance of VMware, Cisco and EMC that has formed to accelerate the adoption of these fully virtualized IT environments that form the basis of private clouds.

I don’t think anyone is really arguing that much with what I’m saying.  But changing our thinking and our actions is going to take a while, it seems …
Controversial Assertions Abound!

If you’ve had an opportunity to hear me speak about private clouds, or perhaps read my posts on the topic, you’ll understand that these concepts — collectively — represent a fundamental shift in how we think about all sorts of IT infrastructure.

If you’d like it in a nutshell, here it is: desktops and applications will become fully virtualized — not only will they be more efficient, they’ll also be more mobile and relocatable.

This means that IT organizations can choose to build their own internal version of a private cloud, use someone else’s compatible services, or any dynamic mix of the two.

There’s more to it than that — obviously — but you get the gist of it.

But once you start to unpack the underlying assertions, there are some very challenging and controversial statements we’re all making as a result.

And their collective impact — once they fully settle in — is nothing short of revolutionary.

The x64 Processor Wins

One underlying assertion is that the x64 architecture — as provided by Intel (and occasionally AMD) — will be used for the vast majority of computing workloads in the future.

If you’ve been following microprocessor technology trends, you’ll realize that Intel has just provided us with a big “tock” (major architectural step forward) via the recent Xeon 5500 “Nehalem” processors.  

Not only did processor technology take a major step forward, you can easily see the potential for more cores, more sockets, and so on.

By comparison, the legacy RISC and EPIC processors — such as PowerPC, SPARC and Itanium — look like they’ll become increasingly marginalized and irrelevant in the near future — if not already.

So, in our next-gen world, we know what the CPU will look like, don’t we?

The question is — how quickly will we collectively realize this, and move to the new world?

Legacy UNIX Becomes Less Relevant

If the underlying RISC processors become less relevant, their native UNIX operating systems become less relevant as well.

I did my first deep-dive on UNIX back in 1980 on Version 7 and the first Berkeley 4.x distributions.  It was cool stuff at the time.  Fast forward: it’s now almost 30 years later.  That’s a nice run for any operating system technology, but it’s time to move on.

The future belongs to modern hypervisors that cluster and cooperate across multiple servers, preserving the legacy interfaces while setting the foundation for perhaps the next 30 years — as evidenced by VMware’s vSphere.

Where will those workloads and applications running on traditional proprietary UNIX go?  The answer is pretty clear: either Windows or SUSE Linux running atop something like vSphere.

Once again, the landscape is pretty clear once you look at it.  The real question remains as to how quickly everyone realizes it, and makes the move.

The Fabric Is Converged

In the data center, Ethernet wins.

We’ve just recently seen the server vendors (HP, IBM and Cisco) announce they’re putting native CNAs (converged network adapters) on their server motherboards going forward.

One wire will do it all: FC, iSCSI, NAS, RDMA, TCP/IP, UDP, etc. etc.

We’ve been watching it develop for a while, and now it’s finally ready to be deployed.

It also looks like the inevitable economics of silicon scale are about to kick in here, quickly making other forms of data center connectivity specialized and relatively expensive.

How many of the new SANs that will be built in 2010 will be legacy FC, and how many will be built around the new FCoE?  

I think it’s now more about human behavior than technology …

Dynamic Pooling Everywhere

We’re starting to see dynamic pooling concepts show up just about everywhere in the infrastructure.

VMware’s vSphere creates a dynamic pool of virtual machines that auto-provision, auto-protect and auto-optimize.

Cisco’s UCS creates a dynamic pool of compute and memory that auto-provisions, auto-protects and auto-optimizes.  Not to mention a pool of converged fabric that does pretty much the same thing.

EMC’s V-Max creates a dynamic pool of virtualized storage that auto-provisions, auto-protects and auto-optimizes.

Those are just working examples — there are others out there.

But the key concept remains — the future of IT infrastructure is large, dynamic pools — and less static, isolated components that can’t participate in these pools.  Pooled IT physical resources are potentially far more efficient to own and operate.  
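The pooling concept above can be sketched in a few lines of code. This is a hypothetical illustration of the idea — capacity drawn from one shared pool on demand and returned for re-use, rather than carved into static, per-project silos — not any vendor’s actual API; all the names here are made up.

```python
class ResourcePool:
    """A shared pool of some resource (CPU, memory, storage) measured in units."""

    def __init__(self, total_units):
        self.total_units = total_units
        self.allocations = {}  # consumer name -> units currently held

    @property
    def free_units(self):
        return self.total_units - sum(self.allocations.values())

    def provision(self, consumer, units):
        """Carve units out of the shared pool; fail only if the whole pool is exhausted."""
        if units > self.free_units:
            raise RuntimeError("pool exhausted")
        self.allocations[consumer] = self.allocations.get(consumer, 0) + units

    def release(self, consumer, units):
        """Return units to the pool so any other consumer can use them."""
        held = self.allocations.get(consumer, 0)
        self.allocations[consumer] = max(0, held - units)


pool = ResourcePool(total_units=100)
pool.provision("app-a", 30)
pool.provision("app-b", 50)
pool.release("app-a", 10)   # returned capacity is immediately available to anyone
print(pool.free_units)      # 30
```

The contrast with the static model is the `release` step: in a siloed world, capacity handed to "app-a" would never come back.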

But we live in a world where it’s far too easy to fund IT by the project, or hand-pick individual components that do one thing or another well.

How many people will recognize this dynamic pooling trend and “jump in to the pool” in the next year or so?  Once again, I think it’s going to be less about technology, and more about our instinctive behaviors.

Storage Gets Really Simple

Take this pooling concept and apply it to storage.  

It deserves special attention, since it’s the only physical IT resource that tends to be consumed permanently rather than returned for re-use, as is the case with compute, memory and network resources.

We’ve now got architectures like V-Max that create single, giant pools of storage if we choose — even stretching across geographies over time.

We’ve now got the ability to virtually provision very large storage objects to pools of virtualized servers.

Technologies like FAST (fully automated storage tiering) promise to permanently and significantly change the economics of storage much the way that VMware has permanently changed the economics of servers.

Indeed, it won’t be too long before storage capacity management essentially boils down to buying more capacity (put some SATA drives in the array complex, the array will figure out what to do with them) or buying more performance (put some enterprise flash drives in the array complex, the array will figure out what to do with them).

And we’ll move permanently away from our hand-crafted, application-by-application, LUN-by-LUN, port-by-port approach to storage once and for all.
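The tiering idea above can be illustrated with a toy heuristic. To be clear, this is my sketch of the general concept behind automated tiering — hot extents earn flash, cold extents settle on SATA, with no per-LUN hand placement — and not EMC’s actual FAST algorithm; the data and function names are invented.

```python
def place_extents(extents, flash_slots):
    """Assign each extent to 'flash' or 'sata' purely by recent I/O rate.

    extents: dict of extent id -> I/Os per second observed recently.
    flash_slots: how many extents the flash tier can hold.
    Returns a dict of extent id -> tier name.
    """
    # Rank busiest extents first; the top flash_slots of them earn flash.
    ranked = sorted(extents, key=extents.get, reverse=True)
    placement = {}
    for i, ext in enumerate(ranked):
        placement[ext] = "flash" if i < flash_slots else "sata"
    return placement


# Made-up workload: indexes and logs run hot, archives sit idle.
io_rates = {"db-index": 9000, "db-log": 4000, "archive": 5, "backups": 1}
print(place_extents(io_rates, flash_slots=2))
```

Add SATA drives and `flash_slots` stays the same but total capacity grows; add flash drives and `flash_slots` grows — exactly the "buy capacity or buy performance" decision described above.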

But it’s pretty clear that the technology will be here far before most of us will be willing to accept that the model has changed for the better.

Operating system, compute, network, storage: how will we manage them?

Everything Becomes A Network Service

Today’s IT management model I lovingly refer to as “cylinders of excellence” — they’re not stovepipes!

We’ve got the server people.  The network people.  The database people.  The application people.  The storage people.

And then we layer on top a nice ITIL-like workflow to make sure that everything flows end-to-end.  It’s that final step that seems to be the hardest in larger IT shops :-)

In a world where all IT services are now delivered over some sort of network, it makes logical sense to start thinking about everything as a network service — provisioning virtual resources, loading them up with application logic containers, managing end-to-end service delivery, securing and auditing their contents, and so on.

Driving a legacy workflow where the server staff does their thing, the data center network staff does their thing, the storage staff does their thing, the database staff does their thing — and so on — looks positively quaint and archaic by comparison.

Security — in particular — becomes wonderfully simple and elegant in this new fully-virtualized and network-centric world.  If I was a career IT security professional, I’d be doing everything I could to accelerate this transition.

Simply put — all aspects of IT management move to the network.  If you think in terms of internal power struggles for who gets to control IT operations, the network guys win.

It’s pretty obvious to most of us what the new management and orchestration model will look like.

People considering next-gen IT often ask “where are the tools?”.  The answer is that the new tools are mostly there already — however, they don’t conform to legacy expectations of how IT has been historically managed.

It’s becoming less of a technology problem, and more of an organizational and HR problem.

Learning To Manage IT Like A Cloud

I’ve become fond of saying that it’s not a cloud unless it gets managed like a cloud.

Look at other forms of mature technology infrastructure as an example: in a modern phone network, the ratio of phone calls to operators is probably billions to one.

By the way, when’s the last time you ever spoke to a phone operator?

If you’ve ever looked closely at how newer cloud-like IT services operate, you’ll notice something unusual as compared with normal IT: almost no people are around.

Most of the workflows are user-initiated and zero-touch.  Vast pools of resources are provisioned into “good-enough” chunks that are dynamically flexed up and down as conditions (and willingness to pay!) change.

Compare this with what usually happens today.

We do detailed requirements analysis on just about every new request.  Historical data shows that we get it wrong (too much or too little) the vast majority of the time — so why do we bother?  Why not just dynamically adjust as we go along?
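The "adjust as we go" alternative can be sketched simply: start with a rough guess, then flex capacity against observed demand each interval. This is a minimal illustration of the idea, not any product’s algorithm — the thresholds and step sizes are made-up assumptions.

```python
def next_capacity(current, observed_demand, headroom=0.2, step=1):
    """Return the capacity to run in the next interval.

    Grow when demand eats into the headroom; shrink when we're
    far over-provisioned; otherwise leave well enough alone.
    """
    if observed_demand > current * (1 - headroom):
        return current + step             # running hot: add a unit
    if observed_demand < current * (1 - 2 * headroom):
        return max(1, current - step)     # running cold: give a unit back
    return current                        # within the comfort band: no change


# Start with a rough guess and let observed demand correct it over time.
capacity = 4
for demand in [3.9, 4.2, 4.5, 2.0, 1.5]:
    capacity = next_capacity(capacity, demand)
```

The point isn’t the particular thresholds — it’s that a dumb feedback loop applied continuously beats a detailed up-front sizing exercise that history says we get wrong most of the time anyway.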

We drive complex workflows across multiple organizations.  Each sub-group has to look inside their own little domain and figure out how to get it done with their own resources and manpower.

IT groups spend inordinate time in meetings, reviewing progress, identifying obstacles, working through issues.

Meanwhile, the business user is waiting, waiting, waiting.  I’m convinced that most of the reason business users like services such as Amazon’s is that there’s no IT group to talk to, and no waiting.

This sort of process and workflow arose from a world where IT physical resources were expensive, isolated and inflexible.  

Now these same resources have fundamentally changed in nature: they’ve become commoditized, pooled and dynamic.

And the fundamental processes and workflows we use for the vast majority of IT infrastructure have to be re-thought as a result.

How long do you think that’s going to take?

Learning To Trust External Service Providers

As I talk about the potential to federate virtual machine containers internally and externally for a vast variety of motivations (economics, geographic location, business continuity, regulatory policy, etc.), the natural reaction is sometimes shock and horror.

“How could we ever trust an outside provider to do something so important?”

I’ve learned not to attack this one head-on, but let’s look at a little history.  These same IT shops trust vendors to provide products and support.  They trust power providers, and DR providers, and network providers, and occasionally desktop helpdesk providers, and consulting providers, and staffing providers, and … well, you get the idea.

IT is a bit more hollowed-out already than many people think.  In one sense, it’s just a natural evolution of a trend that’s been going in IT and other corporate organizations for quite some time.

Sure, the external service providers need to prove they can do the job faster, better and cheaper than IT can do themselves, but it’s happened before, and it’s going to happen again with certainty.

But it’s pretty clear that learning to trust others to do their job is a key component of organizational model change.

Putting Users In Control Of Their Needs

As long as we’re sharing ideas that sometimes generate shock-and-horror, the notion of self-service access to shared computing resources and applications can occasionally strike fear in the IT manager’s heart.

Now, let’s be fair, nobody is saying that *everything* becomes self-service in IT, but I’d argue that the vast majority of IT infrastructure resource requests don’t require a 437-step approval and provisioning process.

I remember back when desktop computing first came into the work environment.  We called them “personal computers” — they belonged to us as individuals, and IT didn’t have a lot to say about what we did with them.  

That’s why we liked them so much!

Things have changed, but I’m convinced that one of the strong appeals that external cloud, SaaS and service providers have to business users is that these people can get what they want and need — quickly and efficiently — usually with no questions asked.

Took me about 3 minutes to get on Amazon’s platform.  Most of that time was spent entering my credit card information.  Hard to beat that sort of instant gratification.

Putting It All Together

Look, there are all sorts of good reasons not to pursue a certain strategic direction when it comes to IT infrastructure.  I’ve heard them all.

The technology is not fully baked yet.
There aren’t any accepted industry standards.
We’re not funded to do that sort of stuff.
We’re not organized to do that sort of stuff.
That’s not how we do things around here.
We don’t have the skills to do this sort of stuff.
What if something bad happens?

All of these are logical and natural reactions to new ideas and concepts in IT.  And many people will inevitably pursue this course for some time.

But a few — maybe a precious few — will realize that the future is now starting to get here, and we can start pursuing an entirely different model of IT infrastructure: the private cloud.

