User-Centred Design was brought to my attention a few years ago when I was client-side in Ecommerce at Virgin Atlantic. Our goals and strategies always focused on the business and IT efficiencies, so when the EMC User Experience team introduced me to the concept of a
I wanted to hear what industry experts were saying on the subject, and had the opportunity to attend a talk on ‘The People Behind User-Centred Design – and why they hold the keys to your future…’ at Cass Business School in Moorgate on Tuesday evening. I anticipated a creative audience of Art Directors, Designers and User Experience experts, so was surprised to see a mainly business and IT audience in a fairly sober environment.
The talk was held in a lecture theatre with a panel of speakers moderated by the editor of WIRED magazine, David Rowan. His panel included the Creative Research Fellow at
They each gave a three-minute synopsis of what User-Centred Design means to them and the importance of design-led development, citing the new digital work Reuters has done at Reuters Labs to feed ‘the ecosystem’ that companies like Apple have created. They spoke of McLaren Automotive Group’s in-car system designs, where systems have been optimized for the engineers so that pit-stop changes can be made in seconds, and of working cross-industry with companies like National Air Traffic Control to find opportunities from others who are demonstrating good practice in this field.
They highlighted the challenge of embedding the mind-set and practice of User-Centred Design into the organisation and changing traditional ways of thinking that tend to start with system solutions rather than customer solutions. Company culture is key to this, and enabling the business to work closely with designers and the technology teams is crucial. Roles like business-minded architects, engineers and psychologists were also seen as extremely valuable, more so than a single role of ‘Chief Design Officer’, since that could be seen as authoritarian. They felt it would be better to weave the principles throughout the company and focus on training.
I did find it a little disappointing that the obvious choice of Apple was cited as the best example of User-Centred Design, and it made me think about companies I believe design with the user in mind: companies like tesco.com, which designs for external customers in online grocery ordering right through to dotcom-only grocery warehouses, with handheld devices designed for staff ease of use and efficiency. Morgan Stanley is another good example: they are redesigning their trading systems with designers and User Experience Architects on the trading floor, watching human behaviour and implementing efficiencies, with impressive financial results.
A question and answer session at the end raised points like ‘what do you do when it goes wrong?’, to which there was no clear response, and a recruitment consultant was keen to know ‘how do you find the right people?’, telling us about the challenges of recruiting the right people and getting these roles embedded within organisations.
It seems there’s still a way to go for companies in making the shift to a fully
It did, however, make me optimistic to see so many business and IT people attending an event at a business school in London, and I feel encouraged that as more business people and IT departments start to think in a user-centred way, the potential is truly massive. It gives EMC the opportunity to help clients design technology solutions with customers at the centre.
Useful related reading:
The Inmates Are Running The Asylum by Alan Cooper
About Face 3: The Essentials of Interaction Design by Alan Cooper, Robert Reimann and David Cronin
Wrench in the System by Harold Hambrose
The title of this post is a variant on the intriguing "infrastructure is code" meme from the #devops community. I think it’s a useful idea to remind ourselves of — especially as technology transitions.
Even though some of you reading this might think the statement is blindingly obvious, it’s clear that the vast majority of people think of boxes with blinking lights when you say the word "storage".
And I think this is going to change sooner rather than later.
Having now been directly involved in storage for over 15 years, I feel I can safely make a reasonable judgment about when things are changing.
So let’s go look at the current landscape …
For starters, most storage hardware today is built out of the same industry-standard parts bin used by the server guys. Yes, there are a few storage stalwarts trying to claim differentiation through this bit or that bit of unique silicon, but the secular trend is pretty obvious — parts is parts.
Now, I think there’s still room for useful hardware differentiation in areas like innovative architecture, or clever packaging, or using the latest merchant silicon chips, or perhaps more reliable manufacturing processes.
All that being said, I think the opportunities for sustained differentiation through hardware prowess alone are becoming more rare over time.
And we all play in a very competitive market indeed. Much like customers won’t accept dated or over-priced server hardware designs, they won’t accept dated or over-priced storage hardware designs.
Thinking About Storage Software
At its most basic level, you expect to write information to a storage platform, and get it back again.
You’d like to do so in a convenient format — more traditional block and file formats, perhaps something newer like objects, even maybe something like tables. That’s a function of software, not hardware.
You’d like the integrity of the data protected from all sorts of bad things that can happen — hardware failures, software failures, human error, the list goes on. That’s a function of software, not hardware.
You’d like to wring the maximum in performance and efficiency from the hardware you own: move the popular data to the high-performance media, the less-popular data to cost-effective stuff, and wring the excess capacity out with things like compression and deduplication.
If you tend to think geographically, you’d like the right information in the right location at the right time if possible. Whether that’s to better protect, or improve user experience, or something else — that’s all software as well.
I could go on, but — when you think about it — just about everything we talk about that’s new, interesting and useful tends to boil down to a software discussion.
Sure, there are new hardware bits like faster processors, and enterprise flash drives, and newer 10GbE interconnects — but it takes software to make all that stuff really useful.
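To make the point concrete, here’s a toy sketch of one of those software functions — block-level deduplication. This is purely illustrative (the function names and 4KB chunk size are my own assumptions, not any product’s implementation): each unique fixed-size chunk is stored once, and repeats cost only a reference.

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks; store each unique chunk once."""
    store = {}    # chunk digest -> chunk bytes (each unique chunk stored once)
    recipe = []   # ordered digests needed to reassemble the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # a repeated chunk costs nothing extra
        recipe.append(digest)
    return store, recipe

def restore(store: dict, recipe: list) -> bytes:
    """Reassemble the original data from the chunk store and the recipe."""
    return b"".join(store[d] for d in recipe)

# Highly repetitive data deduplicates dramatically:
data = b"A" * 4096 * 100 + b"B" * 4096 * 100
store, recipe = dedupe(data)
assert restore(store, recipe) == data
stored = sum(len(c) for c in store.values())
print(f"logical: {len(data)} bytes, physically stored: {stored} bytes")
# logical: 819200 bytes, physically stored: 8192 bytes
```

Notice that nothing here cares what disks sit underneath — which is exactly the point: the efficiency win is entirely a software artifact.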
The Impact Of Open Source
Much like industry-standard components and architectures set the floor for cost-effective hardware, I think open source software sets the floor for cost-effective software functionality.
There’s still room to innovate in software, but you have to do it in areas that haven’t been well-covered by the open source community. And — make no mistake — it’s a safe bet that open source software will be an ever-increasing part of our enterprise environments.
Resistance to either trend appears futile
Separating Software From Hardware
We’ve just come to assume that storage software is inevitably woven into storage hardware. But as the industry moves to more standard components and architectures, that’s becoming more of a business model discussion, and less of a technology discussion.
Examples are starting to abound, especially within EMC’s portfolio.
Our Atmos cloud storage platform is now available as a VMware virtual machine. Run it on just about any VMware-supported hardware platform, and you’ve got a fully featured, next-generation, distributed, object-oriented, metadata-rich, policy-driven cloud storage environment.
One could separately debate the merits of running Atmos storage software on a generic hardware platform vs. one that is specifically built for purpose, but that’s more of a discussion around implementation choices — and choice is good.
Many of you are aware that the Avamar client-side dedupe backup platform works basically the same way — your backup target can either be a dedicated hardware device running Avamar, or the same functionality running in a VM on generic hardware — it’s your choice.
Going further, there’s a much larger universe of EMC storage products just waiting to escape the confines of physical hardware: RecoverPoint, VPLEX, Celerra, Centera — the list goes on.
There are even some interesting open-source choices if you go looking: for example, the EMC LifeLine stack, which powers the increasingly powerful Iomega unified storage devices.
So why aren’t all these great things being done today? Lots of issues, but the big one is — it’s hard!
Making storage software work predictably and reliably in a virtual machine takes substantial engineering effort. And that incremental effort has to be balanced against other investment opportunities: things like adding new features, or supporting new hardware, or perhaps deep integrations with other environments.
It’s happening — it’s just not an overnight process. Sorry to say, the future isn’t quite here yet …
Fast Forward Several Years
Imagine you’re in charge of storage decisions at your company, and you’re trying to put together a solution for part of your operation.
You might start by assembling a set of services you’ll need to provide for applications and users. You evaluate different software stack options for functionality, price, reliability, support, ease-of-use, integration, APIs, etc.
You do so by composing various storage software VMs, and putting the resulting stacks through their paces. Basic presentation services (file, block, object, etc.). Some replication stuff, maybe some auto-tiering and/or intelligent archival stuff.
You test features and functions, integration points and management interfaces. All using virtual machines in whatever test bed you’ve got handy.
No need to consider storage as hardware just yet.
When you’re ready to implement, you’ve got more choices: you can stick with the storage-software-in-a-VM approach, or perhaps consider purpose-built hardware if your needs so dictate.
Functionality first, implementation second.
Farther Down The Line
The migration of storage functionality from hardware to software will likely change how storage hardware itself is built. At the low end of the market, all-in-one storage can learn new tricks simply by invoking new elements of a (presumably virtualized) software stack.
And at the high end of the market, it’s not hard to imagine larger, dynamic pools of virtualized storage capabilities that flex both resources and functionality much the way virtualized servers do today. To be fair, though, that’s a reasonable description of what a VMAX and VPLEX do today.
Indeed, we can easily see storage software functionality running flexibly where it makes the most sense — on a general purpose all-in-one storage hardware platform, or perhaps as a set of virtualized tasks in a server farm, or perhaps on an appliance dedicated to a task — or any combination as needs shift.
The runaway success of VMware has caused many of us to think of "servers" in terms of software images that are invoked as needed. The hardware is still there, and it needs to do its job, but we think about it differently.
From aspirational to pragmatic:
EMC Unified Storage Is 20% More Efficient. Guaranteed.
That’s the tag line for the storage efficiency campaign we’ve recently launched in this hotly contested part of the market.
And, from all indications, it appears that it’s working quite well …
If you haven’t been following this particular drama closely, maybe I should bring you up to date.
This specific part of the storage market — dubbed "unified storage" (one storage platform that supports both file and block protocols) — is one of the most brutally competitive parts of the storage and larger IT landscape.
Smaller organizations use these storage arrays to run just about everything they’ve got. Larger organizations use them for non-mission-critical applications and general-purpose storage. And some organizations occasionally deploy vast amounts of this storage to support specific online services.
In this category, it’s hard to differentiate on performance, since — well — for many of the use cases good enough is good enough. Ditto for topics like availability and replication. And, even though there’s a ton of great software integration between these arrays and environments like VMware and Microsoft, there’s only so much of that integration stuff you can use.
Which leaves us with the central topic of efficiency — who can use less raw storage capacity to get the job done? At the end of the day, everyone pays pretty much the same for component level inputs … it’s what you get out of it that matters.
Lots Of New Technology Here
Over the past few years, there have been a lot of new approaches to drive storage efficiency, and they tend to show up in this segment first. Things like thin provisioning. Compression and deduplication. The use of enterprise flash drives to enable greater use of low-cost storage devices, like SATA. Even spin-down and automigration to even lower-cost archives, whether they be internal to the organization or provided as an external service (e.g. cloud).
So much so, in fact, that it’s very hard to sort through all the noise and fanfare around who’s more efficient. And, given the competitiveness of this segment, there’s an awful lot of noise indeed.
So we decided to make it easy for everyone.
The First Round Of Storage Guarantees
About a year or so ago, we all saw the first round of "efficiency guarantees" pop up in the market. Frankly speaking, I and many others saw them for what they were — basically, a cheap marketing gimmick.
Why? Although they offered up the appearance of considerable savings (e.g. up to 50% !!!) they had some fundamental flaws.
First, they were usually up against easy compares — to qualify, you had to switch between RAID 1 (mirroring) and parity RAID. That gets you 40%+ just there. Second, to get these results, frequently you had to use more exotic configurations that required turning off certain useful features, like snap reserves.
Third, when you went looking for details, there were all sorts of useful workloads excluded, like databases, or data objects that were already compressed.
Finally, there were multiple pages of terms and conditions, boatloads of exclusions and caveats, and a registration and acceptance process involved. All of the work to get any potential value had to be done by the customer.
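The "40%+ just there" from the RAID switch in the first point is easy to check with a little arithmetic. Mirroring delivers 50% of raw capacity as usable; a parity scheme such as a 7+1 RAID 5 group delivers 87.5%. (The 7+1 geometry is my example, not taken from any specific guarantee.) Delivering the same usable capacity therefore takes roughly 43% less raw disk:

```python
def raw_needed(usable_tb: float, efficiency: float) -> float:
    """Raw capacity required to deliver a given usable capacity."""
    return usable_tb / efficiency

usable = 100.0                        # TB the application actually needs
raid1 = raw_needed(usable, 1 / 2)     # mirroring: 50% of raw is usable
raid5 = raw_needed(usable, 7 / 8)     # 7+1 parity RAID: 87.5% of raw is usable

saving = 1 - raid5 / raid1            # raw capacity saved by the RAID switch
print(f"RAID 1: {raid1:.0f} TB raw; RAID 5 (7+1): {raid5:.1f} TB raw")
print(f"saving from the RAID switch alone: {saving:.0%}")  # 43%
```

Which is why a guarantee built on that comparison is claiming credit for the RAID level, not for any real platform efficiency.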
Some of us thought we could do better, so we did.
A Better Guarantee?
EMC, in the normal course of our business, purchases and tests just about every decently competitive storage array in the marketplace. We put them in the lab, and run them through their paces.
Sometimes, it’s for interoperability and compatibility purposes. A lot of the EMC portfolio has to work well with storage arrays we don’t make. Other times, it’s to find out what’s really behind all the noisy claims that people make — we really want to know for ourselves.
And, in the course of doing all this, we were continually struck by one observation — many of these competitive storage devices weren’t all that efficient at converting raw storage capacity to usable capacity in a predictable and usable manner.
So we decided to do something about it …
The EMC Unified Storage Guarantee
We tried to make this as simple as possible.
Configure an EMC unified storage platform using our tools and standardized best practices.
Configure the other guy’s unified storage platform using their tools and standardized best practices, or use ours if you don’t have access to theirs.
Compare the raw capacities — if EMC doesn’t do the job with at least 20% less raw capacity, we’ll make up the difference.
No disclaimers, caveats, exceptions, legalese, registration processes, etc.
Simply put — no BS.
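In numbers, the comparison works out like this — a sketch of the math as described above, with my own illustrative figures, not the program’s actual terms:

```python
def guarantee_shortfall(emc_raw_tb: float, competitor_raw_tb: float) -> float:
    """Free capacity owed if the EMC config misses the 20%-less-raw mark."""
    target = 0.8 * competitor_raw_tb       # EMC raw must be at most 80% of theirs
    return max(0.0, emc_raw_tb - target)   # any shortfall is made up in capacity

# Competitor's config needs 1000 TB raw; EMC's comes in at 780 TB raw:
print(guarantee_shortfall(780, 1000))      # 0.0 -- comfortably past the 20% mark
# If an EMC config came in at 850 TB raw, 50 TB of free capacity closes the gap:
print(guarantee_shortfall(850, 1000))      # 50.0
```

Either way the customer ends up at least 20% ahead on raw capacity — which is the whole pitch.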
In addition to the program web page, there are a couple of cool promotional videos we’ve done (here and here), as well as Christopher Kusek’s blog (@cxi) where he’s having way too much fun with all of this. The backstory here is also fun: Chris worked for one of our competitors in this space for many years before recently joining EMC. There’s also a nice Facebook fan page if you’re so inclined.
You’ll see more of this program in the future for one simple reason: it’s working.
How This Plays Out
Customers and partners of all sizes and shapes are taking us up on this offer.
It might be a modest 10TB filer through a partner, it might be a multi-petabyte transaction as a direct account — or anything in between. Again, as I said above, no exceptions and no BS.
The prospect of saving, say, 200TB on a petabyte-sized config definitely gets a bit of attention.
Customers are putting our configs up against the other guys, and they’re discovering what we’ve known all along — the other guys are pretty inefficient when it comes to converting raw capacity to usable stuff.
Most times, these people are seeing at least a 20% difference, maybe more. To be fair, there are a few exceptions where we came in a bit under the 20% mark, and EMC has quickly made good with more free capacity with no fuss whatsoever.
Are these customers using the 20% savings to spend less on storage? No.
Generally speaking, they’re using the savings to get an additional 20% of capacity from EMC.
Think about it: 20% more for your money from EMC.
And that’s a deal that many people are finding just too tempting to pass up.
What Lies Ahead?
As far as I can see, there’s no reason why we wouldn’t make this program a permanent fixture of our competitive offerings going forward.
The underlying basis for our storage efficiencies is architectural, and hard for our competitors to replicate. The program isn’t really costing us anything, since in most cases the 20% savings is already there, or more.
This could go on for a very long time indeed — there’s no reason to stop.
So, I have to ask — what are *you* going to do with your extra 20%?