Storage Informer

Tag: SATA

An Offer You Can’t Refuse

on Jul.09, 2010, under Storage

From aspirational to pragmatic:

EMC Unified Storage Is 20% More Efficient.  Guaranteed.

That’s the tag line for the storage efficiency campaign we’ve recently launched in this hotly contested part of the market.

And, from all indications, it appears that it’s working quite well …

The Background

If you haven’t been following this particular drama closely, maybe I should bring you up to date.

This specific part of the storage market, dubbed "unified storage" (one storage platform that supports both file and block protocols), is one of the most brutally competitive corners of the storage business and the larger IT landscape.

Smaller organizations use these storage arrays to run just about everything they’ve got.  Larger organizations use them for non-mission-critical applications and general-purpose storage.  And some organizations occasionally deploy vast amounts of this storage to support specific online services.

In this category, it’s hard to differentiate on performance, since — well — for many of the use cases good enough is good enough.  Ditto for topics like availability and replication.  And, even though there’s a ton of great software integration between these arrays and environments like VMware and Microsoft, there’s only so much of that integration you can use.

Which leaves us with the central topic of efficiency: who can use less raw storage capacity to get the job done?  At the end of the day, everyone pays pretty much the same for component-level inputs … it’s what you get out of them that matters.

Lots Of New Technology Here

Over the past few years, there have been a lot of new approaches to driving storage efficiency, and they tend to show up in this segment first.  Things like thin provisioning.  Compression and deduplication.  The use of enterprise flash drives to enable broader use of low-cost storage devices, like SATA.  Even spin-down and auto-migration to even lower-cost archives, whether internal to the organization or provided as an external service (e.g. cloud).
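As an illustration of why deduplication drives down raw capacity needs, here is a minimal Python sketch of fixed-size block deduplication. Real arrays use far more sophisticated chunking and indexing; the 4KB block size and SHA-256 fingerprints here are my assumptions for illustration, not anything from this post.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size chunking; real systems may use variable-size chunks

def dedup_ratio(data: bytes) -> float:
    """Return the raw-to-unique ratio for fixed-size block deduplication."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Ten copies of the same 4 KB block dedupe down to one stored block.
data = bytes(4096) * 10
print(dedup_ratio(data))  # -> 10.0
```

The same idea, applied across snapshots, backups, and VM images, is where much of the "use less raw capacity" story comes from.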

So much so, in fact, that it’s very hard to sort through all the noise and fanfare around who’s more efficient.  And, given the competitiveness of this segment, there’s an awful lot of noise indeed.

So we decided to make it easy for everyone.

The First Round Of Storage Guarantees

About a year or so ago, we all saw the first round of "efficiency guarantees" pop up in the market.  Frankly speaking, I and many others saw them for what they were — basically, a cheap marketing gimmick.

Why?  Although they offered the appearance of considerable savings (e.g. up to 50%!), they had some fundamental flaws.

First, they were usually built on easy comparisons: to qualify, you had to switch from RAID 1 (mirroring) to parity RAID, which gets you 40%+ all by itself.  Second, to get these results, you frequently had to use more exotic configurations that required turning off certain useful features, like snap reserves.

Yuck.
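To see where that 40%+ comes from, here is a quick back-of-envelope calculation. The 7+1 RAID 5 group width is my illustrative assumption; other widths give similar numbers.

```python
def raw_needed(usable_tb: float, data_disks: int, parity_disks: int) -> float:
    """Raw capacity needed for a given usable capacity under a RAID layout."""
    return usable_tb * (data_disks + parity_disks) / data_disks

usable = 100.0  # TB of usable capacity required
raw_mirror = raw_needed(usable, 1, 1)   # RAID 1: 200 TB raw
raw_parity = raw_needed(usable, 7, 1)   # RAID 5, 7+1: ~114.3 TB raw
savings = 1 - raw_parity / raw_mirror
print(f"{savings:.1%}")  # -> 42.9%
```

In other words, simply changing the RAID scheme accounts for nearly all of a "50% savings" claim before any genuine efficiency technology enters the picture.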

Third, when you went looking for details, there were all sorts of useful workloads excluded, like databases, or data objects that were already compressed.

More yuck.

Finally, there were multiple pages of terms and conditions, boatloads of exclusions and caveats, and a registration and acceptance process involved.  All of the work to get any potential value had to be done by the customer. 

Maximum yuck.

Some of us thought we could do better, so we did.

A Better Guarantee?

EMC, in the normal course of our business, purchases and tests just about every decently competitive storage array in the marketplace.  We put them in the lab, and run them through their paces.

Sometimes, it’s for interoperability and compatibility purposes.  A lot of the EMC portfolio has to work well with storage arrays we don’t make.  Other times, it’s to find out what’s really behind all the noisy claims that people make — we really want to know for ourselves.

And, in the course of doing all this, we were continually struck by one observation — many of these competitive storage devices weren’t all that efficient at converting raw storage capacity into usable capacity in a predictable manner.

So we decided to do something about it …

The EMC Unified Storage Guarantee

We tried to make this as simple as possible.

Configure an EMC unified storage platform using our tools and standardized best practices.

Configure the other guy’s unified storage platform using their tools and standardized best practices, or use ours if you don’t have access to theirs.

Compare the raw capacities — if EMC doesn’t do the job with at least 20% less raw capacity, we’ll make up the difference.

No disclaimers, caveats, exceptions, legalese, registration processes, etc. 

Simply put — no BS.
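The three steps above boil down to simple arithmetic. Here is a hypothetical sketch of the comparison; the function name and the "make up the difference" calculation are my reading of the program as described, not EMC’s actual process.

```python
def guarantee_shortfall(emc_raw_tb: float, competitor_raw_tb: float) -> float:
    """Free capacity (TB) owed if the 20% raw-capacity saving isn't met.

    The guarantee as described: EMC should need at least 20% less raw
    capacity than the competitor for the same job; any shortfall is
    made up with free capacity.
    """
    target = 0.8 * competitor_raw_tb
    return max(0.0, emc_raw_tb - target)

# Competitor config needs 1000 TB raw; EMC comes in at 850 TB.
# The target is 800 TB, so 50 TB of free capacity is owed.
print(guarantee_shortfall(850, 1000))  # -> 50.0
```

If the EMC config lands at or below the 80% mark, the shortfall is zero and the guarantee is simply met.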

In addition to the program web page, there are a couple of cool promotional videos we’ve done (here and here), as well as Christopher Kusek’s blog (@cxi) where he’s having way too much fun with all of this. The backstory here is also fun: Chris worked for one of our competitors in this space for many years before recently joining EMC.  There’s also a nice Facebook fan page if you’re so inclined.

You’ll see more of this program in the future for one simple reason: it’s working.

How This Plays Out

Customers and partners of all sizes and shapes are taking us up on this offer. 

It might be a modest 10TB filer through a partner, it might be a multi-petabyte transaction as a direct account — or anything in between.  Again, as I said above, no exceptions and no BS.

The prospect of saving, say, 200TB on a petabyte-sized config definitely gets a bit of attention :-)

Customers are putting our configs up against the other guys, and they’re discovering what we’ve known all along — the other guys are pretty inefficient when it comes to converting raw capacity to usable stuff.

Most times, these people are seeing at least a 20% difference, maybe more.  To be fair, there are a few exceptions where we came in a bit under the 20% mark, and EMC has quickly made good with more free capacity, with no fuss whatsoever.

Are these customers using the 20% savings to spend less on storage?  No.

Generally speaking, they’re using the savings to get an additional 20% of capacity from EMC.

Think about it: 20% more for your money from EMC.

And that’s a deal that many people are finding just too tempting to pass up.

What Lies Ahead?

As far as I can see, there’s no reason why we wouldn’t make this program a permanent fixture of our competitive offerings going forward.

The underlying basis for our storage efficiencies is architectural, and hard for our competitors to replicate.  The program isn’t really costing us anything, since in most cases the 20% savings is already there, or more.

This could go on for a very long time indeed — there’s no reason to stop.

So, I have to ask — what are *you* going to do with your extra 20%?

:-)


The One Million IOPS game

on Sep.26, 2009, under Storage

A few months back I saw a press release on Reuters from Fusion-io and HP claiming to hit 1 million IOPS with a combination of five 320GB ioDrive Duos and six 160GB ioDrives in an HP ProLiant DL785 G5, a 4-socket server with 4 cores per socket, for a total of 16 cores.  My first reaction was: wow, that is amazing; a million IOPS is something any DBA running a high-performance database would love to get their hands on.  But when I did a quick search on the Internet to see how affordable the solution would be, I was horrified: the cost was close to that of a couple of Mercedes E-Class sedans.  And although the performance was stellar, the 2KB chunk size made me ask which application does 2KB reads/writes anyway; the default Windows allocation unit is 4KB.

As time went by I got busy with other work, until our NAND Storage Group told us they were coming up with a product concept based on PCIe to demonstrate a real 1 million IOPS at the 4KB block size that real-world applications actually use.  This triggered the thought: what does it take to achieve 1 million IOPS using generically available, off-the-shelf components?  I hit my lab desk to find out.

Basically, getting a million IOPS depends on three things: 1. Blazing fast storage drives.  2. Server hardware with enough PCIe slots and good processors.  3. Host bus adapters (HBAs) capable of handling the significant number of IOPS.

Setup: Intel solid-state drives were my choice; a lot has been discussed and written about the performance of Intel SSDs, so that was an easy choice to make.  I selected Intel X25-M 160GB MLC drives built on the 34nm process.  These drives are rated for 35K random 4KB read IOPS and seemed like a perfect fit for my testing.  Then I started searching for the right dual-socket server: the Intel Server System SR2625URLX, with five PCIe 2.0 x8 slots, provided enough slots to connect HBAs.  The server was configured with two Intel Xeon W5580 processors running at 3.2GHz and 12GB of memory.  The search for an HBA ended when LSI showed their 9210-8i series (code-named Falcon), rated to perform 300K IOPS.  These are entry-level HBAs that can connect up to eight drives to eight internal ports.  Finally, I had to house the SSDs somewhere in a nice-looking container, and a container was necessary to provide power connectivity to the drives.  I zeroed in on the Super Micro 2U SuperChassis 216 SAS/SATA HD BAY, which came with dual power supplies and no board inside, but let me simply plug the drives into the panel and not worry about powering them.  The other interesting thing about this chassis is that it comes with six individual connectors on the backplane, so each connector handles only four drives; this is very different from active backplanes, which route the signal across all the drives connected to them, and it allowed me to connect just four drives per port on the HBA.  I also had to get a 4-slot disk enclosure (just some unnamed brand from a local shop); in total I had the capability to connect 28 drives.
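Using the vendor ratings quoted above, a quick budget check shows how the component ceilings stack up. This is illustrative arithmetic only; real delivered IOPS depend on queue depth, alignment, and controller overhead.

```python
# Back-of-envelope IOPS budget from the rated component figures above.
SSD_RATED_IOPS = 35_000   # Intel X25-M, random 4 KB reads (vendor rating)
HBA_RATED_IOPS = 300_000  # LSI 9210-8i (vendor rating)
ssds, hbas = 28, 5

drive_ceiling = ssds * SSD_RATED_IOPS   # 980,000 IOPS across 28 drives
hba_ceiling = hbas * HBA_RATED_IOPS     # 1,500,000 IOPS across 5 HBAs

# The drives, not the HBAs, are the binding constraint in this setup.
print(min(drive_ceiling, hba_ceiling))  # -> 980000
```

Note that 24 drives would be rated for only 840K IOPS in aggregate, which is consistent with needing the extra enclosure to cross the million mark.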
With all the hardware in place, I installed Windows Server 2008 Enterprise Edition and Iometer (an open-source tool for testing I/O performance).  Two HBAs were fully populated, using all eight ports; the other three HBAs had only four ports populated.  The drives were left without a partition.  Iometer was configured with two manager processes and 19 worker threads: 11 on one manager and 8 on the other.  4KB random reads were selected, with sector alignment set to 4KB.  Iometer was set to fetch the last update on the result screen.
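For readers without Iometer handy, the access pattern described above (random 4KB reads at 4KB alignment) can be sketched in a few lines of Python. This is a single-threaded illustration, not a benchmark: buffered reads through the page cache will not stress a device the way Iometer's many unbuffered workers do.

```python
import os
import random

BLOCK = 4096  # 4 KB reads, 4 KB-aligned, mirroring the Iometer profile

def random_read_pass(path: str, reads: int) -> int:
    """Issue `reads` random 4 KB-aligned reads from `path`; return bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        blocks = size // BLOCK
        total = 0
        for _ in range(reads):
            offset = random.randrange(blocks) * BLOCK  # always block-aligned
            total += len(os.pread(fd, BLOCK, offset))
        return total
    finally:
        os.close(fd)
```

Pointing a loop like this at a raw device (with OS caching bypassed) is, in spirit, what each Iometer worker thread does continuously while the tool tallies completions per second.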

Result: Once the test started with 24 drives, I found I was a few thousand IOPS short of 1M, so I used the 4-bay enclosure to connect another 4 SSDs, taking the total number of SSDs to 28.  The server then delivered a sustained one million IOPS, with an average latency of 0.88 ms and 80-85% CPU utilization.

Conclusion: We recently demonstrated this setup at the Intel Developer Forum 2009 in San Francisco, where it grabbed the attention of many visitors, largely because this is something an IT shop could put together from off-the-shelf components.

Next Steps: I will be spending some time getting this setup running with a RAID configuration, and possibly using a real-world application to drive the storage.  That needs a lot of CPU resources, and I have in mind one upcoming platform from Intel that will let me do it.  I will post as I come up with follow-up experiments.

-Bhaskar Gowda
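Little's law ties the throughput and latency in that result together: average concurrency equals throughput times average latency. A small sketch (the per-drive split is my arithmetic, not a measured figure):

```python
def outstanding_ios(iops: float, latency_ms: float) -> float:
    """Little's law: average in-flight I/Os = throughput x average latency."""
    return iops * (latency_ms / 1000.0)

# 1M IOPS at 0.88 ms average latency implies roughly 880 I/Os in flight,
# i.e. a deep aggregate queue spread across 28 SSDs (~31 per drive).
print(round(outstanding_ios(1_000_000, 0.88)))  # -> 880
```

This is why the worker-thread count and queue depth matter so much in tests like this: without enough outstanding I/Os, the drives simply cannot be kept busy enough to reach their rated throughput.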

URL: http://communities.intel.com/community/openportit/server/blog/2009/09/26/the-one-million-iops-game


No More Tiers

on Jul.16, 2009, under Storage


Professional smart guy Mike Dutch from the office of the CTO in EMC Storage Software Group (SSG) has an interesting post up on Backup And Beyond about Storage Tiers. How we think of them today and perhaps how we should think of them going forward.

Here’s a snippet:

Should deduplicated storage be considered a storage tier?  I would say “no,” and here’s why: a technology such as deduplication can span, and optimize across, all tiers.

A storage tier is storage space that has availability, performance, and cost characteristics different enough from other storage tiers as to economically justify the movement of data between it and other storage tiers based on the importance (value, performance need, etc.) of the data. While storage tiers are often thought of as being tied to a particular type of hardware, e.g. Flash, FC, SAS, SATA, VTL, PTL, COM (Computer Output Microfiche), or even paper, this is not necessarily the case. For example, highly available cloud or network-based virtual disks could leverage multiple technologies within their single tier.

You can get more at the source. And hopefully this won’t be Mike’s last foray into the store-o-sphere.
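Mike's definition reads as an economic one: a tier is worth distinguishing when it pays to move data to it. A toy Python sketch of tier selection under that reading; the tier names, costs, and latency figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_gb: float    # illustrative $/GB, not real pricing
    max_latency_ms: float

# Hypothetical tier table. Note that deduplication would apply across
# all of these, which is exactly why the post argues it isn't a tier.
TIERS = [
    Tier("flash", 10.0, 1.0),
    Tier("fc", 3.0, 8.0),
    Tier("sata", 1.0, 15.0),
]

def place(required_latency_ms: float) -> Tier:
    """Pick the cheapest tier that still meets the latency requirement."""
    candidates = [t for t in TIERS if t.max_latency_ms <= required_latency_ms]
    return min(candidates, key=lambda t: t.cost_per_gb)

print(place(10.0).name)  # -> fc
```

Data that can tolerate 10 ms lands on FC rather than flash, because FC is the cheapest tier that still meets the requirement; that cost/characteristics trade-off, not the hardware label, is what makes a tier a tier.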

