Storage Informer

Tag: SSD

The One Million IOPS game

by on Sep.26, 2009, under Storage


A few months back I saw a press release on Reuters from Fusion-io and HP claiming to hit 1 million IOPS with a combination of five 320GB ioDrive Duos and six 160GB ioDrives in an HP ProLiant DL785 G5, a four-socket server with four cores per socket, for a total of 16 cores. I thought, wow, that is amazing; a million IOPS is something any DBA running a high-performance database would love to get their hands on. But when I did a quick search on the Internet to see how affordable the solution would be, I was horrified: the cost was close to that of a couple of Mercedes E-Class sedans. Although the performance was stellar, the cost and the 2KB chunk size made me ask which application does 2KB reads/writes anyway; the default Windows allocation unit is 4KB.

As time went by I got busy with other work, until our NAND Storage Group told us they were coming up with a PCIe-based product concept to show a real 1 million IOPS at the 4KB block size that real-world applications actually use. This triggered the thought: what does it take to achieve 1 million IOPS using generally available off-the-shelf components? I hit my lab desk to find out.

Basically, getting a million IOPS depends on three things:

1. Blazing-fast storage drives.
2. Server hardware with enough PCIe slots and good processors.
3. Host bus adapters capable of handling the significant number of IOPS.

Setup: Intel Solid State Drives were my choice; a lot has been discussed and written about the performance of Intel SSDs, and that made them an easy choice. I selected Intel X25-M 160GB MLC drives built on a 34nm process. These drives are rated for 35K random 4KB read IOPS and seemed like a perfect fit for my testing. Then I started searching for the right dual-socket server: the Intel® Server System SR2625URLX, with five PCIe 2.0 x8 slots, provided enough room to connect HBAs. The server was configured with two Intel Xeon W5580 processors running at 3.2GHz and 12GB of memory. The search for an HBA ended when LSI showed their 9210-8i series (code-named Falcon), rated to perform 300K IOPS. These are entry-level HBAs that can connect up to eight drives to their eight internal ports. Finally, I had to house the SSDs somewhere in a nice-looking container, and a container was necessary to provide power to the drives. I zeroed in on the Super Micro 2U SuperChassis 216 SAS/SATA HD BAY, which came with dual power supplies and no board inside, but it let me simply plug the drives into the backplane without worrying about powering them. The other interesting thing about this chassis is that it comes with six individual connectors on the backplane, so each connector handles only four drives. This is very different from active backplanes, which route the signal across all the drives connected to them; it allowed me to connect exactly four drives per port on the HBA. I also had to get a four-slot disk enclosure (just some unnamed brand from a local shop); in total I had the capability to connect 28 drives.
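As a back-of-the-envelope check on the component count, the rated numbers above can be turned into minimum drive and HBA counts. This is a hypothetical sizing helper, not part of the actual test setup:

```python
import math

def units_needed(target_iops: int, iops_per_unit: int) -> int:
    """Minimum number of units if each delivers its full rated IOPS."""
    return math.ceil(target_iops / iops_per_unit)

# Ratings quoted above: X25-M ~35K random 4KB read IOPS,
# LSI 9210-8i rated for ~300K IOPS.
print(units_needed(1_000_000, 35_000))   # 29 drives at exactly the rating
print(units_needed(1_000_000, 300_000))  # 4 HBAs at exactly the rating
```

In practice 28 drives were enough, since the drives can run slightly above the 35K rating on pure 4KB random reads, and five HBAs were used rather than the minimum four so no adapter ran near its 300K ceiling.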
With all the hardware in place, I installed Windows Server 2008 Enterprise Edition and Iometer (an open-source tool for testing I/O performance). Two HBAs were fully populated, using all eight ports; the other three HBAs were populated on only four ports each. The drives were left without a partition. Iometer was configured with two manager processes and 19 worker threads: 11 on one manager and 8 on the other. 4KB random reads were selected, with sector alignment set to 4KB, and Iometer was set to display the last update on the results screen.
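For readers without Iometer handy, the access pattern used here (4KB random, block-aligned reads) can be mimicked in a few lines of Python. This is a toy single-threaded stand-in for one worker, with hypothetical helper names; without O_DIRECT-style unbuffered I/O, the OS page cache will inflate the numbers on a cached file:

```python
import os
import random
import time

def random_read_iops(path: str, block: int = 4096, duration: float = 2.0) -> float:
    """Issue 4KB random, block-aligned reads against a file and return IOPS.

    A toy stand-in for one Iometer worker thread; reads go through the
    OS cache, so treat the result as illustrative only.
    """
    blocks = os.path.getsize(path) // block
    # O_BINARY exists only on Windows; default to 0 elsewhere.
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    ios, deadline = 0, time.monotonic() + duration
    try:
        while time.monotonic() < deadline:
            # Seek to a random block-aligned offset, then read one block.
            os.lseek(fd, random.randrange(blocks) * block, os.SEEK_SET)
            os.read(fd, block)
            ios += 1
    finally:
        os.close(fd)
    return ios / duration
```

Iometer spreads many such workers across drives and keeps multiple IOs outstanding per worker, which is what lets the SSDs reach their rated parallelism.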

Result: Once the test started with 24 drives, I found I was a few thousand IOPS short of 1M, so I used the four-bay enclosure to connect another 4 SSDs, taking the total to 28. The server then delivered a sustained million IOPS, with an average latency of 0.88 ms and 80-85% CPU utilization.

Conclusion: We recently demonstrated this setup at Intel Developer Forum 2009 in San Francisco, where it grabbed the attention of many visitors.

Next Steps: I will spend some time getting this setup running with a RAID configuration, and possibly use a real-world application to drive the storage. That needs a lot of CPU resources, and I have in mind one upcoming platform from Intel that will let me do it. I will come up with follow-up experiments.

-Bhaskar Gowda
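As a sanity check, Little's Law (average IOs in flight = throughput × latency) ties the measured numbers together. The helper below is purely illustrative:

```python
def ios_in_flight(iops: float, latency_s: float) -> float:
    """Little's Law: average number of outstanding IOs = throughput x latency."""
    return iops * latency_s

# 1M IOPS at 0.88 ms average latency implies ~880 IOs in flight,
# i.e. roughly 31 outstanding IOs per drive across the 28 SSDs.
print(round(ios_in_flight(1_000_000, 0.00088)))
```

That per-drive queue depth is consistent with needing many workers issuing deep, overlapped I/O to keep every SSD busy.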



VMworld 2009 – Day 2, keynote by Steve Herrod

by on Sep.02, 2009, under Storage




Welcome to the liveblogging of the VMworld 2009 keynote, day 2.

Today’s keynote is expected to bring us a bit more technical content than yesterday. The main speaker is Dr. Stephen Herrod, VMware CTO and one of the original developers of SimOS while at Stanford. VMware engineers are expected to be joining Herrod on stage to demonstrate upcoming features and products.

Note: this is a rush transcript. Check back later for analysis.

Ready for the VMworld 2009 keynote by VMware CTO Stephen Herrod. Follow for the play-by-play.
The keynote is starting..
Steve Herrod takes the stage. "This is probably my favourite hour of the year."
Focus today will be on the desktop, PCoIP, … View is about enabling a new sort of desktop concept: user-centric rather than device-centric.
+1M virtual desktops out there.
Lots of development in vSphere was geared towards hosting desktops. Tuned for Vista and Win 7 workloads.
Intel Nehalem / Xeon 5500 allows up to twice the number of desktops per core.
New evolutions at the storage level as well. SSD disks allow much better user experiences.
The goal is to provide best user experience to all endpoints: thin, thick clients, over WAN, LAN, …
The best experience is still the local, rich, portable desktop. The same image that’s used on other endpoints can be checked in and out.
Offline replication of desktops so to speak.
PCoIP, purpose-built for desktop virtualization. Developed with Teradici. Coming later this year.
Vendors can create hardware-accelerated clients for an even better experience.
New concept "in this economy": employee-owned IT. Provide the user with a certain budget, let them choose the hardware (Mac/PC/..) and host the corporate desktop through Fusion/Workstation/Player/…
For corporate-owned IT: bare-metal virtualization. Co-development with Intel. Laptop/desktop hypervisor.
Single hardware-agnostic desktop build can run on any type of x86 device.
VMware View demo on stage. "A day in the life of a VMware View user."
Windows 7 running on laptop. Showing device manager. The rich environment is running on a virtual SVGA gpu -> type 1 hypervisor.
Next demo: thin client with smooth graphics. Demo with Google Earth.
Opening the same VM on his "beige box" desktop. Next: demo with WYSE pocket client on iPhone to access the same VM.
Next: VMware Mobile strategy. Not only use the phone as a thin client, but put a hypervisor on the phone as well.
Phones become mobile pc’s, bring along the traditional pc challenges. (security, access control, data leakage protection, ..)
Creating corporate mobile vm’s allows the same device freedom desktop virtualization allows.
Lots of app stores coming up, tied to specific types of devices, even though they’ve got the same base platform.
(As if Apple is ever going to approve a hypervisor app that can run arbitrary code…)
Peter Ciurea, Global Head of Product Development at Visa comes on stage.
Demo with development ARM-based phone. Hypervisor + Windows CE on top.
Visa app demo with alerts (every time a Visa card is swiped, the device alerts you), offers (mobile coupons, ..) and location based offers.
The Visa app was developed for Android. Android is running next to Windows CE. The OS has access to the gps functions, has smooth graphics.
Moving back to ‘big iron’. vSphere enables the software mainframe. Up to 32TB of RAM.
If you were born before ‘75 you’d call this a mainframe. If you were born after, you’d call it the Cloud. Let’s call it a giant computer.
VMotion is the foundation of this giant computer.
This is about the 6th anniversary of the first VMotion demo. Music: "I like to move it" from Madagascar. You'll never VMotion again without thinking about that song.
VMotion was around when Friends was still doing new episodes. The first VMworld was 10 months away. Loooong time ago.
Probably about 359 million VMotions to date. People use VMotion constantly. Saves time, money, and marriages.
VMotion breadth continues to grow. Storage VMotion, Network VMotion (distributed switch) and long distance VMotion.
Focus of vSphere: support all major applications (SAP, Oracle, SQL, …). Even HPC workloads are moving to virtualization.
vSphere allows better-than-physical scalability.
DRS allows higher peak capacity. Achieves 96% efficiency of ideally placed VMs. Extending DRS to include I/O.
DPM: VMotion for global power optimization. ("Constantly defragging the datacenter")
Automating the datacenter through VMware AppSpeed allows better application performance guarantees.
Built into vSphere: vApp. Built on OVF. Create collections of self-wiring VMs and add SLA metadata.
Control security and compliance via VMsafe APIs. Virtualization lets you look into VMs and check every instruction.
Always-on security; users cannot turn agents off.
Last thing: manage configuration integrity at scale with vCenter ConfigControl.
First public demo on stage of ConfigControl. Dashboard to monitor and manage changes in IT environments.
Scenario: help desk ticket, Exchange server is down. ConfigControl sent alert that network policy was changed. Helps debug system problems.
ConfigControl interface is web-based.
ConfigControl will ship in 2010 H1.
A bit more detail of the VMworld datacenter. Running 37,248 VMs. From 25 Megawatts to 540 Kilowatts of power usage. 778 physical servers.
For the geeks out there who didn't know it: vSphere can host vSphere. The labs run nested ESX "virtual virtual datacenters".
Next chapter: cloud cloud cloud.
Work being done around network connectivity between clouds, transparent storage migration, easy management and policy-enforcement.
Connectivity example: work started in Site Recovery Manager.
Policies are exchanged between multiple datacenters. IP address management is automated through vCenter.
Cisco enables long distance VMotion through Data Center Interconnect.
Datacenter extension up to 200km.
F5: BIG-IP Global Traffic Manager enables long-distance VMotion.
Moving on to the vCloud API, which provides programmatic access to resources. Enables self-service portals, vCenter client plugins (managing VMs in 3rd-party datacenters), and ISV integrations: 3rd-party management, SaaS deployments, ..
vCloud API was submitted to DMTF for industry certification.
End goal: differentiated cloud offerings. (Demo cloud, High end SLA, green cloud, high performance, …)
Fourth pillar (next to View, vSphere, vCloud): vApps. Auto-pilot applications. Reason for SpringSource applications.
Automating the application development space will bring down maintenance and deployment costs.
Virtualizing hardware simplifies deployment. VMware wants to simplify development through application frameworks that work together with VMs.
A virtualization-aware platform can create self-healing and self-scaling applications that interact with the hypervisor to manage hardware building blocks.
Goal is to interface with not only Java frameworks, but also with RoR, Django, .net, PHP, …
Adrian Colyer, SpringSource CTO on stage.
Another try to explain what this all does to an audience of mainly server infrastructure folks.
And that finishes the keynote…
Thanks for following. Check back later for more news and analysis.

Follow us on Twitter. Backstage news: @lode – Breaking news/keynotes: @vmlive



Intel Solid State Drives – SOLID!

by on Sep.02, 2009, under Storage

Intel Solid State Drives – SOLID!

This video gives a whole new meaning to ‘durable computing’…

and for those of you who don’t like math… 70C is 158F (that’s HOT!)

You can read up on SSD reviews at: Anandtech, Tom’s Hardware Guide, Legit Reviews, PC Perspective, and more!


