Storage Informer

Hardware-based Virtualization Built Into Next-Gen Nehalem-EX

by on Sep.25, 2009, under Storage

Hardware-based Virtualization Built Into Next-Gen Nehalem-EX

Intel Senior Fellow Stephen Pawlowski delivered a session at this week’s Intel Developer Forum (IDF) on Intel’s latest industry-standard, mission-critical platform codenamed Nehalem-EX. And one of the key topics of discussion? Intel® Virtualization Technology (Intel® VT).

Nehalem-EX offers scalability along with world-record virtualization performance, enabling the highest consolidation ratios of any industry-standard server. And as IT departments across the board move to lower costs while increasing hardware utilization, Intel has responded by enhancing its hardware-based virtualization technology.

With Nehalem-EX, Intel has created a feature that tags incoming data packets for the appropriate virtual machine (VM). The hardware then places those packets into hardware queues dedicated to a particular VM under the Virtual Machine Manager (VMM), and delivers them from each queue to its VM in arrival order before they are handed to the virtual machine. It’s Intel’s hardware that is making virtualization software perform even better.
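To illustrate the idea only, here is a conceptual Python sketch of per-VM packet steering. This is not Intel’s hardware design or any real driver interface; the VirtualNicSorter class, the vm_tag field, and the queue layout are invented for this example.

```python
from collections import deque

class VirtualNicSorter:
    """Conceptual stand-in for hardware that steers tagged packets into per-VM queues."""

    def __init__(self, vm_ids):
        # One in-order queue per VM, analogous to dedicated hardware queues.
        self.queues = {vm_id: deque() for vm_id in vm_ids}

    def enqueue(self, packet):
        # Assume each packet dict carries a hypothetical 'vm_tag' set on arrival.
        self.queues[packet["vm_tag"]].append(packet)  # arrival order preserved per VM

    def deliver(self, vm_id):
        # The VMM drains a VM's queue and hands packets to that VM in order.
        while self.queues[vm_id]:
            yield self.queues[vm_id].popleft()

# Example: packets tagged for different VMs end up in separate per-VM queues.
sorter = VirtualNicSorter(["vm-a", "vm-b"])
sorter.enqueue({"vm_tag": "vm-a", "payload": b"hello"})
sorter.enqueue({"vm_tag": "vm-b", "payload": b"world"})
print([p["payload"] for p in sorter.deliver("vm-a")])  # [b'hello']
```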

With broad industry support, and in an era that is increasingly moving toward the cloud, virtualization technology combined with energy-efficient performance and rich RAS features provides a reliable, scalable environment that IT departments can bank on.

Download Stephen Pawlowski’s IDF session (PDF, 1.94 MB): Nehalem-EXStevePawlowski_IDF.pdf

URL: http://feedproxy.google.com/~r/IntelBlogs/~3/d7edB274cck/hardware-based_virtualization.php

Integrating Target Deduplication with Backup Applications

by on Jul.10, 2009, under Storage

A bit of a Friday top 10 list here. What do you think is nirvana when it comes to integrating target deduplication devices with a backup application? What do you want?

Reply with your own ideas if you like. Reach as far as you want. I don’t care if it can’t be built in 6 months; I am interested in the ideal end state for this game. Tell me I am deluded if that suits your fancy.

What do you think?

Here is mine:

1) Complete catalog consistency with device replication.

2) Application-initiated device replication (policy-based initiation of replication activities; see the sketch after this list).

3) Ability to “export” replicated catalog and data across master servers (so I can usefully replicate to a separate master server’s geography).

4) Ability to use appliance as a repository for source deduplicated data.

5) Complete set of filters for all backup data sources (Microsoft, Oracle, DB2, SourceOne, etc.)

6) Many-to-one, one-to-many, star, and multi-hop replication, all directed by policy from your backup application.

7) Deduplication target embeds storage node/media server code so that it can directly mount storage (LAN-free backup).

8) Complete path to tape and export of deduplicated data to tape in deduplicated format (so that you would need 10 TB of tape to back up a 10 TB appliance).

9) Can be used as a target for LAN-free backup from vSphere, with policy direction from the backup application.

10) Open support for the target appliance as a target for NDMP from Celerra and NetApp, and BlueArc and anybody else who wants to join the fun.
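As a purely hypothetical sketch of what item 2 (with the topology options from item 6) could look like from the backup application’s side: the ReplicationPolicy and BackupApplication classes, field names, and trigger are invented for illustration, not any vendor’s API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplicationPolicy:
    """Hypothetical policy object for application-initiated replication (item 2)."""
    name: str
    source_appliance: str
    targets: List[str]                   # one-to-many / star fan-out (item 6)
    trigger: str = "on_backup_complete"  # policy-based initiation
    multi_hop: bool = False              # cascade replication to a further site (item 6)

@dataclass
class BackupApplication:
    policies: List[ReplicationPolicy] = field(default_factory=list)

    def on_backup_complete(self, appliance: str):
        # The backup application, not the appliance, decides when to replicate.
        for policy in self.policies:
            if policy.source_appliance == appliance and policy.trigger == "on_backup_complete":
                for target in policy.targets:
                    print(f"replicate {appliance} -> {target} (policy {policy.name})")

app = BackupApplication([
    ReplicationPolicy("dr-copy", "dedupe-01", ["dedupe-dr-01", "dedupe-dr-02"], multi_hop=True),
])
app.on_backup_complete("dedupe-01")
```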

URL: http://emcfeeds.emc.com/rsrc/link/_/integrating_target_deduplication_with_backup_app_729225689?f=84f8d580-01de-11de-22d1-00001a1a9134

Scale-out of XenApp on ESX 3.5

by on Mar.31, 2009, under Storage

VROOM!: Scale-out of XenApp on ESX 3.5

In an earlier posting (Virtualizing XenApp on XenServer 5.0 and ESX 3.5) we looked at the performance of virtualizing a Citrix XenApp workload in a 2-vCPU VM in comparison to the native OS booted with two cores. This provided valuable data about the single-VM performance of XenApp running on ESX 3.5. In our next set of experiments we used the same workload, and the same hardware, but scaled out to 8 VMs. This is compared to the native OS booted with all 16 cores. We found that ESX has near-linear scaling as the number of VMs is increased, and that aggregate performance with 8 VMs is much better than native.

We expected the earlier single-VM approach to produce representative results because of the excellent scale-out performance of ESX 3.5. This is especially true on NUMA architectures where VMs running on different nodes are nearly independent in terms of CPU and memory resources. However, the same cannot be said for the scale-up performance (SMP scaling) of a single native machine, or a single VM. As for many other applications, virtualizing many relatively small XenApp servers on a single machine can overcome the inherent SMP performance limitations of XenApp on the same machine.

In the current experiments, each VM is the same as before, except the allocated memory is set to 6700 MB (the amount needed to run 30 users). Windows 2003 x64 was used in both the VMs and natively. See the above posting for more workload and configuration details. Shown below is the average aggregate latency as a function of the total number of users. Every data point shown is a separate run with about 4 hours of steady state execution. Each user performs six iterations where a complete set of 22 workload operations is performed during each iteration. The latency of these operations is summed to get the aggregate latency. The average is over the middle four iterations, all the users, and all the VMs.
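To make the metric concrete, here is a minimal sketch of the aggregate-latency calculation described above. The array layout and the synthetic numbers are assumptions for illustration, not VMware’s harness or measured data.

```python
import numpy as np

# Assumed layout: latency[vm, user, iteration, operation] in seconds,
# with 6 iterations of 22 operations per user, as described above.
rng = np.random.default_rng(0)
latency = rng.uniform(0.5, 3.0, size=(8, 30, 6, 22))  # synthetic placeholder data

# Sum the 22 operation latencies to get one aggregate latency per iteration...
aggregate = latency.sum(axis=-1)        # shape: (vm, user, iteration)

# ...then average over the middle four iterations, all users, and all VMs.
avg_aggregate_latency = aggregate[:, :, 1:5].mean()
print(f"average aggregate latency: {avg_aggregate_latency:.1f} s")
```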

[Figure: Average aggregate latency vs. total number of users, for native and for 8 VMs on ESX.]

In both the Native and ESX cases all 16 cores are being used (although with much less than 100% utilization). At very low load Native has somewhat better total latency, but beyond 80 users latency quickly increases. Starting at 140 users some of the sessions start to fail, so 120 users is really the upper limit for running this workload on Native. With 8 VMs on ESX, 20 users per VM (160 total) was not a problem at all, so we pushed the load up to 240 total users. At this point the latency is getting high, but there were no failures and all of the desktop applications were still usable. The load has to be increased to more than 200 users on ESX before the latency exceeds that of 120 users on Native. That is, for a Quality-of-Service standard of 39 seconds aggregate latency, ESX supports 67% more users than Native. Like many commonly deployed applications, XenApp has limited SMP scalability. Roughly speaking, its scalability is better than that of common web apps but not as good as that of well-tuned databases. When running this workload, XenApp scales well to 4 CPUs, but 8 CPUs is marginal and 16 CPUs is clearly too many. Dividing the load among smaller VMs avoids SMP scaling issues and allows the full capabilities of the hardware to be utilized.
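The 67% figure is simple arithmetic on the two user counts quoted above at the 39-second threshold:

```python
# Users supported at the 39 s aggregate-latency threshold (values quoted above).
native_users = 120
esx_users = 200
print(f"ESX supports {(esx_users - native_users) / native_users:.0%} more users")  # 67%
```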

Some would say that even 200 XenApp users are not very many for such a powerful machine. In any benchmark of this kind many decisions have to be made with regard to the choice of applications, operations within each application, and amount of “user think time” between operations. As we pointed out earlier, we strove to make realistic choices when designing the VDI workload. However, one may choose to model users performing less resource-intensive operations and thus be able to support more of them.

The scale-out performance of ESX is quantified in the second chart, which shows the total latency as a function of the number of VMs with all VMs running a low (10 users), medium (20 users), or high (30 users) load. Flat lines would indicate perfect scalability, and the lines are in fact nearly flat for each of the load cases up to 4 VMs. The latency increases noticeably only for 8 VMs, and then only for higher loads. This indicates that the increased application latency is mostly due to the increased memory latency caused by running 2 VMs per NUMA node (as opposed to at most a single VM per node for four or fewer VMs).
[Figure: Total latency vs. number of VMs at low (10), medium (20), and high (30) users per VM.]

While our first blog showed how low the overhead is for running XenApp on a single 2-vCPU VM on ESX compared to a native OS booted with two CPUs, the current results fully utilizing the 16 core machine are even more compelling. These show the excellent scale-out performance of ESX on a modern NUMA machine, and that the aggregate performance of several VMs can far exceed the capabilities of a single native OS.

URL: http://blogs.vmware.com/performance/2009/03/scaleout-of-xenapp-on-esx-35.html
