Is a unified datacenter virtualization model practical?

Do we really want all datacenter hardware to be commodity items?
Written by David Chernicoff, Contributor

The vision of sitting down at a single console and provisioning, configuring, and deploying every component of your datacenter is an appealing one. An enormous amount of time is spent on these tasks across the different technologies in the datacenter, and consolidating and simplifying them, to the point where rolling out new application servers or configuring new network resources can be done quickly, would reduce datacenter overhead significantly.

But while we talk about and implement different virtualization technologies on a regular basis, it's important to note that the underlying hardware technologies, though they get much less high-profile attention, also change fairly rapidly. What worked best to virtualize resources this month may not be the best technique next month. A generic virtualization strategy is the equivalent of leaving money on the table; tuning the virtualization layer to the underlying hardware is the only way to maximize the efficiency and value of your virtualization strategy.

A recent example of this is the latest generation of Xeon processors. Because of changes such as PCIe 3.0 and the way the network interface changes on servers built with the new CPUs and motherboards, you get a cascade effect across your datacenter that impacts servers, networking, and storage at the hardware connection level, as well as the management of all of these devices.

Replacing existing servers with the new hardware and keeping your current virtualization strategy would allow your datacenter to continue operating, gaining whatever benefits can be derived in your environment from the basic improvements in CPU performance that the latest generation brings. But presuming your networking infrastructure can support it, the network interface on those latest-generation servers is now 10 GbE, rather than the 1 GbE of the servers they replaced. This means that to maximize the performance of those servers, your networking infrastructure has to be optimized to take advantage of the higher-bandwidth interface.
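
To make that concrete, here is a minimal sketch, not a definitive tool, of how you might audit whether a server's uplinks actually negotiated 10 GbE before assuming the rest of the network can be tuned around them. It assumes Linux hosts that expose negotiated link speed through sysfs, and the 10,000 Mb/s target is an illustrative threshold chosen for this example, not a figure from the article.

# Minimal sketch: report the negotiated link speed of each NIC on a Linux
# host by reading sysfs, and flag interfaces still linked below 10 GbE.
# The sysfs path is standard Linux; the 10,000 Mb/s target is an assumption
# made for illustration.

import os

SYSFS_NET = "/sys/class/net"
TARGET_MBPS = 10000  # 10 GbE, the interface speed discussed above


def link_speeds():
    """Yield (interface, speed in Mb/s) for every interface that reports one."""
    for iface in sorted(os.listdir(SYSFS_NET)):
        speed_file = os.path.join(SYSFS_NET, iface, "speed")
        try:
            with open(speed_file) as f:
                speed = int(f.read().strip())
        except (OSError, ValueError):
            # Loopback, down links, and many virtual devices do not report a speed.
            continue
        if speed <= 0:
            continue
        yield iface, speed


if __name__ == "__main__":
    for iface, speed in link_speeds():
        status = "OK" if speed >= TARGET_MBPS else "below target"
        print(f"{iface}: {speed} Mb/s ({status})")

Run across a rack, a report like this makes the cascade visible: a new server that links at only 1 GbE points to a switch port, cable, or transceiver that has to change before the faster interface delivers any benefit.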

So depending upon the physical architecture of your switches and how you are currently connecting servers, you may need to make physical changes to cabling. Even if you don't, the virtual changes available to you may be limited by the way your other servers and devices are currently connected and routed. Optimizing your new server connections to maximize their performance may therefore require reconfiguring the connections of other devices, either physically or virtually.

And this process cascades through your network, before even taking into consideration the other potential advantages of your new servers. A single, one-size-fits-all virtualization model would never allow you to achieve full optimization as your physical infrastructure changes.

As standards progress and more vendors adopt standardized virtualization models for specific devices, you would think that eventually you would be able to plug and play any device. But the nature of innovation is that if you commoditize everything, treating the hardware as completely interchangeable and making the virtualization and management layer the sole focus of your interest, you also stifle innovation on the hardware side. Commodity-market innovation tends to focus on reducing costs to improve profitability, not on delivering new capabilities to the buyer.
