Benefits of Virtualization

I’ll admit it. I was skeptical about the virtualization hype. I am not a big fan of adding pointless overhead (running X on a server machine, for example). The products I had played with were early ones: Xen, QEMU, and the like. They, frankly, sucked.

About half a year ago, though, I got a job at a consulting firm that used VMware GSX virtualization extensively. It was a trial by fire and I learned a lot; during that time ESXi became free-as-in-beer, so we moved to that. What I learned, from a big-picture point of view, has been enormously helpful.

Now that I am talking to a lot of companies about their server architecture, I am seeing a lot of the same resistance to virtualization I once had. These days I think it is a pretty misguided viewpoint, so I thought I’d post the compelling benefits I have found in the virtualization approach.

Separation of function

It’s good practice to have separate, purpose-built servers for different functions. That way, a reboot or an OS bug can only take down one service. Sadly, in the real world, the need for services usually vastly outstrips the budget for hardware and the available rack space, so this “best practice” becomes a “pipe dream” and, someday, a “legacy nightmare”. Needing to reboot a server becomes a big problem when you have to plan an outage for several services at once. With virtualization, these problems largely disappear: an OS update on a guest can happen independently of the other services hosted on the same physical hardware. An update to the host OS still causes a global outage, but at least you know that if the VMs come back up properly, you don’t need to test every single service extensively.

Resource pooling

Single-service servers use capacity in different ways: my NFS server has RAM and CPU lying idle, my DNS server has RAM and disk I/O lying idle, and so on. This, combined with the aforementioned budget shortfalls, tends to lead to servers doing three or four jobs each. With virtualization you keep the benefits of separate servers while also efficiently utilizing your hardware. Under ESXi this all happens very fluidly, and you can portion out resources with certain amounts reserved for individual virtual servers.
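As a rough sketch, those per-guest reservations correspond to scheduler options in each VM’s .vmx file. The option names below are from my own notes and the values are made up, so verify them against VMware’s documentation before relying on them:

```
# Hypothetical .vmx fragment: reserve a floor of CPU and RAM for one guest
sched.cpu.min = "500"        # minimum CPU for this VM, in MHz
sched.cpu.shares = "normal"  # relative priority when CPU is contended
sched.mem.min = "512"        # minimum RAM for this VM, in MB
sched.mem.shares = "normal"
```

The same settings are exposed through the graphical management client, which is where I actually change them day to day.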


Redundancy

Redundancy is always a good thing. Unfortunately, it usually means yet another box eating power and space. When you multiply the redundancy by the single-service servers, you very quickly get a lot of boxes. Consider that for a little more in initial outlay, one box can host five or six virtual servers. Two such boxes provide redundancy while simultaneously increasing computing density; that can condense half a rack into 2U! You get the benefits of putting multiple services on one server, and the benefits of single-service servers, without the major problems of either approach.
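To put numbers on that (the counts here are illustrative, not taken from any real deployment), compare twelve single-service 1U boxes against a redundant pair of 1U virtualization hosts running six guests each:

```shell
# Illustrative consolidation arithmetic; every number here is made up.
physical_boxes=12   # one 1U box per service, before virtualization
hosts=2             # redundant pair of 1U virtualization hosts
vms_per_host=6      # guests each host comfortably runs

echo "rack units before: $physical_boxes"
echo "rack units after:  $hosts"
echo "guest capacity:    $((hosts * vms_per_host))"
```

Since either host can absorb the other’s guests in a pinch, you keep the redundancy while going from 12U of hardware down to 2U.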

Reboot speed

VMs don’t have to spin up hard drives or wait for other hardware to respond, so they can reboot VERY quickly; my average Ubuntu server does a full reboot in under 3 minutes. This is kind of a big deal to me, because any work that affects how services start or stop, or touches the boot process at all, is not done until a hands-off reboot has worked perfectly. All of this can be done through the control software on the virtualization host, which leads into the next point…

Look Ma, no hands!

A lot of the work I do is remote; I have not laid hands on a server keyboard in months. With virtualized OSes, anything I would otherwise have to do physically is usually handled through software. I can reboot a VM and have full control of it, including inserting an ISO into its virtual CD-ROM drive and reinstalling the OS. Virtualization control software is effectively the KVM-over-IP solution every admin wants and every manager doesn’t want to buy. And because of the simplified OS running on the host machines, I rarely need to worry about their boot process.
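For instance, pointing a guest’s virtual CD-ROM at an installer image boils down to a couple of lines in its .vmx file (the device name and datastore path below are illustrative, not from any real setup):

```
# Hypothetical .vmx fragment: mount an installer ISO in the virtual CD-ROM
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "/vmfs/volumes/datastore1/ubuntu-8.04-server-i386.iso"
```

In practice the management client does this for you with a file picker, which is exactly the “insert the disc” step I used to walk to the machine room for.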

Portability and Replication

Moving a virtual machine to another physical box is as easy as copying a file. If a piece of hardware is failing, I can shuffle all the virtual machines to another box while we take down the failing one and fix the hardware issue. You don’t have to worry about hardware compatibility, because the hardware the guest OS sees is precisely the same. With sufficient excess server capacity, this can make even RMA’ing an entire server pretty painless. The same procedure can be used to create a new machine: cloning the virtual hard drive to a new VM and changing a few settings (hosts, hostname, ifconfig) is all that’s required. If you wanted to get really fancy, you could set up DHCP so that even this small amount of work goes away.
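The post-clone fixups can be sketched in a few lines of shell. The hostnames, addresses, and file layout below are hypothetical (Debian/Ubuntu-style), and the sketch edits scratch copies of the files rather than a live /etc, so it is safe to run anywhere:

```shell
# Hypothetical post-clone personalization, run against scratch copies
# of the config files a real clone would need edited.
ETC=$(mktemp -d)
printf 'web01\n' > "$ETC/hostname"
printf '127.0.1.1 web01\n' > "$ETC/hosts"
printf 'iface eth0 inet static\n  address 10.0.1.10\n' > "$ETC/interfaces"

OLD_NAME=web01 NEW_NAME=web02
OLD_IP='10\.0\.1\.10' NEW_IP='10.0.1.11'

# Swap in the new identity everywhere the old one appears.
sed -i "s/$OLD_NAME/$NEW_NAME/g" "$ETC/hostname" "$ETC/hosts"
sed -i "s/$OLD_IP/$NEW_IP/" "$ETC/interfaces"

cat "$ETC/hostname"
```

On a real clone you would point the edits at /etc and follow up with a reboot, but the substance really is just this handful of substitutions.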


Snapshots

With sufficient excess drive space, you can take a snapshot of the OS at a given moment in time, then roll back to that snapshot or discard it later. Ever installed a source package that messed up the whole server? Hit the “Revert to snapshot” button and it’s gone. Ever wanted to try ‘rm -rf /’? Feel free :) The same principle can be used for a “read-only” virtual hard drive: the OS thinks the disk is writable, but all changes are discarded at reboot. From a security standpoint, this can be a huge benefit outside your development and staging environments.
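That discard-at-reboot behavior corresponds to a disk mode setting on the virtual drive. In VMware’s .vmx format it looks roughly like the fragment below (the device name is illustrative; check the current docs for your version):

```
# Hypothetical .vmx fragment: independent-nonpersistent mode throws away
# all writes to this disk when the VM powers off or reverts
scsi0:0.mode = "independent-nonpersistent"
```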


Network separation

One of our clients uses identically numbered RFC 1918 space in separate VLANs, so you can effectively move a machine from testing to production just by changing which VLAN its virtual NIC is in. That is a simple dropdown setting, and it can be changed while the machine is running. In our environment, we use one VLAN and one /24 of RFC 1918 space per customer, with VPNs that give their staff access only to shared resources and their own virtual machines. This keeps customers completely separate from one another, even though they may share the same physical hardware.
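Under the hood, that dropdown amounts to reattaching the virtual NIC to a different port group, which in a .vmx file is a one-line change (the port group names here are made up for illustration):

```
# Hypothetical .vmx fragment: each port group maps to one VLAN,
# so moving testing -> production is just renaming this attachment
ethernet0.networkName = "customer-a-production"
```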

The Downsides

Obviously, nothing good comes without a price, but the main downsides I have seen are easily avoidable. The big ones are timekeeping (virtual processors act a little differently than physical ones), resource exhaustion, and I/O contention. Some of these problems come from the hardware being hidden from the guest OS: it can’t optimize disk access, for example, because it does not realize how overworked the underlying hardware is. Some have simply gotten better over time; the disk access layer in GSX had a lot of overhead, but most of that was resolved in ESXi. A lot of pitfalls can also be avoided by using the integration packages (VMware Tools, for example) that let the guest OS know it is virtualized and feed it information from the virtualization host. These tools also add usability niceties, like letting the host initiate a graceful shutdown of the guest. There is a learning curve too, but it really is not too bad, and the technology is interesting enough that it shouldn’t be a dealbreaker.
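As one example of what the integration package helps with, guest clock synchronization can be toggled in the VM’s configuration. The option below is from memory, so treat it as an assumption and verify it against VMware’s documentation for your release:

```
# Hypothetical .vmx fragment: let VMware Tools keep the guest clock
# in step with the host's clock
tools.syncTime = "TRUE"
```

Whether you want this on depends on whether the guest also runs NTP; running both at once is a classic way to make the timekeeping problem worse, not better.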

Obviously, my view has changed with regard to this technology. It is now not only viable, but truly compelling. It’s one of those technologies you never knew you needed until you started using it, like wifi or cell phones. These days, I see bare-metal installs and wonder how soon I can virtualize them.

December 6, 2008 • Posted in: Linux, VMware
