
Overview of Xen Virtualization

Published: November 24, 2008

Many people start looking at virtualization technologies because of the promise of doing more with less and the high-availability solutions the technology offers. There are, however, many other benefits to virtualizing your data center, which are explored below. We will focus mainly on the free, open-source offering of Xen. This post closely follows a presentation I gave at the local Classic Hackers UGA Linux Users Group (CHUGALUG) in Athens, Georgia.

Slides: “Xen Virtualization 2008” (SlideShare presentation)

The Main Players

VMWare is the most mature of the popular offerings. It was (one of) the first to fully virtualize the Windows operating system, and it has been steadily fine-tuning and optimizing its implementation while expanding further into the enterprise than anyone else. With its far more mature VMotion for delivering high-availability features, VMWare currently (Q4 2008) has a leg up in terms of product offering. It has a product for nearly every niche, from desktop virtualization for Windows, Mac, and Linux to a complete enterprise infrastructure line offering comprehensive business-continuity solutions along with the tools to migrate and convert your physical and virtual images up the platform chain.

Xen is the leading open-source virtualization solution available. XenSource, the leading firm providing sponsorship and stewardship of Xen, was recently acquired by Citrix, Inc., long a player in thin-application (terminal server) delivery of the Windows platform. Xen is variously offered and supported by Citrix, Oracle, IBM, SuSE, Sun, Red Hat, and numerous others.

Microsoft Hyper-V (formerly Viridian) is, of course, Microsoft’s entry into the virtualization market and will become much more prevalent after the Windows Server 2008 editions launch. Hyper-V is still in beta at the time of this writing and is expected to emerge from beta within six months of Windows Server 2008 General Availability (GA).

Virtualization Technology

There are two terms to get familiar with when you start assessing virtualization solutions: “fully-virtualized” and “para-virtualized” (Microsoft uses the term “enlightened” for the latter in its Hyper-V documentation). A fully-virtualized OS image is one in which all hardware access is fully protected and/or emulated for the guest operating system. This means an unmodified OS and kernel run in the guest instance, and the guest OS typically has no awareness that it is operating as a virtualized instance. A para-virtualized OS image is one in which a special kernel is deployed for the guest OS, typically giving the guest direct access to many of the hardware components and drivers that the host OS itself utilizes. These para-virtualized guests are “aware” of their virtualization state and can communicate and coordinate with the other guest OS’s running on the host in order to safely share hardware resources.
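
To make the distinction concrete, here is a sketch of the two guest styles as Xen 3.x domain configuration files (which are themselves evaluated as Python). All names, paths, and values are illustrative assumptions, not a tested setup:

```python
# /etc/xen/guest-pv.cfg -- a para-virtualized guest (hypothetical example).
# The guest boots a Xen-aware kernel supplied from the host's filesystem.
kernel  = "/boot/vmlinuz-2.6.18-xen"        # assumed path to a Xen-enabled kernel
ramdisk = "/boot/initrd-2.6.18-xen.img"     # assumed matching initrd
name    = "guest-pv"
memory  = 512                               # MB of RAM granted to the guest
disk    = ["phy:/dev/vg0/guest-pv,xvda,w"]  # paravirtual block device
vif     = ["bridge=xenbr0"]                 # paravirtual network interface

# /etc/xen/guest-hvm.cfg -- a fully-virtualized (HVM) guest (hypothetical example).
# Requires Intel-VT or AMD-V; an unmodified OS boots from emulated hardware.
builder      = "hvm"
kernel       = "/usr/lib/xen/boot/hvmloader"     # firmware loader, not a guest kernel
device_model = "/usr/lib/xen/bin/qemu-dm"        # QEMU process emulates the devices
name         = "guest-hvm"
memory       = 1024
disk         = ["phy:/dev/vg0/guest-hvm,hda,w"]  # emulated IDE disk
vif          = ["type=ioemu,bridge=xenbr0"]      # emulated NIC
```

The PV guest names a kernel on the host, while the HVM guest hands control to hvmloader and a QEMU device model, which is exactly the unmodified-OS case described above.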

CPU Technology

Both Intel and AMD have gotten into the virtualization game by offering new instruction sets on their respective CPUs that provide hardware-assisted virtualization support. These hardware-optimized instruction sets enable very efficient hypervisor implementations by letting virtualization technologies offload the oft-complex resource scheduling and sharing logic to the processors themselves. The two technologies are called, respectively, Intel-VT (Virtualization Technology) and AMD-V (Virtualization). Because of its approach to hypervisor technology, VMWare traditionally makes the least use of these new instruction sets: it already claims to have the most highly optimized “trap and translate” and the most mature hardware management, allowing it to run fully-virtualized (unmodified) guest OS’s without them. Xen and Hyper-V, on the other hand, rely heavily on these new instruction sets and consequently have much “thinner” hypervisors, since neither has to get into the business of trapping unsafe instructions and rewriting them (as VMWare does) to offer safe and reliable side-by-side hosting of OS’s. The bottom line: VMWare does not require the special instruction sets while Xen and Hyper-V do. The one caveat is that, because of some design decisions Intel made around memory management in its 64-bit instruction set, VMWare does require Intel-VT to host 64-bit guests; however, it uses only a small subset of the VT instruction set as it concerns memory, not for managing safe access to hardware resources (so yes, it does get confusing!). Please see the table on slide #5 of the above presentation for a requirements matrix regarding these CPU instruction sets.
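
On a Linux host you can check for these instruction sets by looking for the vmx (Intel-VT) or svm (AMD-V) flags in /proc/cpuinfo; a minimal Python sketch (note the flag only reports what the CPU advertises, and the feature can still be locked off in the BIOS):

```python
# Report which hardware-virtualization extension, if any, the CPUs advertise.
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel-VT"      # Intel's VT-x extensions
    if "svm" in flags:
        return "AMD-V"         # AMD's virtualization extensions
    return None

if __name__ == "__main__":
    print(hw_virt_support() or "no hardware virtualization support detected")
```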

The Benefits of Virtualizing

I believe most people tend to get interested in virtualization technology when they hear the pitches for the high-availability (HA) solutions that the technology makes almost trivially simple to implement. High availability is a catch-all describing a class of services that allow a guest OS to remain up and running, with no apparent downtime, should a physical host server go down. VMWare implements this with VMotion, while Citrix XenServer calls its solution “Live Migration.” However, these HA solutions require a considerable investment in licensing and may not be fully justifiable for all IT shops.
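
On the Xen side, triggering a live migration is a single toolstack command; here is a minimal sketch wrapping Xen 3.x’s xm migrate --live with Python’s subprocess (the guest and host names are hypothetical, relocation must be enabled in xend’s configuration, and both hosts need shared access to the guest’s storage):

```python
import subprocess

def live_migrate(guest, target_host):
    """Relocate a running Xen guest to another host with minimal downtime.

    Assumes the Xen 3.x 'xm' toolstack, the xend relocation server enabled
    on the target, and shared storage (e.g. iSCSI) visible to both hosts.
    """
    subprocess.check_call(["xm", "migrate", "--live", guest, target_host])

live_migrate("guest-pv", "xenhost2.example.com")  # hypothetical names
```

HA aside, there are a number of other benefits that a data center can gain from a virtualization platform, and they can be considerably more valuable than high availability: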

Increase Hardware Utilization

Probably the chief advantage even the most basic virtualization deployment can offer is simply increasing your CPU and RAM utilization. Virtualization takes advantage of the fact that most services on a server are not running at full CPU and RAM capacity 100% of the time, so it is very possible to get a perceived 100% of bare-metal performance even with 3 or 4 (or more) guest OS’s running on one physical machine. One of the most astounding discoveries I made using virtualization technologies came from implementing EnterpriseDB GridSQL in a virtual environment. GridSQL works much as the name implies: a SQL query is farmed out to all participating nodes in a grid and serviced in parallel, with the results compiled into a final result by the master node on the grid. I found that the DBMS simply wasn’t taking full advantage of all the CPU cores and RAM when running on bare metal, so I tried running two or three slave nodes within guest OS’s on the same physical server. It turned out I could get nearly linear improvement in query performance and much higher utilization of all CPU cores and available RAM.
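
To see how a host’s cores and RAM are actually spread across its guests, you can poll the hypervisor directly. Below is a small sketch using the libvirt Python bindings (assuming libvirt is built with its Xen driver; the connection URI and output format are illustrative choices):

```python
import libvirt  # assumes the libvirt Python bindings with Xen support

conn = libvirt.open("xen:///")  # connect to the local Xen hypervisor

# dom.info() returns [state, maxMemKB, memKB, nrVirtCpu, cpuTimeNs].
for dom_id in conn.listDomainsID():
    dom = conn.lookupByID(dom_id)
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print("%-20s vcpus=%d mem=%dMB cpu=%.1fs"
          % (dom.name(), vcpus, mem // 1024, cpu_time / 1e9))
```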

Separate Service from Hardware

Another very beneficial advantage to virtualizing your operating systems, especially Windows systems, is that you can separate your OS install from the hardware. Our typical approach these days is to install Linux on the bare metal and then install Windows into a virtual guest instance on our older hardware. By doing so, we get near bare-metal performance out of our Windows systems on hardware beyond its reliable lifespan, and when the hardware fails, we can quickly bring up the entire OS instance on another server and continue along our merry way without re-installing, re-activating, and re-applying all the latest service patches (which, altogether, now takes about 4 to 8 hours of mouse clicks and reboots).

This approach does take you down an entirely new way of thinking about backups and restores. Instead of deploying expensive backup solutions such as Veritas Backup Exec and installing and maintaining agents and directory exception lists, we now use very simple tools such as rsync and gzip to back up full disk images with the server offline, and then back up only essential files (such as the SQL Server database backups) on the live, running system. When we apply service patches or change software settings, we again take the server down and snapshot the disk image for backup. (Live snapshots can probably do the same, but we haven’t truly explored them in our environment since they are a relatively new feature, arriving after we established our standard operating procedures.) We’re not a 24x7 critical shop and are able to establish service windows for maintenance, so this works very, very well for us in practice.
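
The offline image backup described above boils down to three steps per guest: cleanly shut it down, compress its disk image, and ship the archive offsite before booting it again. Here is a hedged sketch of that loop; the guest names, image paths, and rsync target are all assumptions about our layout rather than anything Xen prescribes:

```python
import subprocess

GUESTS = {  # hypothetical guest -> disk-image mapping
    "win2003-app": "/var/lib/xen/images/win2003-app.img",
}
BACKUP_TARGET = "backup@nas.example.com:/backups/xen/"  # assumed rsync target

for guest, image in GUESTS.items():
    # 1. Cleanly shut the guest down and wait for it to stop ('-w' waits).
    subprocess.check_call(["xm", "shutdown", "-w", guest])
    # 2. Compress the raw disk image to a sibling .gz file.
    with open(image + ".gz", "wb") as out:
        subprocess.check_call(["gzip", "-c", image], stdout=out)
    # 3. Ship the compressed image offsite, then bring the guest back up.
    subprocess.check_call(["rsync", "-av", image + ".gz", BACKUP_TARGET])
    subprocess.check_call(["xm", "create", "/etc/xen/%s.cfg" % guest])
```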

Business Continuity

The phenomenal thing we realized about server virtualization and the simple backup and restore procedures discussed above is that we can carry our entire data center of over 60 servers around on one 500GB USB drive (or keep it with an offsite storage provider such as Amazon’s S3). It is admittedly scary from a security standpoint (you will definitely need good security policies and checkpoints, such as encrypting backups at rest), but very liberating from a business-continuity standpoint. One no longer has to invest in a second, continuously running data center, keep nearly identical hardware in sync in both locations, and pay to stay “online” with a broadband connection for continuously transferring files to the second site. If you have a solid working relationship with your hardware vendor, or even a fully-hosted ISP that offers very quick server provisioning in its distant data center, you can literally walk in with your install CDs for the virtualization host servers, install the hosts on the server equipment, transfer the server images off the USB drives, fire up the guest OS’s, restore the database backups and other user files (such as the office’s common drive), change your public DNS entries, and your entire data center is back online in an entirely new location.
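
The “fire up the guest OS’s” step scripts just as easily as the backups. A sketch, under the assumption that the guests’ Xen config files were backed up alongside the images and have already been copied into /etc/xen (the mount point and paths are hypothetical):

```python
import glob
import os
import subprocess

USB_MOUNT = "/mnt/usb/backups"  # hypothetical mount point for the backup drive

# Decompress every backed-up disk image into place and boot its guest.
for archive in glob.glob(os.path.join(USB_MOUNT, "*.img.gz")):
    guest = os.path.basename(archive)[:-len(".img.gz")]
    image = "/var/lib/xen/images/%s.img" % guest
    with open(image, "wb") as out:
        subprocess.check_call(["gunzip", "-c", archive], stdout=out)
    subprocess.check_call(["xm", "create", "/etc/xen/%s.cfg" % guest])
```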

Try New Things Cost Effectively

The final big benefit gained from virtualization technologies is being able to try out new things quickly without actually keeping a bunch of spare machines sitting around. New servers can be provisioned in under an hour. New services can be tried, demoed, and evaluated, and the provisioned servers then disposed of (or further tuned, licensed, and promoted to “production”). Additionally, you can clone your production servers, painlessly perform “dry-run” upgrades and service patches on the clones, and run full regression tests on the services, all before ever actually performing the work on your live, production systems. In some cases, you can perform the upgrades on the cloned system in a sandbox environment, then move the new image over to production, take the live instance down, bring the new instance up, and be back online in a matter of minutes rather than taking your services down for extended lengths of time.
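
Cloning a guest for a dry run is little more than copying its (stopped) disk image and registering a config that points at the copy; a minimal sketch, with all paths and names hypothetical:

```python
import shutil
import subprocess

def clone_guest(src_image, clone_image, clone_cfg, clone_name):
    """Copy a stopped guest's disk image and boot the copy as a sandbox.

    The source guest should be shut down first so the image is consistent;
    the config written here is an illustrative PV example.
    """
    shutil.copyfile(src_image, clone_image)      # duplicate the disk image
    with open(clone_cfg, "w") as f:              # point a new config at it
        f.write('name    = "%s"\n' % clone_name)
        f.write('kernel  = "/boot/vmlinuz-2.6.18-xen"\n')
        f.write('memory  = 512\n')
        f.write('disk    = ["file:%s,xvda,w"]\n' % clone_image)
        f.write('vif     = [""]\n')  # default NIC; keep clones off production VLANs
    subprocess.check_call(["xm", "create", clone_cfg])

clone_guest("/var/lib/xen/images/prod-db.img",
            "/var/lib/xen/images/prod-db-sandbox.img",
            "/etc/xen/prod-db-sandbox.cfg",
            "prod-db-sandbox")
```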

Hardware Considerations

With the power in today’s CPUs and server hardware, it’s often difficult to determine where to invest to get the best bang for the buck. The following sub-components are listed in order of best ROI to least:

  1. Sufficient RAM - to run many OS guests
  2. Fast Front-Side Bus (FSB) - to facilitate pushing the memory <--> CPU traffic around
  3. Fast Hard Drives and Controllers - to boost disk I/O to optimal levels
  4. Multi-core CPU’s - to boost the number of guests on a single physical server
  5. CPU speed - to get tasks done in a reasonable response time
  6. Network bandwidth - to keep all guests responsive even when one is pushing a heavy network load

In addition to the above basic hardware considerations, high-availability options require the following:
    • Dedicated iSCSI VLAN w/dedicated NICs
    • Line-speed, non-blocking switches
    • Host Bus Adaptors with TCP/IP offload Engines (TOE)
    • Wide-striped LUN’s
    • Load-balancing Storage Processors on your SAN
    • Clustered Filesystems such as Oracle Cluster File System (OCFS2) or Global File System (GFS)

Getting Started

Many have asked me how best to get started exploring and deploying virtualized solutions. My recommendation is simply to start small and take small steps all along the way. You do not need to invest heavily and take a great leap of faith to begin reaping the benefits of virtualization. Building a performant virtualization environment requires experience and knowledge of what you’re working with, plus an extension of your sleuthing skills for finding performance bottlenecks and resolving those issues. Instead of solving one physical machine’s problem, you’re now faced with resolving any one guest OS’s performance problems in light of every other guest OS that may be running on that box. So you will have to become adept at looking at the big picture for the host server while simultaneously looking at the individual guests to see where the issues are coming from.

Also, I would not recommend starting out by virtualizing core infrastructure components (such as email, DNS, firewalls, etc.) or critical services, especially I/O-intensive services such as database servers, until you have become adept at running a very stable and performant virtualization infrastructure. Instead, try to cut your virtual teeth on currently unstable services or servers that are truly on their last legs hardware-wise. After all, if you migrate an old 500MHz Pentium III server into even a very poorly performing virtual host environment running 2GHz multi-core processors, you will be hard-pressed to take a step backwards performance-wise.
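
For that host-plus-guests view, Xen ships the interactive xentop tool; when you want something scriptable, a small sketch like the one below (again via the libvirt bindings, with an arbitrary sampling interval) can point at which guest is burning the CPU:

```python
import time
import libvirt  # assumes libvirt's Xen driver, as in the earlier sketch

def busiest_guest(interval=5.0):
    """Sample each domain's cumulative CPU time twice and report which
    guest consumed the most CPU over the interval."""
    conn = libvirt.open("xen:///")

    def snapshot():
        return {dom_id: conn.lookupByID(dom_id).info()[4]  # cpuTime in ns
                for dom_id in conn.listDomainsID()}

    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    deltas = {d: after[d] - before[d] for d in after if d in before}
    hot = max(deltas, key=deltas.get)
    pct = deltas[hot] / (interval * 1e7)  # ns used vs. ns available, as a percent
    print("%s used %.0f%% of one CPU" % (conn.lookupByID(hot).name(), pct))

busiest_guest()
```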

Conclusion

Virtualization technologies are changing the landscape of server and data management in the data center. More services can be run on significantly less hardware, and those services become much more portable and deployable. Virtualization addresses many issues, from business continuity and high-availability management to quick provisioning and simplified server configuration, while significantly boosting utilization of the hardware itself.

Background

In preparing this post, I ran across several very good resources that give an in-depth background look at Xen and virtualization technologies in general. They are provided here:
