Virtualization is a current IT trend. Many companies are “virtualizing their infrastructure” – what exactly does that mean?
A first-time visitor to a data center will immediately notice the rows upon rows of computers: big machines, little machines, everything in between, but hundreds or thousands of machines in all. Most of the machines sit in racks, some stand by themselves, but either way you experience a bewildering array of machines. The machines are physical – meaning that you could kick them. Not a good idea, but you could physically harm them.
When the virtualization process starts, many of these machines are consolidated onto one single machine. The most common way to virtualize a machine is a Physical-to-Virtual (P2V) conversion, an automated process that copies a physical machine into a virtual machine. After the P2V there is no longer a physical machine to kick, but logically the machine still exists and runs exactly as it did before, except that it now runs on a virtual host instead of on its own separate physical hardware.
As part of the virtualization process some machines that aren’t computers are virtualized too. Networking equipment like routers, switches, and cables can have virtual counterparts. SAN storage (LUNs) can also be virtualized. Most of the physical equipment that has filled data centers can be virtualized, and in most cases virtual equipment is cheaper and easier to manage than its physical counterpart.
Virtualized hardware takes advantage of economies of scale. A single huge machine with lots of memory and processor power costs less than the same amount of computing power spread across a dozen or a hundred separate machines. One huge machine configured as a virtual host can then handle the same work as those dozens or hundreds of smaller machines, at a much lower purchase price.
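The arithmetic behind this economy of scale is simple. A minimal sketch, using entirely hypothetical prices and machine counts chosen only to illustrate the comparison:

```python
# Illustrative cost comparison: many small servers vs. one virtual host.
# All prices and counts below are hypothetical, chosen only to show the math.

def total_cost(unit_price: float, count: int) -> float:
    """Total purchase price for `count` identical machines."""
    return unit_price * count

# Say 40 small servers at $4,000 each, vs. one large host at $60,000
# that can run those same 40 workloads as virtual machines.
small_servers = total_cost(4_000, 40)   # cost of the physical fleet
big_host = total_cost(60_000, 1)        # cost of the single virtual host

savings = small_servers - big_host
print(f"Physical fleet: ${small_servers:,.0f}, "
      f"virtual host: ${big_host:,.0f}, savings: ${savings:,.0f}")
```

Real numbers vary widely by vendor and workload, but the shape of the comparison is the same: one large purchase replacing many small ones.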
A single virtual host is nearly always easier to maintain, uses less power and cooling, and usually requires fewer support personnel than the equivalent smaller machines. These factors mean that the ongoing expenses of the virtual host will be lower than those of the equivalent physical machines too.
As long as there isn't a need for specialized hardware like interfaces to industrial equipment or advanced cryptography hardware, the virtual host can usually perform better than the individual machines it replaces. The virtual host can be outfitted with higher-performance processors, memory, peripherals, etc., because the cost of those components can be spread across many virtual machines and because they are usually standard equipment on machines large enough to be virtual hosts.
Virtual hosts also make HA (high availability) and DR (disaster recovery) simpler. These are two different techniques that, in the physical world, are typically implemented only to keep a critical machine or service available as much as possible.
High availability means keeping a service or machine available for use as much as possible during normal operations, using techniques like load balancing and clustering. These techniques won't eliminate downtime when a failure occurs, but they can reduce that downtime from minutes to seconds or less. This functionality is a standard feature built into most virtual environments. While it takes some planning and preparation, it can be applied to or removed from a virtual machine much more easily than similar features on physical machines.
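The core idea behind these techniques can be sketched in a few lines: when one instance of a service fails, requests are redirected to a surviving instance, so a failure costs only the moment it takes to skip past the dead instance. A minimal sketch, with hypothetical VM names and a simulated health check standing in for what a real platform does automatically:

```python
# Minimal failover sketch (hypothetical names): send each request to the
# first healthy instance of a service, skipping any instance that is down.

from typing import Callable

def route_request(instances: list[str],
                  is_healthy: Callable[[str], bool]) -> str:
    """Return the first healthy instance, or raise if all are down."""
    for instance in instances:
        if is_healthy(instance):
            return instance
    raise RuntimeError("all instances are down")

# Simulate a cluster where the primary VM has failed.
health = {"vm-a": False, "vm-b": True, "vm-c": True}
chosen = route_request(["vm-a", "vm-b", "vm-c"], lambda name: health[name])
print(chosen)  # vm-b
```

A virtual environment implements this kind of logic at the platform level, which is why turning HA on or off for a virtual machine is usually a configuration change rather than a hardware project.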
Disaster recovery means getting a service or machine back online after a disaster occurs. Bringing a replacement virtual host online is expensive, but once that host is available all of its virtual machines can be restored onto it. This means that with good documentation, planning, and backups a DR can be handled faster and more effectively in a virtual environment than in a physical one.
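The "good documentation and planning" part often amounts to a recorded restore order: which virtual machines come back first once the replacement host is up. A minimal sketch, with hypothetical VM names and priorities standing in for a real DR runbook:

```python
# DR runbook sketch (hypothetical data): restore virtual machines onto the
# replacement host in their documented priority order.

backups = [
    {"vm": "web-01",   "priority": 2},
    {"vm": "db-01",    "priority": 1},  # the database comes back first
    {"vm": "batch-01", "priority": 3},  # batch jobs can wait
]

restore_order = [b["vm"] for b in sorted(backups, key=lambda b: b["priority"])]
print(restore_order)  # ['db-01', 'web-01', 'batch-01']
```

Because every virtual machine on the host is just data to restore, the whole recovery reduces to replaying a list like this, instead of rebuilding each physical machine by hand.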
Virtualization is an IT trend for a reason. By helping an organization predict and manage its space, human, and financial resources, virtualization is an option you should consider if you have not already.