Virtualization and Desire2Learn

Stephen Downes

Knowledge, Learning, Community

Jul 21, 2008

Originally posted on Half an Hour, July 21, 2008.

Summary of a talk by D2L's Brett Emmerton at the D2L Fusion 2008 conference. The talk was quite technical, and so is this summary, but the concept of virtualization is one that should be understood by those seeking to know where computer environments are going in the future.

Virtualization

Each virtual machine is a complete system encapsulated in a set of software files. The purpose is to take advantage of unused cycles in servers, so you can run multiple 'machines' on top of a single physical server. The virtual machines run on top of what is called an 'ESX Server', which in turn runs on one or more physical servers. It allows you to share CPU cycles, share memory, and share local disks (e.g. SAN (Storage Area Network) based systems).

Some terms:
VM Host - the physical server that VMware is installed on
VMs - the virtual machines that run on it

We virtualize all layers - not just individual machines, but storage and network layers as well - using DRS (Distributed Resource Scheduler). Virtual machines get moved between physical hosts depending on resource usage and availability.
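(Not from the talk - just a rough Python sketch of the kind of rebalancing DRS automates, to make the idea concrete. The host and VM names, the loads, and the 20-point imbalance threshold are all made up.)

def rebalance(hosts):
    """hosts: dict of host name -> dict of VM name -> CPU load (percent of host)."""
    loads = {h: sum(vms.values()) for h, vms in hosts.items()}
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if loads[busiest] - loads[idlest] < 20:            # within tolerance, do nothing
        return None
    vm = max(hosts[busiest], key=hosts[busiest].get)   # heaviest VM on the busy host
    hosts[idlest][vm] = hosts[busiest].pop(vm)         # "VMotion" it to the idle host
    return (vm, busiest, idlest)

hosts = {"esx01": {"lms-app1": 55, "lms-db": 30},
         "esx02": {"lms-app2": 15}}
print(rebalance(hosts))    # ('lms-app1', 'esx01', 'esx02')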

Uses

1. VCB (VMware Consolidated Backup) - we capture a snapshot of the virtual machine and store it - it is essentially a complete machine, ready to roll. This snapshot basically clones the existing machine. But in order to take that snapshot, you have to have another core and enough RAM ready to take the snapshot - so you have to watch your resource allocation. Also, snapshots can be 'left open', in which case they continue to grow - people sometimes forget to close off snapshots, which will use a lot of storage.
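(A rough sketch of the bookkeeping implied here, not anything VMware-specific: record when each snapshot is taken and flag any left open past a cutoff. The VM names and the 48-hour cutoff are my own assumptions.)

from datetime import datetime, timedelta

open_snapshots = {}   # VM name -> time the snapshot was taken

def take_snapshot(vm, now=None):
    open_snapshots[vm] = now or datetime.now()

def close_snapshot(vm):
    open_snapshots.pop(vm, None)   # committed/deleted, stops consuming extra storage

def stale_snapshots(max_age=timedelta(hours=48), now=None):
    now = now or datetime.now()
    return [vm for vm, taken in open_snapshots.items() if now - taken > max_age]

take_snapshot("lms-app1", now=datetime(2008, 7, 18))
take_snapshot("lms-app2", now=datetime(2008, 7, 21))
print(stale_snapshots(now=datetime(2008, 7, 21)))   # ['lms-app1']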

2. Virtualization as a resource multiplier: even with peak load spikes, we are using less than 10 percent of the capacity of our application servers. So, instead, use (say) a 4-way server and run 32 virtual machines on it. Or use the spare capacity to share memory, or to 'decay' unused processes that are occupying memory.
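(The arithmetic behind that figure, roughly - treating a 4-way server as about four single servers' worth of capacity; the 80 percent planning ceiling is my own assumption, not the presenter's.)

per_vm_peak = 0.10              # each app server peaks below 10% of one server
host_capacity = 4.0             # 4-way server ~ four servers' worth of capacity
planning_ceiling = 0.80         # don't plan past 80% of the host

max_vms = int(host_capacity * planning_ceiling / per_vm_peak)
print(max_vms)                  # 32 -> matches the "32 virtual machines" figure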

3. Interoperability. We can apply the VM hypervisor across all the layers - so it doesn't care whether it's an HP box, a Dell box, or a Sun box. You can order the boxes with VMware pre-installed - just tell them what your license is, and you can run your machine. For example, we were able to deploy eight new application servers on a network in just a few minutes.

4. Resource Pools - aggregate collections of disparate hardware resources into unified logical resource pools, creating addressable aggregate resourcing. This means, e.g., that a failed server doesn't mean a failed application.
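(A minimal sketch of the resource-pool idea - several unlike hosts presented as one addressable pool, where losing a host shrinks the pool rather than taking the application down. The host names and specs are invented.)

class ResourcePool:
    def __init__(self):
        self.hosts = {}                       # host name -> (cores, GB of RAM)

    def add_host(self, name, cores, ram_gb):
        self.hosts[name] = (cores, ram_gb)

    def remove_host(self, name):              # e.g. a hardware failure
        self.hosts.pop(name, None)

    def capacity(self):
        cores = sum(c for c, _ in self.hosts.values())
        ram = sum(r for _, r in self.hosts.values())
        return cores, ram

pool = ResourcePool()
pool.add_host("hp-blade-1", cores=8, ram_gb=32)
pool.add_host("dell-2950", cores=8, ram_gb=16)
pool.add_host("sun-x4150", cores=8, ram_gb=32)
print(pool.capacity())         # (24, 80) -- one logical pool of capacity
pool.remove_host("dell-2950")  # a failed server just reduces the pool
print(pool.capacity())         # (16, 64)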

5. Add hardware dynamically. Provisioning is 'fire and forget'. You can easily add more capacity. You can also allow VMware to manage the load, dynamically balancing it across the servers.

6. Policy enforcement (mentioned in passing, not as a separate slide).

7. HA (High Availability) - an automatic restart engine. One server may be down, and unable to access its disk image. Other servers can see the disk image, though, and can restart the virtual machine based on policies set within the organization.
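(Again, a hypothetical sketch rather than the actual HA engine: when a host fails, its virtual machines are restarted on surviving hosts, in priority order. The names and priorities are invented.)

def failover(failed_host, placement, priority, survivors):
    """placement: VM -> host; priority: VM -> lower number restarts first."""
    orphans = sorted((vm for vm, h in placement.items() if h == failed_host),
                     key=lambda vm: priority.get(vm, 99))
    for i, vm in enumerate(orphans):
        target = survivors[i % len(survivors)]      # simple round-robin placement
        placement[vm] = target
        print(f"restarting {vm} on {target}")
    return placement

placement = {"lms-db": "esx01", "lms-app1": "esx01", "lms-app2": "esx02"}
priority = {"lms-db": 1, "lms-app1": 2}
failover("esx01", placement, priority, survivors=["esx02", "esx03"])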

Preferred VM Configuration

1. As many CPU cores as possible - VMware is licensed by socket, which enables larger vSMP configurations and allows you to plan for VMotion compatibility (standardize on similar servers to prepare for this).

2. Maximize memory. Be careful: high-density memory is very expensive - it's often cheaper to buy more servers than to fill fewer servers with high-density memory. And more servers are better for redundancy anyway.

3. Fast storage means a wickedly fast VM. You have to have high-speed storage. What am I excited about? Solid state disks.

4. A 24-hour (or longer) burn-in, because most failures occur in the first 90 days. Memory is the part that fails most often, so you want to do a memory burn-in.

5. No single point of failure. Local recovery/failover is always preferred. Have someone - even a non-techie person - review your configuration (tell them what it is) and ask questions.
- NICs - two teams of two, so you have separate controllers
- physically separate admin/management traffic from VM data traffic
- redundant switch, network and storage layers
- redundant fans and power supplies (people often overlook the basic pieces)

6. We prefer fault tolerance to load balancing

7. Network load-balancing

8. Storage load balancing

Overconfiguration of VMs

This is where I see things going wrong.

1. Physical-to-virtual (P2V) conversion is efficient, etc., but it tends to carry over the physical machine's sizing - you can run into problems with CPU co-scheduling, and load optimization takes time.
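(A sketch of why this bites: a straight P2V copy keeps the physical box's CPU count, and a VM with more vCPUs than it needs is harder to co-schedule. This just flags VMs whose configured vCPUs are out of line with observed peak use; the thresholds and figures are invented.)

def oversized_vms(vms, min_headroom=1):
    """vms: name -> (configured vCPUs, peak vCPUs actually used)."""
    flagged = []
    for name, (vcpus, peak) in vms.items():
        if vcpus - peak > min_headroom:
            flagged.append((name, vcpus, max(1, peak + min_headroom)))
    return flagged    # (VM, configured, suggested) triples

vms = {"lms-app1": (4, 1),    # straight P2V of a 4-CPU physical box
       "lms-db":   (4, 3),
       "lms-app2": (2, 1)}
for vm, have, suggest in oversized_vms(vms):
    print(f"{vm}: {have} vCPUs configured, {suggest} would do")
# lms-app1: 4 vCPUs configured, 2 would do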


Storage Configuration

- typically we use 10-20 VMs per LUN
- allocate the fastest disks and storage connections to the LUNs hosting your virtual machine disk (VMDK) files
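(Quick arithmetic for that rule of thumb: given a VM count and a target of 10-20 VMs per LUN, how many LUNs to carve out of the SAN. The 64-VM farm is an invented example; the 10-20 range is the one quoted in the talk.)

import math

def luns_needed(vm_count, vms_per_lun_low=10, vms_per_lun_high=20):
    return (math.ceil(vm_count / vms_per_lun_high),   # fewest LUNs (20 VMs each)
            math.ceil(vm_count / vms_per_lun_low))    # most LUNs (10 VMs each)

low, high = luns_needed(64)
print(f"64 VMs -> between {low} and {high} LUNs")     # between 4 and 7 LUNs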

Naming Standards

Apply naming standards to distinguish:
- SAN vs Local
- test vs production
- RAID type
- LUN ID
Have a central list, know why you're using it, know who owns it.
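(One hypothetical way such a standard might look - the pattern itself is invented, but it shows how SAN-vs-local, test-vs-production, RAID type and LUN ID can all be read straight off the name and checked against the central list.)

import re

NAME_PATTERN = re.compile(
    r"^(?P<location>SAN|LOC)-"        # SAN vs local storage
    r"(?P<env>PROD|TEST)-"            # production vs test
    r"R(?P<raid>[0156]|10)-"          # RAID type
    r"LUN(?P<lun_id>\d{3})$")         # LUN ID

def parse_name(name):
    m = NAME_PATTERN.match(name)
    if not m:
        raise ValueError(f"{name!r} does not follow the naming standard")
    return m.groupdict()

print(parse_name("SAN-PROD-R5-LUN012"))
# {'location': 'SAN', 'env': 'PROD', 'raid': '5', 'lun_id': '012'}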

Other

- Make sure your CD-ROM isn't connected on power-on if you don't need it
- Do not leave open snapshots on production machines
- etc.
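(A hypothetical pre-production audit for those last two items: flag VMs with a CD-ROM connected at power-on or with snapshots left open. The VM records are invented stand-ins for whatever inventory tool you actually use.)

def audit(vms):
    findings = []
    for vm in vms:
        if vm.get("cdrom_connected"):
            findings.append(f"{vm['name']}: CD-ROM still connected at power-on")
        if vm.get("open_snapshots", 0) > 0:
            findings.append(f"{vm['name']}: {vm['open_snapshots']} open snapshot(s)")
    return findings

vms = [{"name": "lms-app1", "cdrom_connected": True,  "open_snapshots": 0},
       {"name": "lms-db",   "cdrom_connected": False, "open_snapshots": 2}]
for finding in audit(vms):
    print(finding)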

