Today Ubuntu Server 14.04 was released and I downloaded the ISO to work with the latest from Canonical. I've used Citrix XenServer for several years on servers that I manage, but I'm teaching myself how to work with OpenStack and other "cloud" and virtualization technologies. I have a spare server, so I'm starting with a Xen (Xen Project) install.

Installing Xen (Xen Project) Hypervisor

I installed Xen: apt-get install xen-hypervisor-4.4
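A minimal sketch of that step as run as root on a stock 14.04 install (adding bridge-utils is my own assumption here; it's useful for the bridge configuration later, though it may already be pulled in as a dependency):

apt-get update
apt-get install xen-hypervisor-4.4 bridge-utils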

Then I double-checked /boot/grub/grub.cfg to verify the label given to the boot option for Xen. It was still "Ubuntu GNU/Linux, with Xen hypervisor", so I changed the grub default to use Xen in /etc/default/grub:

GRUB_DEFAULT="Ubuntu GNU/Linux, with Xen hypervisor"
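(For the double-check of grub.cfg itself, a quick grep like the one below should list the entry and submenu titles GRUB generated; the cut just strips the surrounding syntax, and nothing here is Xen-specific:)

# Show the top-level boot entry titles GRUB generated
grep -E "^(menuentry|submenu) " /boot/grub/grub.cfg | cut -d"'" -f2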

Saved /etc/default/grub and ran: update-grub

Got this warning:

Including Xen overrides from /etc/default/grub.d/xen.cfg
WARNING: GRUB_DEFAULT changed to boot into Xen by default!
         Edit /etc/default/grub.d/xen.cfg to avoid this warning.

Nice! So I was ready to reboot into the Xen kernel: shutdown -r now

Once the server came back up, I ran xl list as root to check if Xen was ready to go.

Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0 16066     8     r-----      17.2

Dom0 was set up, so that was good!
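Another quick sanity check (again as root) is xl info, which among other things reports the hypervisor version and how much memory is still free for guests. A sketch:

# Confirm the hypervisor version and the memory available to guests
xl info | grep -E "xen_version|total_memory|free_memory"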

Setting up the Network

I needed to reconfigure networking so I could bridge Virtual Machines (VMs) with the physical network of the host. I edited /etc/network/interfaces to show:

auto lo
iface lo inet loopback

auto xenbr0
iface xenbr0 inet static
        bridge-ports eth0
        address 10.1.1.66
        netmask 255.255.255.192
        network 10.1.1.64
        broadcast 10.1.1.127
        gateway 10.1.1.65
        dns-nameservers 8.8.4.4 8.8.8.8

# The primary network interface
#auto eth0
iface eth0 inet manual
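Applying this change over SSH can drop the connection, so a reboot is the safest way to bring the new bridge up. Afterwards, a couple of standard commands should confirm that xenbr0 exists, holds the IP, and has eth0 enslaved (brctl comes from the bridge-utils package):

# Verify the bridge and its member interface
brctl show xenbr0
ip addr show xenbr0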

Setting up VM Storage

Storage for the VMs can be set up on a file or on a block device that holds the entire guest file system. A block device allows for faster I/O, so I used LVM instead of a disk image (see http://wiki.xen.org/wiki/Storage_options).

I already had an LVM volume group set up from the initial server install, so vgdisplay gave me the name of the volume group, "Host-A-vg". (If I didn't already have a volume group, I would have had to use pvcreate and vgcreate to set one up.)
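With a volume group in place, each guest can get its own logical volume. A hypothetical sketch, run as root (the LV name and size are made up, Host-A-vg is the volume group reported by vgdisplay, and the device in the commented-out part is invented):

# If no volume group existed yet, it would be created roughly like this
# (the physical volume /dev/sdb1 is hypothetical):
#   pvcreate /dev/sdb1
#   vgcreate Host-A-vg /dev/sdb1

# Carve out a 20 GB volume for a first guest and confirm it
lvcreate -L 20G -n vm1-disk Host-A-vg
lvs Host-A-vg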