2011/09/06

Clusters of Development Virtual Machines

Since I'm working with clusters of virtual machines hosting my development environment on my local workstation, I've come up with a method to simplify login, remote command execution, and the exchange of data between the host and the virtual machine instances. This post shows how I keep structure in all the virtual machine configurations, and presents a couple of shell functions that help me use virtual machine instances efficiently.

The basic idea is to maintain a dedicated directory for each virtual machine and its related files, like the login key or the metadata for the configuration management. Below is a listing of one of the virtual machine directories:

$ ls /srv/vms/lxdev01.devops.org
chef_attributes.json  
chef_config.rb  
disk.img  
keys/  
libvirt_instance.xml  
provision.sh*  
ssh_config

As I have written before in my post "Bridging a Host-Internal Network for Virtual Machines", I like to use meaningful hostnames. To avoid confusion, each of my development virtual machines is stored in a directory named after its FQDN. When I set up a new test virtual machine, I begin by copying one of the golden images created before (as explained in my blog post "Installing KVM Virtual Machines"). Besides the disk image and the libvirt configuration, the listing above shows a couple of other helper files for a virtual machine. We will cover ssh_config and the keys/ directory in this post, and the Chef configuration management files in a future post.
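
Copying a golden image into a new FQDN-named directory can itself be scripted. A minimal sketch of my own (the helper name is mine, and the paths in the usage comment are examples):

```shell
# clone_vm: sketch of cloning a golden image directory into a per-VM
# directory named after the new machine's FQDN.
# Usage: clone_vm /srv/images/debian64-6.0.0-server /srv/vms/lxdev01.devops.org
clone_vm() {
  golden="$1"; vmdir="$2"
  mkdir -p "$vmdir"
  # Copy the disk image and the instance template; the template's
  # placeholders (name, disk path, MAC address) are filled in afterwards.
  cp "$golden/disk.img"             "$vmdir/disk.img"
  cp "$golden/libvirt_instance.xml" "$vmdir/libvirt_instance.xml"
}
```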

The following functions, which I keep in a file called helpers.sh and source into my shell environment when needed, enable remote interaction with the virtual machine stored in the current working directory:

# Login to the virtual machine, or execute a command, e.g.:
#
#    vmssh 'ls /tmp'
#
function vmssh() { ssh -qt -F "$PWD/ssh_config" instance "$@"; }
# Upload a file into the virtual machine, e.g.:
#
#    vmput /path/to/local/file /tmp
#
function vmput() { scp -F "$PWD/ssh_config" "$1" instance:"$2"; }
# Download a file from the virtual machine, e.g.:
#
#    vmget /path/in/vm /tmp
#
function vmget() { scp -F "$PWD/ssh_config" instance:"$1" "$2"; }
# Sync a local directory to the virtual machine, e.g.:
#
#   vmsync /local/path /tmp
#
function vmsync() {
  rsync -va -L --exclude '.git' --exclude '.gitignore' \
        -e "ssh -F $PWD/ssh_config" "$1" instance:"$2";
}

All the functions use the SSH configuration file ssh_config stored in the virtual machine's directory, which defines the login account name and the virtual machine's IP address:

Host instance
 User devops
 HostName 10.1.1.11
 IdentityFile /srv/vms/lxdev01.devops.org/keys/id_rsa
 UserKnownHostsFile /dev/null
 StrictHostKeyChecking no

Another ingredient in this file is the 'IdentityFile', the private part of an SSH key pair stored in the sub-directory keys/ (its public counterpart goes into the virtual machine). You can create a new password-less SSH key and upload it into the virtual machine using the following commands:

$ mkdir keys; ssh-keygen -q -t rsa -b 2048 -N '' -f keys/id_rsa
$ vmssh 'mkdir -p -m 0700 $HOME/.ssh'
$ vmput keys/id_rsa.pub .ssh/authorized_keys
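
For a new virtual machine directory, the ssh_config itself can be generated rather than written by hand. A minimal sketch (the function name and argument order are my own):

```shell
# write_ssh_config: sketch generating the per-VM ssh_config shown above.
# Arguments: VM directory, login user, VM IP address.
write_ssh_config() {
  vmdir="$1"; user="$2"; ip="$3"
  cat > "$vmdir/ssh_config" <<EOF
Host instance
 User $user
 HostName $ip
 IdentityFile $vmdir/keys/id_rsa
 UserKnownHostsFile /dev/null
 StrictHostKeyChecking no
EOF
}
```

For example, `write_ssh_config /srv/vms/lxdev01.devops.org devops 10.1.1.11` would recreate the configuration above.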

The vmssh, vmput and vmget functions should be self-explanatory. If rsync is installed in the virtual machine and on your host, you can sync entire directory structures between the host and the instance using vmsync.
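
All four functions silently assume an ssh_config in the current working directory. A small guard of my own (not part of the original helpers.sh) makes the failure mode explicit:

```shell
# vmcheck: my own addition -- fail early with a clear message when the
# current directory is not a virtual machine directory.
vmcheck() {
  if [ ! -f "$PWD/ssh_config" ]; then
    echo "vmcheck: no ssh_config in $PWD -- cd into a VM directory first" >&2
    return 1
  fi
}
```

It can be chained before any of the helpers, e.g. `vmcheck && vmssh uptime`.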

2011/09/02

Installing KVM Virtual Machines

Many of my coworkers share the same basic virtual machine images with me; we call them golden images. These images are very minimal installations of the operating system plus our default account and the configuration management client. In this post I will show how we build such disk images.

To install a new virtual machine you will need an ISO CD/DVD image containing the Linux distribution of your choice. We save the golden images to a dedicated directory, /srv/images, and never boot them directly once they are installed. The folders holding the individual virtual machine images are named after the distribution name, version, and bitness. Furthermore, we append a string describing the general purpose of the image, like providing a graphical user interface or a specific service. Examples are:

  • debian64-6.0.0-server
  • ubuntu64-10.04-desktop
  • debian64-6.0.2.1-chef-server-0.10.4

The following libvirt configuration (called libvirt_install.xml in this example) is used to start a virtual machine with an ISO image attached, from which it will boot.

<domain type='kvm'>
  <name>debian-6.0.0-server</name>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
    <boot dev='cdrom'/>
  </os>
  <clock sync="localtime"/>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/srv/images/debian64-6.0.0-server/disk.img'/>
      <target dev='hda'/>
      <driver name='qemu' type='qcow2'/>
    </disk>
    <interface type='bridge'>
      <source bridge='nbr0'/>
    </interface>
    <disk type='file' device='cdrom'>
      <source file='/srv/isos/debian-6.0.0-amd64-netinst.iso'/>
      <target dev='hdc'/>
      <readonly/>
    </disk>
    <graphics type='vnc' port='5905'/>
  </devices>
  <features>
    <acpi/>
  </features>
</domain>

You will need to adjust the source file locations of the virtual machine disk image and the ISO image. Before you can install the operating system you need to prepare a virtual machine disk image, which in the case of Linux KVM is created and initialized with the kvm-img command. (The parameter "40G" indicates the maximum size the image can grow to while in use.)

$ kvm-img create -f qcow2 disk.img 40G
$ virsh create libvirt_install.xml

Once the instance has started, connect a VNC client to port 5905, as defined above by the graphics tag. While you follow the installation menu, we propose to always create a minimal system configuration that is the same across all the golden images you create.

We do set the following configuration during installation:

  • Keymap: English
  • Host name is the distribution nick-name (e.g. squeeze or lucid)
  • Domain name 'devops.org'
  • One big disk partition, no swap!
  • Username is 'devops'
  • Only a standard system: no desktop environment (unless really needed), no services, no development environment, no editor, nothing except a bootable Linux.

After the installation is finished, we allow the "devops" user to run commands as root via sudo, and we install the Chef configuration management system.

For Debian-flavored Linux distributions this could look like (run as root inside the virtual machine):

$ echo "deb http://apt.opscode.com/ squeeze main" > /etc/apt/sources.list.d/opscode.list
$ wget -qO - http://apt.opscode.com/packages@opscode.com.gpg.key | apt-key add -
$ apt-get update
$ apt-get install openssh-server sudo rsync chef
$ apt-get clean
$ groupadd admin
$ usermod -a -G admin devops

We add the following line to /etc/sudoers:

%admin ALL=NOPASSWD: ALL

When the installation and final configuration are finished, shut down the instance and don't touch it anymore; instead, clone new virtual machines from it.

You can compress the disk image:

$ kvm-img convert -c -f qcow2 -O qcow2 -o cluster_size=2M disk.img compressed.img
$ mv compressed.img disk.img

As a last step we will add a libvirt configuration used to start a virtual machine instance of this image. The golden image directory will contain the following files at the end:

  • The file containing the golden image disk.img.
  • The configuration libvirt_install.xml used to install the operating system, for later reference.
  • The configuration libvirt_instance.xml used to start a virtual machine. This file needs to be adjusted after the golden image was cloned.

The libvirt_instance.xml template looks like:

<domain type="kvm">
  <name>ADD FQDN HERE</name>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch="x86_64">hvm</type>
  </os>
  <clock sync="localtime"/>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type="file" device="disk">
      <source file="ADD PATH TO DISK IMAGE HERE"/>
      <target dev="hda"/>
      <driver name="qemu" type="qcow2"/>
    </disk>
    <interface type="bridge">
      <source bridge="nbr0"/>
      <mac address="ADD MAC ADDRESS HERE"/>
    </interface>
  </devices>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <features>
    <acpi/>
  </features>
</domain>
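
The placeholders in this template can be filled in with sed when cloning a golden image. A minimal sketch of my own (the function name is mine; FQDN, path and MAC address in the usage example below are examples):

```shell
# fill_template: sketch replacing the placeholders in libvirt_instance.xml.
# Arguments: template file, FQDN, disk image path, MAC address.
# Writes the finished configuration to standard output.
fill_template() {
  template="$1"; fqdn="$2"; disk="$3"; mac="$4"
  sed -e "s|ADD FQDN HERE|$fqdn|" \
      -e "s|ADD PATH TO DISK IMAGE HERE|$disk|" \
      -e "s|ADD MAC ADDRESS HERE|$mac|" "$template"
}
```

For example: `fill_template libvirt_instance.xml lxdev01.devops.org /srv/vms/lxdev01.devops.org/disk.img 02:FF:0A:0A:06:0B > /srv/vms/lxdev01.devops.org/libvirt_instance.xml`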

In a future post I will describe how to add an SSH key for password-less login to enable easy access to such images.

2011/09/01

Bridging a Host-Internal Network for Virtual Machines

As a system administrator I like to build infrastructure as homogeneous as possible. The company I'm working for uses KVM both for service virtualisation and in its private cloud managed by OpenNebula. In both use-cases libvirt is deployed as the interface to KVM. To make it easy to migrate from the local development environment to the production installation, I prefer to use KVM and libvirt on my workstation as well.

By now I feel confident saying that with very little effort I'm able to recreate complex service setups involving clusters of virtual machines. Beginning with this post, I will write about my experience of using libvirt/KVM, Chef and a couple of shell scripts to bootstrap minimal copies of the production systems for developing configuration management code.

All my development environments are enclosed in a host-internal network shared by all virtual machines and connected to the external world via NAT. Fortunately, libvirt makes it very easy to set up such a NATed network bridge. Below is my configuration file (called libvirt_nat_bridge.xml in this example):

<network>
  <name>nat_bridge</name>
  <bridge name="nbr0" />
  <forward mode="nat"/>
  <domain name="devops.org"/>
  <ip address="10.1.1.1" netmask="255.255.255.0">
    <dhcp>
      <range start="10.1.1.20" end="10.1.1.254" />
      <host mac="02:FF:0A:0A:06:02" ip="10.1.1.2" name="lxdns01.devops.org"/>
      <host mac="02:FF:0A:0A:06:03" ip="10.1.1.3" name="lxcm01.devops.org"/>
      <host mac="02:FF:0A:0A:06:04" ip="10.1.1.4" name="lxrm01.devops.org"/>
      <host mac="02:FF:0A:0A:06:05" ip="10.1.1.5" name="lxb001.devops.org"/>
      <host mac="02:FF:0A:0A:06:06" ip="10.1.1.6" name="lxb002.devops.org"/>
      <host mac="02:FF:0A:0A:06:07" ip="10.1.1.7" name="lxb003.devops.org"/>
      <host mac="02:FF:0A:0A:06:08" ip="10.1.1.8" name="lxb004.devops.org"/>
      <host mac="02:FF:0A:0A:06:09" ip="10.1.1.9" name="lxmon01.devops.org"/>
      <host mac="02:FF:0A:0A:06:0A" ip="10.1.1.10" name="lxcfs01.devops.org"/>
      <host mac="02:FF:0A:0A:06:0B" ip="10.1.1.11" name="lxdev01.devops.org"/>
      <host mac="02:FF:0A:0A:06:0C" ip="10.1.1.12" name="lxdev02.devops.org"/>
      <host mac="02:FF:0A:0A:06:0D" ip="10.1.1.13" name="lxdev03.devops.org"/>
    </dhcp>
  </ip> 
</network>

The network description above tells libvirt to create a network bridge called nbr0. (This involves configuring iptables to provide NAT and to route IP traffic. Furthermore, it starts a dnsmasq process serving DHCP and DNS resolution for the virtual machines.)

Set up such a network configuration using virsh, the libvirt command-line interface:

$ virsh net-create libvirt_nat_bridge.xml

I like to have a predefined set of host names for my test systems along with associated IPs, in order to always find the configuration management system (e.g. Chef) named "lxcm01" and a resource management system (like GridEngine or Condor) as "lxrm01".
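
With such a fixed naming scheme, the <host> entries above don't need to be typed by hand. A sketch of my own, matching the 02:FF:0A:0A:06 MAC prefix, the 10.1.1.0/24 network and the devops.org domain used above:

```shell
# gen_hosts: sketch emitting the dnsmasq <host> entries shown above.
# Reads "name last-octet" pairs from standard input; the last octet of
# the IP address doubles as the last byte of the MAC address (in hex).
gen_hosts() {
  while read -r name octet; do
    printf '<host mac="02:FF:0A:0A:06:%02X" ip="10.1.1.%d" name="%s.devops.org"/>\n' \
           "$octet" "$octet" "$name"
  done
}
```

For example, `printf 'lxdns01 2\nlxdev01 11\n' | gen_hosts` reproduces the entries for lxdns01 and lxdev01.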

When writing the libvirt configuration file for a virtual machine, you can assign one of these IP addresses by specifying the associated MAC address, like:

<devices>
...
  <interface type="bridge">
    <source bridge="nbr0"/>
    <mac address="02:FF:0A:0A:06:0B"/>
  </interface>
...
</devices>

Depending on the infrastructure you will be developing, it may be necessary to host a BIND instance in a virtual machine to get the full feature set of DNS. I will cover such a setup in another post, as well as how to do port-forwarding to virtual machine instances.