
Where the hell can I get eyes like that?

9 December 2009 21:50

This week I've been mostly migrating guests from Xen to KVM. This has been a pretty painless process, and I'm happy with the progress.

The migration process is basically:

  • Stop the Xen guest (domU).
  • Mount the filesystem (LVM-based) upon the Xen host (dom0).
  • Copy those mounted contents over to a new LVM location upon the KVM host using rsync.
  • Patch the filesystem once the rsync has completed:
    • Create /dev nodes for the new root & swap devices.
    • Update /etc/fstab to use those devices.
  • Fiddle with routing to ensure traffic for the guest arrives at the KVM host, rather than the Xen host.
  • Hardwire static routes on the dom0 so that cross-guest traffic works correctly.
  • Boot up the new guest, and hope for the best.
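
The whole thing looks roughly like this. It is only a sketch - the volume names, sizes, hostnames and addresses below are invented for illustration, not the exact commands I ran:

    # On the Xen host: stop the guest and mount its root volume read-only.
    xm shutdown guest1
    mount -o ro /dev/vg0/guest1-disk /mnt/guest1

    # On the KVM host: create a matching volume, make a filesystem, and
    # pull the contents across with rsync.
    lvcreate -L 10G -n guest1-disk vg0
    mkfs.ext3 /dev/vg0/guest1-disk
    mkdir -p /mnt/guest1 && mount /dev/vg0/guest1-disk /mnt/guest1
    rsync -aH --numeric-ids root@xen-host:/mnt/guest1/ /mnt/guest1/

    # Patch the copied filesystem: device nodes for root & swap, and fstab.
    mknod /mnt/guest1/dev/hda b 3 0
    mknod /mnt/guest1/dev/hdb b 3 64
    vi /mnt/guest1/etc/fstab          # point / and swap at those devices

    # Finally, arrange for the guest's traffic to arrive at the KVM host
    # (192.0.2.50 and tap0 are placeholders for the real address/interface),
    # plus whatever upstream routing is needed so traffic reaches this box.
    ip route add 192.0.2.50/32 dev tap0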

The main delay in the migration comes from the rsync step, which can take a while when there are a lot of small files involved. In the future I guess I should ask users to do this themselves in advance, or investigate the patches to rsync that let block devices be transferred - rather than filesystem contents.

Thankfully all of the guests I've moved thus far have worked successfully post-migration, and performance is good. (The KVM host is going to be saturated with I/O when the rsyncing of a new guest is carried out - so I expect performance to dip while that happens, but once everybody is moved it should otherwise perform well.)

So Xen vs. KVM? It's swings and roundabouts, really. In terms of what I'm offering to users there isn't too much difference between them. The only significant change this time round is that I'll let users upload their own kernel, and one brave soul has already done that!

ObFilm: Pitch Black


 

Comments on this entry

Andre Luis Lopes at 16:26 on 9 December 2009

Hello Steve,

Am I right to assume the Xen guests (domU) you're migrating were previously created using xen-tools?

If so, could you please tell us if at least some of these guests were using kernel and initrd images hosted within the host (dom0)?

I have some old Debian-based Xen guests which were created using a very old xen-tools. These use a kernel and initrd hosted outside the guest itself, on the host (dom0) machine.

Have you got any of these and, if so, did you hit any uncommon/strange issues while migrating them?

toupeira at 16:32 on 9 December 2009

You could also create a read-only LVM snapshot before shutting down the domU, and rsync that one first.
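
For example (names and the snapshot size are invented; the snapshot just needs enough room to absorb whatever gets written while the copy runs):

    # Snapshot the running guest's volume and copy from the snapshot;
    # only the final rsync then needs the guest to be stopped.
    lvcreate -s -L 2G -n guest1-snap /dev/vg0/guest1-disk
    mount -o ro /dev/vg0/guest1-snap /mnt/snap
    rsync -aH --numeric-ids /mnt/snap/ root@kvm-host:/mnt/guest1/
    umount /mnt/snap
    lvremove -f /dev/vg0/guest1-snap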

Steve Kemp at 16:40 on 9 December 2009

Yes, the guests I've been migrating were each previously created with xen-tools and had external kernels & initrds.

I've gotten rid of the initrds by launching the KVM guests with an external kernel which is compiled statically - so everything they need is built in, and there is thus no need for either modules on the guest or an initrd.
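
For reference, the invocation looks roughly like this - the paths, sizes and the root= argument are illustrative only, not my exact command line:

    # Boot a guest with a kernel supplied from the host; there is no
    # initrd, so the kernel needs its disk and filesystem drivers built in.
    kvm -m 512 \
        -hda /dev/vg0/guest1-disk \
        -hdb /dev/vg0/guest1-swap \
        -kernel /boot/guests/vmlinuz-static \
        -append "root=/dev/hda ro console=ttyS0" \
        -net nic -net tap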


tomás zerolo at 17:08 on 9 December 2009

Hi, Steve

One trick I like to use with rsync is to do a first rough sync while the system is running (limiting its bandwidth if necessary; see the --bwlimit option). This gives an image which isn't very accurate, and may even be inconsistent, but it's a good approximation anyway. This pass may take days, but since it doesn't interrupt normal operations -- who cares?

The "last rsync", on a stopped system, is then usually quick.
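
In sketch form (the paths and the limit are only examples):

    # First pass, run from inside the running guest, throttled so it
    # doesn't hurt normal operation (--bwlimit is in KB/s, so 5000 is
    # roughly 5 MB/s); -x keeps rsync on the root filesystem and so
    # skips /proc, /sys and friends.
    rsync -aHx --numeric-ids --bwlimit=5000 / root@kvm-host:/mnt/guest1/

    # Final pass after the guest/its services are stopped: copy only
    # what changed and drop anything deleted since the first pass.
    rsync -aHx --numeric-ids --delete / root@kvm-host:/mnt/guest1/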

Steve Kemp at 17:16 on 9 December 2009

Using snapshots is an excellent plan - I couldn't do that initially because I didn't have the spare space on the volume - but now that I've moved a few guests off the host I could remove those LVM volumes and use that space to create a snapshot.

Otherwise yes, using rsync from the inside while the guests are running is a good plan. I'll definitely do that next time - if I cannot use snapshots.

Stephen P. Schaefer at 17:22 on 9 December 2009

Have any of these guests been Solaris? I was recently unable to get Solaris to install under KVM, but it succeeded under Xen.

Steve Kemp at 17:41 on 9 December 2009

The guests in question have all been Debian/Ubuntu.

I had poor success getting (64-bit) OpenBSD running under KVM, have had Windows 2008 running successfully, and have tried nothing else.

Christian at 18:28 on 9 December 2009

By chance, I've been doing the exact same thing this week - migrating from Xen-based virtualization to a KVM-based one.

Here are the basic steps I followed:

1) On my workstation, I created a simple but "perfect" KVM-based image. This is your basic lenny installation in expert mode with a 5GB virtio-based HDD. The only additional modifications were removing unwanted packages (tasksel etc), adding wanted packages (cfengine etc) and copying a skeleton cfengine configuration to the host.

2) Shut down the services of the individual domUs, and perform a backup from the dom0. This is based on rsnapshot and is usually done automatically every 4 hours. It uses LVM snapshots of the individual domUs' HDDs; the backups are stored on my mini-SAN.

3) Shut down the dom0. Install lenny, libvirt-bin, kvm, etc. Configure the entire machine with cfengine.

4) Copy the "perfect" template to an LVM volume on the new vm-host.

5) virt-clone all new guests from the template using LVM volumes (a rough sketch of this step follows after the list).

6) One-by-one,

6a) Start a new guest (they all run using pre-configured libvirt XML configs, sync'd to the vm-host by cfengine previously). Within the guests, set the hostname and edit fstab if needed (the raw template only provides 5GB, so some hosts get an additional LVM-based HDD for additional storage. My database guest, for example, has an extra 20GB mounted on /var/lib/postgresql).

6b) Perform two successive cfengine runs on each guest. Stop all running services. Restore backup.

6c) Start the services again. Go back to 6a, and move on to the next guest.
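
For illustration, step 5 looks roughly like this per guest (the names are placeholders; virt-clone copies the template's definition, generating a new MAC address and writing a fresh libvirt config for the clone):

    # Give the clone its own LVM volume, then clone the template onto it.
    lvcreate -L 5G -n web01-disk vg0
    virt-clone --original template-lenny \
               --name web01 \
               --file /dev/vg0/web01-disk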

Needless to say, there was some downtime involved - about 2 hours in total (20mins for the host, 2-10mins for the guests). I do feel that for the task performed, this was kind of short. I attribute this first to endless test runs (using KVM guests on my workstation), which left me with an impeccable cfengine configuration; second to a 200-item checklist I had prepared during those test runs.

On a side note: having great backups was obviously a plus. Just in case, I had 6hr-old copies of the Xen domUs on my mini-SAN, which is my new favorite box :-) It's an Atom-based host with 4× 1.5TB HDDs in a RAID5 config. I built it for a total of 450 EUR, and it maxes out at 50W (idle, it's about 30W; with spun-down disks overnight, it's about 20W).

Bill Boughton at 21:35 on 9 December 2009

I migrated from Xen a few months ago.

I used blocksync.py to migrate block devices between hosts.

First I made an LVM snapshot of the source device and blocksynced from that, having first installed a normal kernel and grub on the guest.

Then I installed grub on the target LV, tested it, shut down the source guest, ran the final sync, and then booted the guest on the destination host.

On the destination I created the target LVs one extent larger than the source, then created a partition table with fdisk such that the destination partition was exactly the same size as the source. That allowed me to boot with grub and not have to worry about maintaining guest kernels, or having to mount potentially untrusted filesystems on the host. kpartx was used to create a device mapping for the new partition, which also makes installing grub from the host easier.

If you have the default extent size (4MiB), just begin the first partition at sector 8192, assuming a 512-byte sector size.
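
Roughly, with placeholder names (and with SRC_EXTENTS standing in for the source LV's size in extents):

    # Destination LV: one extent (4MiB = 8192 x 512-byte sectors) larger
    # than the source, so a partition starting at sector 8192 is exactly
    # the same size as the source device.
    lvcreate -l $((SRC_EXTENTS + 1)) -n guest1 vg0

    # In fdisk (sector mode), create a single partition starting at
    # sector 8192 and spanning the rest of the LV.
    fdisk -u /dev/vg0/guest1

    # Map the partition to its own device node (typically
    # /dev/mapper/vg0-guest1p1), so blocksync.py and grub can be pointed
    # at it from the host.
    kpartx -av /dev/vg0/guest1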

The only problem I found was that it upsets some old tools which expect partitions to begin on a cylinder boundary; however, grub worked fine.

On another host the migration was done in place; to do this I modified the LVM configuration with vgcfg{backup,restore} and $EDITOR to add an extent to the beginning of an existing LV.

I could have created a new LV one extent larger, and copied all the data locally, but it would have taken much longer and would have been less interesting.

sam at 20:29 on 10 December 2009

Why is everyone moving away from Xen? I just spent weeks reading and learning how to use Xen, and I now have several virtual machines running, all with complex network setups. I really don't want to have to migrate again... Is Xen in trouble or something? Advantages/disadvantages?

Steve Kemp at 20:37 on 10 December 2009

There is probably no single overriding reason why people are jumping ship, just little ones, and a hive-mind effect to a certain extent.

Xen was very popular initially because it was fast and open - and that gave it both hype and exposure. But over time this advantage has faded as other people have got similar systems.

(e.g. kvm in the open world, vmware in the closed, and openvz in the not-real-virtualisation back-alley)

I think Xen would have remained popular had it actually made it into the mainline Linux kernel. But for various reasons that dragged on, and there was a horribly long time when the most recent version of Xen you could usefully use was based upon a 2.6.18 kernel - which simply lacked the hardware support to run on modern machines.

Now it seems that Xen is making progress in getting into the mainline, but once bitten... people are shy. (My outsider view is that even now you're going to struggle with Xen on a modern machine, and that the whole of Xen support is not available in the mainline.)

These days people expect virtualisation, and the specifics of what drives it aren't so important - you can usually move between different systems with ease.

i5513 at 22:32 on 10 December 2009

Hi Steve,

Did you do any performance tests between Xen paravirtualized machines and KVM machines?

I tell my friends that Xen paravirtualized environments consume fewer resources than fully-virtualized environments (like KVM and VMware). I don't really know if this is still true [1][2][3].

Thanks for your effort, and thanks for making the community bigger.

[1] http://www.linux-kvm.org/page/FAQ (difference between Xen and KVM)
[2] http://avikivity.blogspot.com/2008/04/paravirtualization-is-dead.html
[3] http://lkml.org/lkml/2007/1/5/205

Robert McQueen at 12:57 on 13 December 2009

Just want to add another +1 to the blocksync.py approach - I migrated a load of Bluelinux VMs between Xen host boxes (upgrading Xen versions as I went, etc.) by syncing the block devices from outside using blocksync.py before stopping the guest, then only doing one very quick resync whilst the box was down.

http://www.bouncybouncy.net/ramblings/posts/xen_live_migration_without_shared_storage/

Note that I wasn't doing live migration as the post describes, since I was changing kernels, but interestingly the technique blocksync.py uses is the same as what Xen itself uses for live migration of RAM, i.e. copying everything first, then stopping and copying everything that changed.

I really don't like mounting filesystems and rsyncing - it's much slower and more error-prone, as I always cock up the rsync options and mangle permissions or ACLs or something, and mounting other people's filesystems seems invasive to me. :)