Converting an EC2 image to a VMWare virtual machine

I recently had occasion to convert an Amazon EC2 AMI into a VMWare virtual machine for local testing. Much of the process can be pieced together online, and I may write it up in detail later since the information is scattered, but there was one particular step that stumped me for quite some time, and I would like to document it for the future.

I was able to unbundle the AMI quite successfully and ‘dd’ it into a partition inside a fresh VMWare VM. It could be mounted and played with, but it would not boot, because it lacked a kernel. You see, Xen (which Amazon relies upon), unlike VMWare, does not emulate the full x86 boot process (at least in its standard configuration); rather, the Xen host itself provides a paravirtualized kernel, which is used to boot the Linux operating system stored within the image. Therefore, you will find that most EC2 AMIs lack a kernel, since it is provided by the host (though they do contain a matching set of modules).

The solution to this seemed simple: mount the disk, chroot into it, then apt-get install a kernel package (the AMI was based on Ubuntu). However, apt-get turns out to be a pretty elaborate system, and it relies on some things that are not normally present inside a chroot jail.

Specifically, it was trying very hard to work with pseudo-terminals, and complaining loudly (and subsequently segfaulting!) because /dev/pts was not valid within the jail. There are about a million pages on the internet that attempt to address this problem with something like:

mount -t devpts devpts /mnt/dev/pts

However, no amount of beating on this command with a stick would make it work for me. I also tried mounting /proc and /sys within the chroot, to no avail, and even tried to bring up udev within the jail, which appeared to work, except that it didn’t. apt-get remained very unhappy with every technique I tried for getting a valid /dev/pts tree (even though many of them appeared to work cosmetically).

Finally I stumbled across a working solution:

mount -o bind /proc /mnt/proc
mount -o bind /sys /mnt/sys
mount -o bind /dev /mnt/dev
mount -o bind /dev/pts /mnt/dev/pts

Aha! Instead of putzing around trying to get a second copy of devpts mounted, just use the bind option of mount to remount part of the existing filesystem in a second place. A clever hack, and it worked.
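For the record, the whole sequence looked roughly like this. This is a sketch, not a transcript: /mnt is assumed to be where the image’s root partition is mounted, and the kernel package name is a plausible one for an Ubuntu guest of that era, not necessarily the exact one I installed.

```shell
# Bind the host's special filesystems into the chroot so that apt-get
# (and the package maintainer scripts it runs) can see /proc, /sys,
# /dev, and a valid /dev/pts.
mount -o bind /proc /mnt/proc
mount -o bind /sys  /mnt/sys
mount -o bind /dev  /mnt/dev
mount -o bind /dev/pts /mnt/dev/pts

# Enter the image and install a kernel.
chroot /mnt /bin/bash
# (now inside the chroot)
apt-get update
apt-get install linux-image-server   # hypothetical package name
exit

# Tear the bind mounts down (innermost first) before unmounting the image.
umount /mnt/dev/pts /mnt/dev /mnt/sys /mnt/proc
```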

So, then I had a working copy of apt-get in the jail, and I could get on with fixing the other million things that go wrong when you convert an EC2 AMI to VMWare.

LVM and device mapper

So I ran into a small issue with LVM last night, and I thought I’d share my solution. I am by no means an expert on LVM, so whenever I encounter a problem, it is a learning experience.

Last night I moved around the drives connected to my file server, so that some of them were now connected to a different SCSI interface. Now, normally I filter the devices that LVM sees (through the filter directive in lvm.conf), so that there are no mishaps about which volumes LVM will let me touch. Because I have a lot of drives attached, and because Linux likes to rearrange the /dev/sdx names every other day, I use the nodes in /dev/disk/by-path to construct my filter.
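If you want to build a filter the same way, the stable names are easy to inspect; the exact entries will of course depend on your controllers:

```shell
# List the persistent, topology-based device names. Unlike /dev/sdX,
# these survive the device reshuffling that can happen across reboots.
ls -l /dev/disk/by-path/
```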

Some of my drives are connected by USB, and some of them to SATA controllers. Previously, there was one USB and one SATA controller involved, so my filter directive looked like this:

filter = [
    "a|/dev/disk/by-path/pci-0000:04:00.0.*|",  # the SATA controller
    "a|/dev/disk/by-path/...|",                 # the USB controller (path elided)
    "r/.*/" ]

However, I had moved some of the drives connected to pci-0000:04:00.0 to a new controller that would live at pci-0000:05:00.0. Because of this change, I did not expect my LVM volumes to mount properly at boot; the filter should have excluded those drives. I was surprised when I rebooted and my volumes mounted fine.

Now, this wasn’t actually a problem, since my volumes were mounting (the desired end goal), but I was concerned that my filter was not being respected. I ran through the various LVM ‘scan’ utilities (pvscan, vgscan, etc.), and they complained that they could not see all of the volumes, just as I would have expected. So why were the volumes still mounting?

As it turns out, the filter directive in lvm.conf only affects the LVM command-line utilities; it has no bearing on the device definitions that the kernel device mapper already holds. So, to restore my system to the state I expected it to be in, I did the following:

  1. Unmount all of the LVM volumes.
  2. Issue `ls -l /dev/mapper`. Note that the unexpected logical volumes are still there.
  3. Issue `dmsetup remove_all`. This is the magic command that tells the device mapper to forget all of its current device definitions (at least, those not in use). Note that it is not LVM-specific, so it will also drop other device-mapper mappings, such as dm-crypt devices. With the stale definitions gone, LVM’s configuration, filter included, will govern what gets recreated.
  4. Issue `ls -l /dev/mapper`. Note that the unexpected logical volumes have now been removed.
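The teardown steps above, as a sketch (the mount point is hypothetical):

```shell
umount /srv/storage     # hypothetical mount point for the LVM volume(s)
ls -l /dev/mapper       # the unexpected logical volumes still appear here
dmsetup remove_all      # drop all (unused) device-mapper definitions
ls -l /dev/mapper       # the logical volume nodes are gone
```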

Now, I had the system in a consistent state, where there were no device mapper nodes referring to volumes that shouldn’t be visible. At this point, I of course wanted to get the system back into a state that was both consistent and working, so I did the following:

  1. Revise the filter definition in /etc/lvm/lvm.conf. My new filter definition looks like the following:
    filter = [
        "a|/dev/disk/by-path/pci-0000:04:00.0.*|",  # original SATA controller
        "a|/dev/disk/by-path/pci-0000:05:00.0.*|",  # new SATA controller
        "a|/dev/disk/by-path/...|",                 # the USB controller (path elided)
        "r/.*/" ]

    Essentially the same as before, but now with the disk controller at pci-0000:05:00.0 added.

  2. With the filter revised, how to recreate the device mapper nodes for the newly-valid volumes? Simple: issue `vgchange -ay`. This tells LVM to make all volume groups (and their constituent logical volumes) active again. In doing so, the device definitions are re-introduced to the device mapper, and the nodes reappear in /dev/mapper.
  3. Issue `ls -l /dev/mapper`. Observe that the logical volumes have reappeared.
  4. Remount your volumes with `mount -a` or whatever is appropriate for your setup.
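And the recovery half, again as a sketch:

```shell
vgchange -ay        # activate all volume groups visible under the new filter
ls -l /dev/mapper   # the logical volume nodes are back
mount -a            # remount everything listed in /etc/fstab
```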

There you have it. I didn’t actually effect an obvious change by doing all of this; my volumes were mounted both before and after. However, I learned a bit about how the device mapper and LVM interact, and about what effect the filter directive in lvm.conf actually has. Now I feel more confident about using the LVM tools to do what I want.