
LVM and device mapper

So I ran into a small issue with LVM last night, and I thought I’d share my solution. I am by no means an expert on LVM, so whenever I encounter a problem, it is a learning experience.

Last night I moved around the drives connected to my file server, so that some of them were now connected to a different SCSI interface. Now, normally I filter the devices that LVM sees (through the filter directive in lvm.conf), so that there are no mishaps about which volumes LVM will let me touch. Because I have a lot of drives attached, and because Linux likes to rearrange the /dev/sdx names every other day, I use the nodes in /dev/disk/by-path to construct my filter.

Some of my drives are connected by USB, and some of them to SATA controllers. Previously, there was one USB and one SATA controller involved, so my filter directive looked like this:

filter = [
  "a|/dev/disk/by-path/pci-0000:04:00.0.*|",    # the SATA controller
  "a|/dev/disk/by-path/...|",                   # the USB controller (path elided)
  "r/.*/" ]
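The filter is evaluated with first-match-wins semantics: each device is tested against the patterns in order, an "a" pattern accepts it, an "r" pattern rejects it, and a device no pattern matches is accepted. Here is a minimal sketch of that logic; this is a toy reimplementation for illustration, not LVM's actual code, and it simplifies the pattern syntax to a leading action character plus a delimited regex:

```python
import re

def lvm_filter_decision(device_path, filter_rules):
    """First-match-wins evaluation mirroring lvm.conf filter semantics:
    'a' patterns accept, 'r' patterns reject, and a device that matches
    no rule at all is accepted."""
    for rule in filter_rules:
        action, delim = rule[0], rule[1]
        pattern = rule[2:rule.rindex(delim)]   # strip the trailing delimiter
        if re.search(pattern, device_path):
            return action == "a"
    return True  # no rule matched: LVM accepts the device

old_filter = [
    "a|/dev/disk/by-path/pci-0000:04:00.0.*|",  # the old SATA controller
    "r/.*/",                                    # reject everything else
]

print(lvm_filter_decision("/dev/disk/by-path/pci-0000:04:00.0-ata-1", old_filter))  # True
print(lvm_filter_decision("/dev/disk/by-path/pci-0000:05:00.0-ata-1", old_filter))  # False
```

With the old filter, a drive that moved to the controller at pci-0000:05:00.0 falls through to the reject-all rule, which is exactly why I expected the volumes on it not to come up.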

However, I moved some of the drives connected to pci-0000:04:00.0 to a new controller that would live at pci-0000:05:00.0. Because of this change, I did not expect my LVM volumes to mount properly at boot, since the filter should now exclude those drives. I was surprised when I rebooted and my volumes mounted fine.

Now, this wasn’t actually a problem, since my volumes were mounting (which is the desired end goal), but I was concerned that my filter was not being respected. I ran through the various LVM ‘scan’ utilities (pvscan, vgscan, etc.) and, as I would have expected, they complained that they could not see all of the volumes. So why were the volumes still mounting?

As it turns out, it seems that the filter directive in lvm.conf only affects the lvm command-line utilities, but has no bearing on which LVM volumes the kernel device mapper actually sees. So, to restore my system to the state that I expected it to be in, I did the following:

  1. Unmount all of the LVM volumes.
  2. Issue `ls -l /dev/mapper`. Note that the unexpected logical volumes are still there.
  3. Issue `dmsetup remove_all`. This is the magic command that tells the device mapper to drop every mapping that is not currently in use. With the stale nodes gone, the next time LVM activates volumes it will read its configuration and take the filter into account.
  4. Issue `ls -l /dev/mapper`. Note that the unexpected logical volumes have now been removed.
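The interaction the steps above rely on can be modeled in a few lines. This is a toy sketch, not real LVM or kernel code: the class and function names are hypothetical, `remove_all` stands in for what `dmsetup remove_all` does to unused mappings, and `lvm_scan` stands in for pvscan honoring the filter.

```python
class DeviceMapper:
    """Toy stand-in for the kernel's device-mapper table: it keeps whatever
    mappings were handed to it, regardless of what lvm.conf now says."""
    def __init__(self):
        self.nodes = set()        # names as they would appear under /dev/mapper

    def add(self, name):
        self.nodes.add(name)

    def remove_all(self):         # what `dmsetup remove_all` does to unused maps
        self.nodes.clear()

def lvm_scan(all_devices, accept):
    """Toy pvscan: only devices passing the filter are visible to the tools."""
    return [d for d in all_devices if accept(d)]

dm = DeviceMapper()
dm.add("vg0-data")                # node created before the filter changed

accept = lambda d: "pci-0000:04:00.0" in d
devices = ["/dev/disk/by-path/pci-0000:05:00.0-ata-1"]

print(lvm_scan(devices, accept))  # []: the tools can't see the moved drive...
print(dm.nodes)                   # {'vg0-data'}: ...but its dm node persists

dm.remove_all()
print(dm.nodes)                   # set(): now the table matches the filter
```

The point of the model: the filter only gates what the userspace tools scan, while the device-mapper table is independent state that persists until something explicitly removes it.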

Now, I had the system in a consistent state, where there were no device mapper nodes referring to volumes that shouldn’t be visible. At this point, I of course wanted to get the system back into a state that was both consistent and working, so I did the following:

  1. Revise the filter definition in /etc/lvm/lvm.conf. My new filter definition looks like the following:
    filter = [
      "a|/dev/disk/by-path/pci-0000:04:00.0.*|",    # the original SATA controller
      "a|/dev/disk/by-path/pci-0000:05:00.0.*|",    # the new SATA controller
      "a|/dev/disk/by-path/...|",                   # the USB controller (path elided)
      "r/.*/" ]

    Essentially the same as before, but now with the disk controller at pci-0000:05:00.0 accepted as well.

  2. With the filter revised, how to recreate the device mapper nodes for the newly-valid volumes? Simple… Issue `vgchange -ay`. This tells LVM to make all volume groups (and their constituent logical volumes) active again. In doing so, the device definitions will be re-introduced to the device mapper, and the nodes will reappear in /dev/mapper.
  3. Issue `ls -l /dev/mapper`. Observe that the logical volumes have reappeared.
  4. Remount your volumes with `mount -a` or whatever is appropriate for your setup.
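One detail worth knowing when checking `ls -l /dev/mapper` in the steps above: LVM names the nodes it creates there `<vg>-<lv>`, with any literal dash in a volume group or logical volume name doubled so the separator stays unambiguous. A quick sketch of that rule (the volume names are hypothetical):

```python
def dm_node_name(vg, lv):
    """Predict the /dev/mapper node name for a logical volume: the pattern
    is <vg>-<lv>, with literal dashes in either name escaped as '--'."""
    esc = lambda s: s.replace("-", "--")
    return f"{esc(vg)}-{esc(lv)}"

print(dm_node_name("vg0", "media"))     # vg0-media
print(dm_node_name("my-vg", "backup"))  # my--vg-backup
```

This is handy when a node like `my--vg-backup` looks mangled at first glance: the doubled dashes are deliberate, not corruption.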

There you have it. I didn’t actually effect an obvious change by doing all of this: my volumes were mounted both before and after. However, I learned a bit about how the device mapper and LVM interact, and I learned what effect the filter directive in lvm.conf actually has. Now I feel more confident about using the LVM tools to do what I want.