LVM2-ThinPool Issue with activation on reboot
Axton
axton.grams at gmail.com
Tue Feb 16 04:39:49 UTC 2016
On Mon, Feb 15, 2016 at 2:03 PM, Serge Hallyn <serge.hallyn at ubuntu.com>
wrote:
> Quoting Axton (axton.grams at gmail.com):
> > I have an issue where lvm2 thinpool volume groups are not automatically
> > activated after reboot.
> >
> > Environmentals:
> > Ubuntu Server 15.10, minimal install
> > UEFI/Secure Boot in use
> > Ubuntu 15.10 (GNU/Linux 4.2.0-25-generic x86_64)
> >
> > root at cluster-02:~# cat /etc/os-release
> > NAME="Ubuntu"
> > VERSION="15.10 (Wily Werewolf)"
> > ID=ubuntu
> > ID_LIKE=debian
> > PRETTY_NAME="Ubuntu 15.10"
> > VERSION_ID="15.10"
> > HOME_URL="http://www.ubuntu.com/"
> > SUPPORT_URL="http://help.ubuntu.com/"
> > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> >
> >
> > Here is the volume config before adding the new volume:
> >
> > root at cluster-02:~# lvs -a
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > lvswap vgraid0 -wi-ao---- 29.80g
> > lvtmp vgraid0 -wi-ao---- 29.80g
> > lvvartmp vgraid0 -wi-ao---- 29.80g
> > lvhome vgraid10 -wi-ao---- 29.80g
> > lvroot vgraid10 -wi-ao---- 7.45g
> > lvusr vgraid10 -wi-ao---- 7.45g
> > lvvar vgraid10 -wi-ao---- 3.72g
> > lvvarcache vgraid10 -wi-ao---- 119.21g
> > lvvarlib vgraid10 -wi-ao---- 32.00g
> > lvvarlog vgraid10 -wi-ao---- 14.90g
> >
> > I add a new thinpool volume using this command:
> >
> > lvcreate -L 1T --type thin-pool --thinpool vgraid10/lvlxc
> >
> > root at cluster-02:~# lvcreate -L 1T --type thin-pool --thinpool vgraid10/lvlxc
> > Logical volume "lvlxc" created.
> >
> > Which results in this lvs:
> >
> > root at cluster-02:~# lvs -a
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > lvswap vgraid0 -wi-ao---- 29.80g
> > lvtmp vgraid0 -wi-ao---- 29.80g
> > lvvartmp vgraid0 -wi-ao---- 29.80g
> > lvhome vgraid10 -wi-ao---- 29.80g
> > lvlxc vgraid10 twi-a-tz-- 1.00t 0.00 0.42
> > [lvlxc_tdata] vgraid10 Twi-ao---- 1.00t
> > [lvlxc_tmeta] vgraid10 ewi-ao---- 128.00m
> > [lvol0_pmspare] vgraid10 ewi------- 128.00m
> > lvroot vgraid10 -wi-ao---- 7.45g
> > lvusr vgraid10 -wi-ao---- 7.45g
> > lvvar vgraid10 -wi-ao---- 3.72g
> > lvvarcache vgraid10 -wi-ao---- 119.21g
> > lvvarlib vgraid10 -wi-ao---- 32.00g
> > lvvarlog vgraid10 -wi-ao---- 14.90g
> >
> > I then create an unprivileged lxc container using the thinpool:
> >
> > root at cluster-02:~# lxc-create -B lvm --vgname=vgraid10 --thinpool=lvlxc -t download -n tmpl-centos-7-unpriv --fssize 16GB -- -d centos -r 7 -a amd64
> > File descriptor 3 (/var/lib/lxc/tmpl-centos-7-unpriv/partial) leaked on
> > lvcreate invocation. Parent PID 9118: lxc-create
> > Logical volume "tmpl-centos-7-unpriv" created.
> > Using image from local cache
> > Unpacking the rootfs
> > ...
> >
> > The lvs output:
> > root at cluster-02:~# lvs -a
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > lvswap vgraid0 -wi-ao---- 29.80g
> > lvtmp vgraid0 -wi-ao---- 29.80g
> > lvvartmp vgraid0 -wi-ao---- 29.80g
> > lvhome vgraid10 -wi-ao---- 29.80g
> > lvlxc vgraid10 twi-aotz-- 1.00t 0.09 0.46
> > [lvlxc_tdata] vgraid10 Twi-ao---- 1.00t
> > [lvlxc_tmeta] vgraid10 ewi-ao---- 128.00m
> > [lvol0_pmspare] vgraid10 ewi------- 128.00m
> > lvroot vgraid10 -wi-ao---- 7.45g
> > lvusr vgraid10 -wi-ao---- 7.45g
> > lvvar vgraid10 -wi-ao---- 3.72g
> > lvvarcache vgraid10 -wi-ao---- 119.21g
> > lvvarlib vgraid10 -wi-ao---- 32.00g
> > lvvarlog vgraid10 -wi-ao---- 14.90g
> > tmpl-centos-7-unpriv vgraid10 Vwi-a-tz-- 16.00g lvlxc 5.94
> >
> >
> > Everything is ok at this point. Now, I will reboot the machine.
> >
> > root at cluster-02:~# lvs -a
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > lvswap vgraid0 -wi-ao---- 29.80g
> > lvtmp vgraid0 -wi-ao---- 29.80g
> > lvvartmp vgraid0 -wi-ao---- 29.80g
> > lvhome vgraid10 -wi-ao---- 29.80g
> > lvlxc vgraid10 twi---tz-- 1.00t
> > [lvlxc_tdata] vgraid10 Twi------- 1.00t
> > [lvlxc_tmeta] vgraid10 ewi------- 128.00m
> > [lvol0_pmspare] vgraid10 ewi------- 128.00m
> > lvroot vgraid10 -wi-ao---- 7.45g
> > lvusr vgraid10 -wi-ao---- 7.45g
> > lvvar vgraid10 -wi-ao---- 3.72g
> > lvvarcache vgraid10 -wi-ao---- 119.21g
> > lvvarlib vgraid10 -wi-ao---- 32.00g
> > lvvarlog vgraid10 -wi-ao---- 14.90g
> > tmpl-centos-7-unpriv vgraid10 Vwi---tz-- 16.00g lvlxc
> >
> >
> > At this point, the logical volumes (thin pool and thin volume) are not
> > active. This causes issues and requires that I manually activate the
> > volumes:
> >
> > root at cluster-02:~# lvchange -ay vgraid10/lvlxc
> > root at cluster-02:~# lvs -a
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > lvswap vgraid0 -wi-ao---- 29.80g
> > lvtmp vgraid0 -wi-ao---- 29.80g
> > lvvartmp vgraid0 -wi-ao---- 29.80g
> > lvhome vgraid10 -wi-ao---- 29.80g
> > lvlxc vgraid10 twi-aotz-- 1.00t 0.09 0.46
> > [lvlxc_tdata] vgraid10 Twi-ao---- 1.00t
> > [lvlxc_tmeta] vgraid10 ewi-ao---- 128.00m
> > [lvol0_pmspare] vgraid10 ewi------- 128.00m
> > lvroot vgraid10 -wi-ao---- 7.45g
> > lvusr vgraid10 -wi-ao---- 7.45g
> > lvvar vgraid10 -wi-ao---- 3.72g
> > lvvarcache vgraid10 -wi-ao---- 119.21g
> > lvvarlib vgraid10 -wi-ao---- 32.00g
> > lvvarlog vgraid10 -wi-ao---- 14.90g
> > tmpl-centos-7-unpriv vgraid10 Vwi---tz-- 16.00g lvlxc
> >
> > I have tried setting flags on the thinpool (vgraid10/lvlxc) as follows, to
> > no avail:
> >
> > root at cluster-02:~# lvchange -kn vgraid10/lvlxc
> > root at cluster-02:~# lvchange -ay vgraid10/lvlxc
> > root at cluster-02:~# lvchange -aye vgraid10/lvlxc
> >
> > Not sure what I'm missing here. I see this error in /var/log/syslog on
> > boot:
> >
> > Jan 29 22:47:51 cluster-02 systemd-udevd[485]: Process 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' failed with exit code 127.
> >
> > I tried digging further into this, and it looks like this is generated from
> > the following udev file:
> >
> > root at cluster-02:~# cat /lib/udev/rules.d/85-lvm2.rules
> > # This file causes block devices with LVM signatures to be automatically
> > # added to their volume group.
> > # See udev(8) for syntax
> >
> > SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
> >         RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"
> >
> > It looks like exit code 127 means 'command not found.' I am guessing that it
> > is returned because the watershed executable is missing. I looked and did not
> > see an executable for watershed:
> >
> > root at cluster-02:~# find / -name watershed
> > /usr/share/doc/watershed
> > /usr/share/initramfs-tools/hooks/watershed
>
> Interesting. lvm2 Depends: on watershed, so it certainly should
> have been installed. You didn't show dpkg -l | grep watershed
> output, so I'm not sure whether the file got wrongly deleted on
> your host, or whether the package was wrongly not installed.
>
> > I reinstalled watershed:
> > root at cluster-02:~# apt-get install --reinstall watershed
> > ...
> > update-initramfs: deferring update (trigger activated)
> > Processing triggers for initramfs-tools (0.120ubuntu6) ...
> > update-initramfs: Generating /boot/initrd.img-4.2.0-25-generic
> > ...
> >
> > Now there is a watershed executable:
> >
> > root at cluster-02:~# find / -name watershed
> > /usr/share/doc/watershed
> > /usr/share/initramfs-tools/hooks/watershed
> > /lib/udev/watershed
> >
> > Same result after a reboot:
> > root at cluster-02:~# lvs
> > LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> > lvswap vgraid0 -wi-ao---- 29.80g
> > lvtmp vgraid0 -wi-ao---- 29.80g
> > lvvartmp vgraid0 -wi-ao---- 29.80g
> > lvhome vgraid10 -wi-ao---- 29.80g
> > lvlxc vgraid10 twi---tz-- 1.00t
> > lvroot vgraid10 -wi-ao---- 7.45g
> > lvusr vgraid10 -wi-ao---- 7.45g
> > lvvar vgraid10 -wi-ao---- 3.72g
> > lvvarcache vgraid10 -wi-ao---- 119.21g
> > lvvarlib vgraid10 -wi-ao---- 32.00g
> > lvvarlog vgraid10 -wi-ao---- 14.90g
> > tmpl-centos-7-unpriv vgraid10 Vwi---tz-- 16.00g lvlxc
> >
> > Same error in /var/log/syslog:
> >
> > Jan 29 22:55:25 cluster-02 systemd-udevd[472]: Process 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' failed with exit code 127.
> >
> > Could use some help.
>
> Maybe change the udev rule to strace that? Cc'd Ryan and Mike, who have
> dealt more with thinpools than I have.
>
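For anyone who can still reproduce this, the trace Serge suggests could
probably be captured by temporarily editing the rule, roughly like the
following (an untested sketch; it assumes strace is installed at
/usr/bin/strace and writes the per-process trace files under /run):

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
        RUN+="/usr/bin/strace -f -ff -o /run/watershed-trace watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"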
The system has since been destroyed. It was a clean/recent install with
updates. If my notes are accurate, I worked around this issue using these
steps:
Enable lvm thin support at boot:
root at cluster-01:~# vi /etc/initramfs-tools/hooks/thin-provisioning-tools
#!/bin/sh
# initramfs-tools hook: copy the thin-provisioning tools, dmeventd and its
# LVM2 event plugins, and the dm_thin_pool module into the initramfs.
PREREQ="lvm2"
prereqs()
{
    echo ""
}
case $1 in
    prereqs)
        prereqs
        exit 0
        ;;
esac
. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/sbin/thin_check
copy_exec /usr/sbin/thin_dump
copy_exec /usr/sbin/thin_repair
copy_exec /usr/sbin/thin_restore
copy_exec /sbin/dmeventd
copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2thin.so
copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2.so.2.02
copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2mirror.so
copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2snapshot.so
copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2raid.so
copy_exec /lib/x86_64-linux-gnu/liblvm2cmd.so.2.02
manual_add_modules dm_thin_pool
root at cluster-01:~# chmod 755 /etc/initramfs-tools/hooks/thin-provisioning-tools
root at cluster-01:~# update-initramfs -u
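To sanity-check that the hook actually made it into the new image, the
initramfs contents can be listed with lsinitramfs (part of initramfs-tools);
if the hook ran, thin_check and the dm-thin-pool module should appear in the
output (adjust the image name for the running kernel):

root at cluster-01:~# lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'thin_|dm-thin'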
Issues with LXC on Ubuntu 15.10 were frequent enough that I gave up and moved
on to greener pastures. I can't remember if it was this specific issue or if I
had problems creating unprivileged CentOS 7 containers.
The issue was consistently reproducible by:
1. Install Ubuntu 15.10 with only OpenSSH Server
2. Update the system
3. Install linux-tools-generic apt-file thin-provisioning-tools
4. Install lxc lockfile-progs
5. Create LVM Thinpool for containers
root at cluster-02:~# lvcreate -kn -L 1T -T vgraid10/lvlxc
root at cluster-02:~# lvchange -ay -kn -Ky vgraid10/lvlxc
6. Create a container (could also just create a thin volume)
root at cluster-01:~# lxc-create -B lvm --vgname=vgraid10 --thinpool=lvlxc -t centos -n tmpl-centos-6 -f /etc/lxc/privileged.conf --fssize 16GB -- --release 6
7. Reboot
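If rebuilding the initramfs is not enough on a given setup, another possible
stopgap (untested; the unit name and the Before=lxc.service ordering are only
my guesses) is a one-shot systemd unit that reactivates the volume group
before the containers start:

# /etc/systemd/system/lvm-thin-activate.service (hypothetical)
[Unit]
Description=Activate LVM thin pool volumes skipped at boot
Before=lxc.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/lvm vgchange -ay vgraid10

[Install]
WantedBy=multi-user.target

and then systemctl enable lvm-thin-activate.service.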
Axton Grams