LVM2-ThinPool Issue with activation on reboot
Axton
axton.grams@gmail.com
Sat Jan 30 04:57:35 UTC 2016
I have an issue where LVM2 thin pool logical volumes are not automatically
activated after reboot.
Environment:
Ubuntu Server 15.10, minimal install
UEFI/Secure Boot in use
Ubuntu 15.10 (GNU/Linux 4.2.0-25-generic x86_64)
root@cluster-02:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="15.10 (Wily Werewolf)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 15.10"
VERSION_ID="15.10"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
Here is the volume config before adding the new volume:
root@cluster-02:~# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvswap vgraid0 -wi-ao---- 29.80g
lvtmp vgraid0 -wi-ao---- 29.80g
lvvartmp vgraid0 -wi-ao---- 29.80g
lvhome vgraid10 -wi-ao---- 29.80g
lvroot vgraid10 -wi-ao---- 7.45g
lvusr vgraid10 -wi-ao---- 7.45g
lvvar vgraid10 -wi-ao---- 3.72g
lvvarcache vgraid10 -wi-ao---- 119.21g
lvvarlib vgraid10 -wi-ao---- 32.00g
lvvarlog vgraid10 -wi-ao---- 14.90g
I add a new thin pool volume using this command:
root@cluster-02:~# lvcreate -L 1T --type thin-pool --thinpool vgraid10/lvlxc
Logical volume "lvlxc" created.
Which results in this lvs output:
root@cluster-02:~# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvswap vgraid0 -wi-ao---- 29.80g
lvtmp vgraid0 -wi-ao---- 29.80g
lvvartmp vgraid0 -wi-ao---- 29.80g
lvhome vgraid10 -wi-ao---- 29.80g
lvlxc vgraid10 twi-a-tz-- 1.00t 0.00 0.42
[lvlxc_tdata] vgraid10 Twi-ao---- 1.00t
[lvlxc_tmeta] vgraid10 ewi-ao---- 128.00m
[lvol0_pmspare] vgraid10 ewi------- 128.00m
lvroot vgraid10 -wi-ao---- 7.45g
lvusr vgraid10 -wi-ao---- 7.45g
lvvar vgraid10 -wi-ao---- 3.72g
lvvarcache vgraid10 -wi-ao---- 119.21g
lvvarlib vgraid10 -wi-ao---- 32.00g
lvvarlog vgraid10 -wi-ao---- 14.90g
I then create an unprivileged LXC container using the thin pool:
root@cluster-02:~# lxc-create -B lvm --vgname=vgraid10 --thinpool=lvlxc -t download -n tmpl-centos-7-unpriv --fssize 16GB -- -d centos -r 7 -a amd64
File descriptor 3 (/var/lib/lxc/tmpl-centos-7-unpriv/partial) leaked on lvcreate invocation. Parent PID 9118: lxc-create
Logical volume "tmpl-centos-7-unpriv" created.
Using image from local cache
Unpacking the rootfs
...
The lvs output:
root@cluster-02:~# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvswap vgraid0 -wi-ao---- 29.80g
lvtmp vgraid0 -wi-ao---- 29.80g
lvvartmp vgraid0 -wi-ao---- 29.80g
lvhome vgraid10 -wi-ao---- 29.80g
lvlxc vgraid10 twi-aotz-- 1.00t 0.09 0.46
[lvlxc_tdata] vgraid10 Twi-ao---- 1.00t
[lvlxc_tmeta] vgraid10 ewi-ao---- 128.00m
[lvol0_pmspare] vgraid10 ewi------- 128.00m
lvroot vgraid10 -wi-ao---- 7.45g
lvusr vgraid10 -wi-ao---- 7.45g
lvvar vgraid10 -wi-ao---- 3.72g
lvvarcache vgraid10 -wi-ao---- 119.21g
lvvarlib vgraid10 -wi-ao---- 32.00g
lvvarlog vgraid10 -wi-ao---- 14.90g
tmpl-centos-7-unpriv vgraid10 Vwi-a-tz-- 16.00g lvlxc 5.94
Everything is OK at this point. Now I reboot the machine.
root@cluster-02:~# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvswap vgraid0 -wi-ao---- 29.80g
lvtmp vgraid0 -wi-ao---- 29.80g
lvvartmp vgraid0 -wi-ao---- 29.80g
lvhome vgraid10 -wi-ao---- 29.80g
lvlxc vgraid10 twi---tz-- 1.00t
[lvlxc_tdata] vgraid10 Twi------- 1.00t
[lvlxc_tmeta] vgraid10 ewi------- 128.00m
[lvol0_pmspare] vgraid10 ewi------- 128.00m
lvroot vgraid10 -wi-ao---- 7.45g
lvusr vgraid10 -wi-ao---- 7.45g
lvvar vgraid10 -wi-ao---- 3.72g
lvvarcache vgraid10 -wi-ao---- 119.21g
lvvarlib vgraid10 -wi-ao---- 32.00g
lvvarlog vgraid10 -wi-ao---- 14.90g
tmpl-centos-7-unpriv vgraid10 Vwi---tz-- 16.00g lvlxc
At this point, the thin pool and thin volume are not active. This causes
problems and requires that I manually activate the volumes:
root@cluster-02:~# lvchange -ay vgraid10/lvlxc
root@cluster-02:~# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvswap vgraid0 -wi-ao---- 29.80g
lvtmp vgraid0 -wi-ao---- 29.80g
lvvartmp vgraid0 -wi-ao---- 29.80g
lvhome vgraid10 -wi-ao---- 29.80g
lvlxc vgraid10 twi-aotz-- 1.00t 0.09 0.46
[lvlxc_tdata] vgraid10 Twi-ao---- 1.00t
[lvlxc_tmeta] vgraid10 ewi-ao---- 128.00m
[lvol0_pmspare] vgraid10 ewi------- 128.00m
lvroot vgraid10 -wi-ao---- 7.45g
lvusr vgraid10 -wi-ao---- 7.45g
lvvar vgraid10 -wi-ao---- 3.72g
lvvarcache vgraid10 -wi-ao---- 119.21g
lvvarlib vgraid10 -wi-ao---- 32.00g
lvvarlog vgraid10 -wi-ao---- 14.90g
tmpl-centos-7-unpriv vgraid10 Vwi---tz-- 16.00g lvlxc
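Note in the output above that activating only the pool leaves the thin volume inactive; `vgchange -ay vgraid10` brings up every LV in the VG, thin volumes included. As a stopgap until boot-time activation is fixed, a oneshot unit like the following could run that at boot (the unit name and wiring here are my own sketch, not a packaged solution):

```ini
# /etc/systemd/system/lvm-activate-vgraid10.service (hypothetical name)
[Unit]
Description=Activate all LVs in vgraid10 (workaround for missing auto-activation)
DefaultDependencies=no
After=systemd-udev-settle.service
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/lvm vgchange -ay vgraid10

[Install]
WantedBy=local-fs.target
```

Enabled with `systemctl enable lvm-activate-vgraid10.service`. This papers over the problem rather than fixing the udev path, but it should get the containers back after a reboot.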
I have tried setting activation flags on the thin pool (vgraid10/lvlxc) as
follows, to no avail:
root@cluster-02:~# lvchange -kn vgraid10/lvlxc
root@cluster-02:~# lvchange -ay vgraid10/lvlxc
root@cluster-02:~# lvchange -aye vgraid10/lvlxc
I'm not sure what I'm missing here. I see this error in /var/log/syslog on boot:
Jan 29 22:47:51 cluster-02 systemd-udevd[485]: Process 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' failed with exit code 127.
Digging further, it looks like this is generated from the following udev
rules file:
root@cluster-02:~# cat /lib/udev/rules.d/85-lvm2.rules
# This file causes block devices with LVM signatures to be automatically
# added to their volume group.
# See udev(8) for syntax
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"
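Since the RUN+= program is given without a leading slash, udev resolves it relative to /lib/udev, so the rule depends on a helper existing at /lib/udev/watershed. A quick check (my own sketch):

```shell
# udev resolves RUN+= programs without a leading slash relative to
# /lib/udev, so this rule needs /lib/udev/watershed to exist.
if [ -x /lib/udev/watershed ]; then
    echo "watershed helper: present"
else
    echo "watershed helper: missing"
fi
```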
Exit code 127 is the shell's 'command not found' status. I am guessing it
is returning this for the watershed executable, since I did not see one on
the system:
root@cluster-02:~# find / -name watershed
/usr/share/doc/watershed
/usr/share/initramfs-tools/hooks/watershed
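To confirm what status 127 means, a quick illustration (not specific to udev, and the command name below is deliberately made up):

```shell
# A POSIX shell exits with status 127 when the requested command
# cannot be found anywhere on PATH.
sh -c 'this-command-does-not-exist-xyz' 2>/dev/null
echo "exit status: $?"
```

which prints `exit status: 127`, matching the udev failure above.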
I reinstalled watershed:
root@cluster-02:~# apt-get install --reinstall watershed
...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.120ubuntu6) ...
update-initramfs: Generating /boot/initrd.img-4.2.0-25-generic
...
Now there is a watershed executable:
root@cluster-02:~# find / -name watershed
/usr/share/doc/watershed
/usr/share/initramfs-tools/hooks/watershed
/lib/udev/watershed
Same result after a reboot:
root@cluster-02:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvswap vgraid0 -wi-ao---- 29.80g
lvtmp vgraid0 -wi-ao---- 29.80g
lvvartmp vgraid0 -wi-ao---- 29.80g
lvhome vgraid10 -wi-ao---- 29.80g
lvlxc vgraid10 twi---tz-- 1.00t
lvroot vgraid10 -wi-ao---- 7.45g
lvusr vgraid10 -wi-ao---- 7.45g
lvvar vgraid10 -wi-ao---- 3.72g
lvvarcache vgraid10 -wi-ao---- 119.21g
lvvarlib vgraid10 -wi-ao---- 32.00g
lvvarlog vgraid10 -wi-ao---- 14.90g
tmpl-centos-7-unpriv vgraid10 Vwi---tz-- 16.00g lvlxc
Same error in /var/log/syslog:
Jan 29 22:55:25 cluster-02 systemd-udevd[472]: Process 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' failed with exit code 127.
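One more thing I plan to check: whether the rebuilt initramfs actually contains the watershed helper, since the rule also fires during early boot. A sketch (paths depend on the running kernel):

```shell
# List the initramfs contents for the running kernel and look for the
# watershed helper that 85-lvm2.rules invokes at early boot.
img="/boot/initrd.img-$(uname -r)"
if [ -r "$img" ]; then
    lsinitramfs "$img" | grep -i watershed || echo "watershed not found in $img"
else
    echo "no readable initramfs image at $img"
fi
```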
Could use some help.
Thanks,
Axton Grams