On Mon, Feb 15, 2016 at 2:03 PM, Serge Hallyn <serge.hallyn@ubuntu.com> wrote:
> Quoting Axton (axton.grams@gmail.com):
> > I have an issue where lvm2 thinpool volumes are not automatically
> > activated after reboot.
> >
> > Environment:
> > Ubuntu Server 15.10, minimal install
> > UEFI/Secure Boot in use
> > Ubuntu 15.10 (GNU/Linux 4.2.0-25-generic x86_64)
> >
> > root@cluster-02:~# cat /etc/os-release
> > NAME="Ubuntu"
> > VERSION="15.10 (Wily Werewolf)"
> > ID=ubuntu
> > ID_LIKE=debian
> > PRETTY_NAME="Ubuntu 15.10"
> > VERSION_ID="15.10"
> > HOME_URL="http://www.ubuntu.com/"
> > SUPPORT_URL="http://help.ubuntu.com/"
> > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> >
> > Here is the volume config before adding the new volume:
> >
> > root@cluster-02:~# lvs -a
> >   LV         VG       Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
> >   lvswap     vgraid0  -wi-ao----  29.80g
> >   lvtmp      vgraid0  -wi-ao----  29.80g
> >   lvvartmp   vgraid0  -wi-ao----  29.80g
> >   lvhome     vgraid10 -wi-ao----  29.80g
> >   lvroot     vgraid10 -wi-ao----   7.45g
> >   lvusr      vgraid10 -wi-ao----   7.45g
> >   lvvar      vgraid10 -wi-ao----   3.72g
> >   lvvarcache vgraid10 -wi-ao---- 119.21g
> >   lvvarlib   vgraid10 -wi-ao----  32.00g
> >   lvvarlog   vgraid10 -wi-ao----  14.90g
> >
> > I add a new thinpool volume using this command:
> >
> > lvcreate -L 1T --type thin-pool --thinpool vgraid10/lvlxc
> >
> > root@cluster-02:~# lvcreate -L 1T --type thin-pool --thinpool vgraid10/lvlxc
> >   Logical volume "lvlxc" created.
> >
> > Which results in this lvs:
> >
> > root@cluster-02:~# lvs -a
> >   LV              VG       Attr       LSize   Pool  Origin Data% Meta% Move Log Cpy%Sync Convert
> >   lvswap          vgraid0  -wi-ao----  29.80g
> >   lvtmp           vgraid0  -wi-ao----  29.80g
> >   lvvartmp        vgraid0  -wi-ao----  29.80g
> >   lvhome          vgraid10 -wi-ao----  29.80g
> >   lvlxc           vgraid10 twi-a-tz--   1.00t              0.00  0.42
> >   [lvlxc_tdata]   vgraid10 Twi-ao----   1.00t
> >   [lvlxc_tmeta]   vgraid10 ewi-ao---- 128.00m
> >   [lvol0_pmspare] vgraid10 ewi------- 128.00m
> >   lvroot          vgraid10 -wi-ao----   7.45g
> >   lvusr           vgraid10 -wi-ao----   7.45g
> >   lvvar           vgraid10 -wi-ao----   3.72g
> >   lvvarcache      vgraid10 -wi-ao---- 119.21g
> >   lvvarlib        vgraid10 -wi-ao----  32.00g
> >   lvvarlog        vgraid10 -wi-ao----  14.90g
> >
> > I then create an unprivileged lxc container using the thinpool:
> >
> > root@cluster-02:~# lxc-create -B lvm --vgname=vgraid10 --thinpool=lvlxc -t download -n tmpl-centos-7-unpriv --fssize 16GB -- -d centos -r 7 -a amd64
> > File descriptor 3 (/var/lib/lxc/tmpl-centos-7-unpriv/partial) leaked on lvcreate invocation. Parent PID 9118: lxc-create
> > Logical volume "tmpl-centos-7-unpriv" created.
> > Using image from local cache
> > Unpacking the rootfs
> > ...
> >
> > The lvs output:
> > root@cluster-02:~# lvs -a
> >   LV                   VG       Attr       LSize   Pool  Origin Data% Meta% Move Log Cpy%Sync Convert
> >   lvswap               vgraid0  -wi-ao----  29.80g
> >   lvtmp                vgraid0  -wi-ao----  29.80g
> >   lvvartmp             vgraid0  -wi-ao----  29.80g
> >   lvhome               vgraid10 -wi-ao----  29.80g
> >   lvlxc                vgraid10 twi-aotz--   1.00t              0.09  0.46
> >   [lvlxc_tdata]        vgraid10 Twi-ao----   1.00t
> >   [lvlxc_tmeta]        vgraid10 ewi-ao---- 128.00m
> >   [lvol0_pmspare]      vgraid10 ewi------- 128.00m
> >   lvroot               vgraid10 -wi-ao----   7.45g
> >   lvusr                vgraid10 -wi-ao----   7.45g
> >   lvvar                vgraid10 -wi-ao----   3.72g
> >   lvvarcache           vgraid10 -wi-ao---- 119.21g
> >   lvvarlib             vgraid10 -wi-ao----  32.00g
> >   lvvarlog             vgraid10 -wi-ao----  14.90g
> >   tmpl-centos-7-unpriv vgraid10 Vwi-a-tz--  16.00g lvlxc        5.94
> >
> > Everything is ok at this point. Now, I will reboot the machine.
> >
> > root@cluster-02:~# lvs -a
> >   LV                   VG       Attr       LSize   Pool  Origin Data% Meta% Move Log Cpy%Sync Convert
> >   lvswap               vgraid0  -wi-ao----  29.80g
> >   lvtmp                vgraid0  -wi-ao----  29.80g
> >   lvvartmp             vgraid0  -wi-ao----  29.80g
> >   lvhome               vgraid10 -wi-ao----  29.80g
> >   lvlxc                vgraid10 twi---tz--   1.00t
> >   [lvlxc_tdata]        vgraid10 Twi-------   1.00t
> >   [lvlxc_tmeta]        vgraid10 ewi------- 128.00m
> >   [lvol0_pmspare]      vgraid10 ewi------- 128.00m
> >   lvroot               vgraid10 -wi-ao----   7.45g
> >   lvusr                vgraid10 -wi-ao----   7.45g
> >   lvvar                vgraid10 -wi-ao----   3.72g
> >   lvvarcache           vgraid10 -wi-ao---- 119.21g
> >   lvvarlib             vgraid10 -wi-ao----  32.00g
> >   lvvarlog             vgraid10 -wi-ao----  14.90g
> >   tmpl-centos-7-unpriv vgraid10 Vwi---tz--  16.00g lvlxc
> >
> > At this point, the thinpool and thin volume are not active. This causes
> > problems and requires that I manually activate the volumes:
> >
> > root@cluster-02:~# lvchange -ay vgraid10/lvlxc
> > root@cluster-02:~# lvs -a
> >   LV                   VG       Attr       LSize   Pool  Origin Data% Meta% Move Log Cpy%Sync Convert
> >   lvswap               vgraid0  -wi-ao----  29.80g
> >   lvtmp                vgraid0  -wi-ao----  29.80g
> >   lvvartmp             vgraid0  -wi-ao----  29.80g
> >   lvhome               vgraid10 -wi-ao----  29.80g
> >   lvlxc                vgraid10 twi-aotz--   1.00t              0.09  0.46
> >   [lvlxc_tdata]        vgraid10 Twi-ao----   1.00t
> >   [lvlxc_tmeta]        vgraid10 ewi-ao---- 128.00m
> >   [lvol0_pmspare]      vgraid10 ewi------- 128.00m
> >   lvroot               vgraid10 -wi-ao----   7.45g
> >   lvusr                vgraid10 -wi-ao----   7.45g
> >   lvvar                vgraid10 -wi-ao----   3.72g
> >   lvvarcache           vgraid10 -wi-ao---- 119.21g
> >   lvvarlib             vgraid10 -wi-ao----  32.00g
> >   lvvarlog             vgraid10 -wi-ao----  14.90g
> >   tmpl-centos-7-unpriv vgraid10 Vwi---tz--  16.00g lvlxc
> >
> > I have tried setting flags on the thinpool (vgraid10/lvlxc) as follows, to
> > no avail:
> >
> > root@cluster-02:~# lvchange -kn vgraid10/lvlxc
> > root@cluster-02:~# lvchange -ay vgraid10/lvlxc
> > root@cluster-02:~# lvchange -aye vgraid10/lvlxc
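
As an aside, whether the pool still carries the "activation skip" flag that -kn is meant to clear can be checked with something along these lines (a sketch; lv_skip_activation is a reporting field listed by 'lvs -o help' in recent lvm2, and a 'k' in the tenth Attr character marks an LV as skipped during normal activation):

    # show the activation-skip state alongside the usual attributes
    lvs -o lv_name,lv_attr,lv_skip_activation vgraid10
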
> >
> > Not sure what I'm missing here. I see this error in /var/log/syslog on
> > boot:
> >
> > Jan 29 22:47:51 cluster-02 systemd-udevd[485]: Process 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' failed with exit code 127.
> >
> > I dug into this further, and it looks like that command comes from the
> > following udev rules file:
> >
> > root@cluster-02:~# cat /lib/udev/rules.d/85-lvm2.rules
> > # This file causes block devices with LVM signatures to be automatically
> > # added to their volume group.
> > # See udev(8) for syntax
> >
> > SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
> >     RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"
> >
> > Exit code 127 from the shell means 'command not found.' I am guessing it
> > is failing to find the watershed executable; when I looked, there was no
> > watershed binary on the system:
> >
> > root@cluster-02:~# find / -name watershed
> > /usr/share/doc/watershed
> > /usr/share/initramfs-tools/hooks/watershed
>
> Interesting. lvm2 Depends: on watershed, so it certainly should have been
> installed. You didn't show 'dpkg -l | grep watershed' output, though, so I'm
> not sure whether the file was wrongly deleted on your host or whether the
> package was wrongly never installed.
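
For reference, the package state could be checked with something along these lines (a sketch):

    dpkg -l watershed                   # is the package installed at all?
    dpkg -V watershed                   # verify the files it ships are still present
    dpkg -L watershed | grep /lib/udev  # confirm it ships the udev helper
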
> > I reinstalled watershed:
> > root@cluster-02:~# apt-get install --reinstall watershed
> > ...
> > update-initramfs: deferring update (trigger activated)
> > Processing triggers for initramfs-tools (0.120ubuntu6) ...
> > update-initramfs: Generating /boot/initrd.img-4.2.0-25-generic
> > ...
> >
> > Now there is a watershed executable:
> >
> > root@cluster-02:~# find / -name watershed
> > /usr/share/doc/watershed
> > /usr/share/initramfs-tools/hooks/watershed
> > /lib/udev/watershed
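
One way to re-check the rule without another full reboot might be udevadm's test mode, which prints the RUN commands a uevent would trigger without executing them; roughly (the device path here is just an example, use one of the actual PVs):

    udevadm test --action=add /sys/class/block/sda 2>&1 | grep -i watershed
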
> >
> > Same result after a reboot:
> > root@cluster-02:~# lvs
> >   LV                   VG       Attr       LSize   Pool  Origin Data% Meta% Move Log Cpy%Sync Convert
> >   lvswap               vgraid0  -wi-ao----  29.80g
> >   lvtmp                vgraid0  -wi-ao----  29.80g
> >   lvvartmp             vgraid0  -wi-ao----  29.80g
> >   lvhome               vgraid10 -wi-ao----  29.80g
> >   lvlxc                vgraid10 twi---tz--   1.00t
> >   lvroot               vgraid10 -wi-ao----   7.45g
> >   lvusr                vgraid10 -wi-ao----   7.45g
> >   lvvar                vgraid10 -wi-ao----   3.72g
> >   lvvarcache           vgraid10 -wi-ao---- 119.21g
> >   lvvarlib             vgraid10 -wi-ao----  32.00g
> >   lvvarlog             vgraid10 -wi-ao----  14.90g
> >   tmpl-centos-7-unpriv vgraid10 Vwi---tz--  16.00g lvlxc
> >
> > Same error in /var/log/syslog:
> >
> > Jan 29 22:55:25 cluster-02 systemd-udevd[472]: Process 'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' failed with exit code 127.
> >
> > Could use some help.
>
> Maybe change the udev rule to strace that? I've Cc'd Ryan and Mike, who
> have dealt more with thinpools than I have.
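
For reference, wrapping the rule's RUN command in strace would look roughly like this (an untested sketch: it assumes strace is installed, uses the helper's absolute path to rule out lookup problems, and writes one trace per device under /run):

    SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
        RUN+="/usr/bin/strace -f -o /run/watershed-%k.trace /lib/udev/watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"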

The system has since been destroyed; it was a clean, recent install with updates applied. If my notes are accurate, I worked around this issue using these steps.

Enable LVM thin support at boot:

    root@cluster-01:~# vi /etc/initramfs-tools/hooks/thin-provisioning-tools
    #!/bin/sh

    PREREQ="lvm2"

    prereqs()
    {
        echo ""
    }

    case $1 in
    prereqs)
        prereqs
        exit 0
        ;;
    esac

    . /usr/share/initramfs-tools/hook-functions

    copy_exec /usr/sbin/thin_check
    copy_exec /usr/sbin/thin_dump
    copy_exec /usr/sbin/thin_repair
    copy_exec /usr/sbin/thin_restore
    copy_exec /sbin/dmeventd
    copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2thin.so
    copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2.so.2.02
    copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2mirror.so
    copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2snapshot.so
    copy_exec /lib/x86_64-linux-gnu/device-mapper/libdevmapper-event-lvm2raid.so
    copy_exec /lib/x86_64-linux-gnu/liblvm2cmd.so.2.02

    manual_add_modules dm_thin_pool

    root@cluster-01:~# chmod 755 /etc/initramfs-tools/hooks/thin-provisioning-tools
    root@cluster-01:~# update-initramfs -u
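
To confirm the rebuilt initramfs actually picked up the thin tools and the dm-thin module, something like this should work (a sketch; the initrd filename depends on the running kernel):

    lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'thin_check|dm-thin'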

There were enough issues with LXC on Ubuntu 15.10 that I eventually gave up and moved on to greener pastures; I can't remember whether it was this specific issue or problems creating unprivileged CentOS 7 containers.

The issue was consistently reproducible by:
1. Install Ubuntu 15.10 with only OpenSSH Server
2. Update the system
3. Install linux-tools-generic, apt-file, and thin-provisioning-tools
4. Install lxc and lockfile-progs
5. Create an LVM thinpool for the containers:
    root@cluster-02:~# lvcreate -kn -L 1T -T vgraid10/lvlxc
    root@cluster-02:~# lvchange -ay -kn -Ky vgraid10/lvlxc
6. Create a container (creating just a thin volume also reproduces it):
    root@cluster-01:~# lxc-create -B lvm --vgname=vgraid10 --thinpool=lvlxc -t centos -n tmpl-centos-6 -f /etc/lxc/privileged.conf --fssize 16GB -- --release 6
7. Reboot

Axton Grams