[Bug 1878752] Related fix merged to charm-ceph-osd (master)
OpenStack Infra
1878752 at bugs.launchpad.net
Mon May 18 09:22:36 UTC 2020
Reviewed: https://review.opendev.org/728488
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-osd/commit/?id=b1aab5d0e12e433b714e39f78945baf16e508a41
Submitter: Zuul
Branch: master
commit b1aab5d0e12e433b714e39f78945baf16e508a41
Author: James Page <james.page at ubuntu.com>
Date: Fri May 15 17:00:25 2020 +0100
Trigger udev rescan if pv_dev disappears
Workaround for kernel bug in Ubuntu 20.04 LTS.
When using by-dname device paths with MAAS and bcache, the pvcreate
operation results in the by-dname entry for the block device being
deleted. The subsequent vgcreate then fails as the path cannot
be found.
Trigger a rescan of block devices if the pv_dev path does not
exist after the pvcreate operation.
Change-Id: If7e11f6bd1effd2d5fc2dc5abbaba6865104006f
Depends-On: Ifb16c47ae5ff316cbcfc3798de3446a3774fa012
Related-Bug: 1878752
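
In outline, the merged change retries device discovery when the physical
volume path vanishes after pvcreate. Below is a minimal sketch of that
idea; the function name and the exact udevadm invocation are illustrative
assumptions, not code taken from the commit:

    import os
    import subprocess

    def ensure_pv_dev_present(pv_dev):
        """Work around the Focal behaviour where pvcreate on a
        /dev/disk/by-dname symlink removes the symlink itself: if
        pv_dev has vanished, ask udev to re-examine block devices so
        the link is recreated before vgcreate runs."""
        subprocess.check_call(['pvcreate', pv_dev])
        if not os.path.exists(pv_dev):
            # Replay "add" events for block devices and wait for the
            # udev event queue to drain so by-dname rules re-fire.
            subprocess.check_call(['udevadm', 'trigger',
                                   '--subsystem-match=block',
                                   '--action=add'])
            subprocess.check_call(['udevadm', 'settle'])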
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1878752
Title:
vgcreate fails on /dev/disk/by-dname block devices
Status in OpenStack ceph-osd charm:
New
Status in curtin package in Ubuntu:
New
Status in lvm2 package in Ubuntu:
New
Bug description:
Ubuntu Focal, OpenStack Charmers Next Charms.
  juju run-action --wait ceph-osd/0 add-disk osd-devices=/dev/disk/by-dname/bcache2
  unit-ceph-osd-0:
    UnitId: ceph-osd/0
    id: "5"
    message: exit status 1
    results:
      ReturnCode: 1
      Stderr: |
        partx: /dev/disk/by-dname/bcache2: failed to read partition table
        Failed to find physical volume "/dev/bcache1".
        Failed to find physical volume "/dev/bcache1".
        Device /dev/disk/by-dname/bcache2 not found.
        Traceback (most recent call last):
          File "/var/lib/juju/agents/unit-ceph-osd-0/charm/actions/add-disk", line 79, in <module>
            request = add_device(request=request,
          File "/var/lib/juju/agents/unit-ceph-osd-0/charm/actions/add-disk", line 34, in add_device
            charms_ceph.utils.osdize(device_path, hookenv.config('osd-format'),
          File "lib/charms_ceph/utils.py", line 1497, in osdize
            osdize_dev(dev, osd_format, osd_journal,
          File "lib/charms_ceph/utils.py", line 1570, in osdize_dev
            cmd = _ceph_volume(dev,
          File "lib/charms_ceph/utils.py", line 1705, in _ceph_volume
            cmd.append(_allocate_logical_volume(dev=dev,
          File "lib/charms_ceph/utils.py", line 1965, in _allocate_logical_volume
            lvm.create_lvm_volume_group(vg_name, pv_dev)
          File "hooks/charmhelpers/contrib/storage/linux/lvm.py", line 104, in create_lvm_volume_group
            check_call(['vgcreate', volume_group, block_device])
          File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
            raise CalledProcessError(retcode, cmd)
        subprocess.CalledProcessError: Command '['vgcreate', 'ceph-911bc34b-4634-4ebd-a055-876b978d0b0a', '/dev/disk/by-dname/bcache2']' returned non-zero exit status 5.
      Stdout: |2
        Physical volume "/dev/disk/by-dname/bcache2" successfully created.
    status: failed
    timing:
      completed: 2020-05-15 06:04:41 +0000 UTC
      enqueued: 2020-05-15 06:04:30 +0000 UTC
      started: 2020-05-15 06:04:39 +0000 UTC
  The same action on the /dev/bcacheX device succeeds, so this looks
  like some sort of behaviour break in Ubuntu Focal.
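
  The failure can be reproduced outside the charm; here is a
  hypothetical minimal sketch of the sequence the action performs
  (the device path and volume group name are illustrative, and root
  privileges plus a bcache device with a by-dname rule are assumed):

      import os
      import subprocess

      PV_DEV = '/dev/disk/by-dname/bcache2'  # illustrative path

      subprocess.check_call(['pvcreate', PV_DEV])
      # On Focal the pvcreate above can delete the by-dname symlink,
      # so this prints False and the vgcreate that follows fails with
      # "Device ... not found" and exit status 5.
      print('by-dname path still present:', os.path.exists(PV_DEV))
      subprocess.check_call(['vgcreate', 'testvg', PV_DEV])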
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-osd/+bug/1878752/+subscriptions