[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
Corey Bryant
corey.bryant at canonical.com
Wed May 29 12:17:37 UTC 2019
Thanks for all the details.
I need to confirm this, but I think the block.db and block.wal symlinks
are created as a result of 'ceph-volume lvm prepare --bluestore --data
<device> --block.wal <wal-device> --block.db <db-device>'.
That's coded in the ceph-osd charm around here:
https://opendev.org/openstack/charm-ceph-osd/src/branch/master/lib/ceph/utils.py#L1558
Can you confirm that the symlinks are ok prior to reboot? I'd like to
figure out if they are correctly set up by the charm initially.
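One way to check those symlinks before rebooting is simply 'ls -l /var/lib/ceph/osd/ceph-*/block.db /var/lib/ceph/osd/ceph-*/block.wal' (the standard bluestore OSD data directory layout; adjust the OSD ids to your host). The sketch below shows the check itself as a small helper, demonstrated on a temporary directory since a test box has no OSDs; the function name and demo paths are illustrative, not from the charm:

```shell
#!/bin/sh
# Hedged sketch: report whether a symlink exists and resolves to a
# real target (a dangling block.db/block.wal link would explain the
# OSD failing to start after reboot).
check_symlink() {
    link="$1"
    if [ -L "$link" ] && [ -e "$link" ]; then
        # -L: it is a symlink; -e: the dereferenced target exists
        echo "OK: $link -> $(readlink "$link")"
    else
        echo "BROKEN: $link"
    fi
}

# Demo on a temp dir standing in for /var/lib/ceph/osd/ceph-<id>/.
d=$(mktemp -d)
ln -s /dev/null "$d/block.db"            # target exists -> OK
ln -s /dev/nonexistent0 "$d/block.wal"   # dangling -> BROKEN
check_symlink "$d/block.db"
check_symlink "$d/block.wal"
rm -rf "$d"
```

On a real OSD host you would point check_symlink at the actual block.db and block.wal paths instead of the demo links.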
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1828617
Title:
Hosts randomly 'losing' disks, breaking ceph-osd service enumeration
Status in systemd package in Ubuntu:
New
Bug description:
Ubuntu 18.04.2 Ceph deployment.
Ceph OSD devices utilizing LVM volumes pointing to udev-based physical devices.
The LVM module is supposed to create PVs from devices using the links in the /dev/disk/by-dname/ directory, which are created by udev.
However, on reboot it sometimes happens (not always, so it looks like a race condition) that the Ceph services cannot start, and pvdisplay doesn't show any volumes. The /dev/disk/by-dname/ directory nevertheless has all the necessary device links by the end of the boot process.
The behaviour can be fixed manually by running "# /sbin/lvm pvscan
--cache --activate ay /dev/nvme0n1" to re-activate the LVM
components, after which the services can be started.
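As a stopgap until the race is fixed, the manual workaround can be looped over the affected devices. This is a minimal sketch: only /dev/nvme0n1 comes from the report, the second device is a placeholder, and DRY_RUN keeps the sketch from touching anything:

```shell
#!/bin/sh
# Hedged sketch of the manual workaround from the bug report:
# re-run pvscan autoactivation for each device whose PV went missing.
DRY_RUN=1    # set to 0 on a real host, run as root

for dev in /dev/nvme0n1 /dev/nvme1n1; do   # device list is an example
    cmd="/sbin/lvm pvscan --cache --activate ay $dev"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $cmd"
    else
        $cmd    # re-activates the PV so ceph-osd services can start
    fi
done
```

After the PVs reappear in pvdisplay, the ceph-osd units can be started normally.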
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1828617/+subscriptions