[Bug 1828617] Re: Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

Corey Bryant corey.bryant at canonical.com
Mon Jun 17 19:08:23 UTC 2019


@David, thanks for the update. We could really use some testing of the
currently proposed fix if you have a chance; it's available in the PPA
mentioned above. The new code waits for wal/db devices to arrive and
provides environment variables for adjusting the wait times - see
http://docs.ceph.com/docs/mimic/ceph-volume/systemd/#failure-and-retries.
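For reference, the retry behaviour on that page is controlled by the
CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL environment
variables. A sketch of a systemd drop-in that raises them follows; the
values 60 and 15 are arbitrary examples (the documented defaults are 30
tries at 5-second intervals):

```shell
# Sketch: write a systemd drop-in raising ceph-volume's retry limits.
# The target path below is a placeholder so the file can be inspected
# first; on a real host install it as
# /etc/systemd/system/ceph-volume@.service.d/override.conf and run
# "systemctl daemon-reload" afterwards.
DROPIN="${DROPIN:-/tmp/ceph-volume-override.conf}"
cat > "$DROPIN" <<'EOF'
[Service]
# Example values: 60 attempts, 15 seconds apart (defaults: 30 / 5)
Environment=CEPH_VOLUME_SYSTEMD_TRIES=60
Environment=CEPH_VOLUME_SYSTEMD_INTERVAL=15
EOF
```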

As for the pvscan issue, I don't think that is related to ceph.

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1828617

Title:
  Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

Status in ceph package in Ubuntu:
  In Progress

Bug description:
  Ubuntu 18.04.2 Ceph deployment.

  The Ceph OSD devices use LVM volumes that point at udev-based physical
  devices. The LVM module is supposed to create PVs from the devices via
  the links in the /dev/disk/by-dname/ folder, which are created by udev.
  However, on reboot it sometimes happens (not always; it looks like a
  race condition) that the Ceph services cannot start and pvdisplay shows
  no volumes at all, even though /dev/disk/by-dname/ contains all the
  necessary device links by the end of the boot process.

  The behaviour can be fixed manually by running "/sbin/lvm pvscan
  --cache --activate ay /dev/nvme0n1" (as root) to re-activate the LVM
  components; the services can then be started.
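  The manual workaround can be wrapped in a small script. This is only a
  sketch: the device path /dev/nvme0n1 is taken from the report above,
  and ceph-osd.target is the standard unit that groups all OSD instances
  on a host - adjust both for the affected machine:

```shell
# Sketch of the manual recovery described above; written to /tmp here
# purely so it can be reviewed before being run as root.
cat > /tmp/recover-osd-pvs.sh <<'EOF'
#!/bin/bash
set -euo pipefail

# Physical device backing the missing PV (default from the bug report)
DEV="${1:-/dev/nvme0n1}"

# Re-scan the device and activate any LVM volumes found on it
/sbin/lvm pvscan --cache --activate ay "$DEV"

# Start the OSD services that failed to enumerate on boot
systemctl start ceph-osd.target
EOF
chmod +x /tmp/recover-osd-pvs.sh
```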

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1828617/+subscriptions



More information about the Ubuntu-openstack-bugs mailing list