[Bug 1628750] Please test proposed package

Brian Murray brian at ubuntu.com
Wed Nov 16 20:28:46 UTC 2016


Hello James, or anyone else affected,

Accepted ceph into xenial-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/ceph/10.2.3-0ubuntu0.16.04.2 in a
few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed. Your feedback will aid us in getting this
update out to other Ubuntu users.
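For reference, the wiki steps amount to adding a -proposed source and
pinning it so that only explicitly requested packages are pulled from it
(a sketch for xenial; the exact mirror URL and pin priority are per the
documentation above):

```
# /etc/apt/sources.list.d/xenial-proposed.list
deb http://archive.ubuntu.com/ubuntu/ xenial-proposed restricted main multiverse universe

# /etc/apt/preferences.d/proposed-updates -- prevent all of -proposed
# from being installed; request individual packages explicitly instead
Package: *
Pin: release a=xenial-proposed
Pin-Priority: 400
```

Then update and install only the package under test, e.g.
`sudo apt-get update && sudo apt-get install ceph/xenial-proposed`.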

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-needed to verification-done. If it does not fix the
bug for you, please add a comment stating that, and change the tag to
verification-failed.  In either case, details of your testing will help
us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1628750

Title:
  Please backport fixes from 10.2.3 and tip for RadosGW

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Xenial:
  Fix Committed
Status in ceph source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  In ceph deployments with large numbers of objects (typically generated by use of radosgw for object storage), when servers or disks fail and recovery begins, it is quite possible for OSDs recovering data to hit their suicide timeout and shut down because of the number of objects each was trying to recover in a single chunk between heartbeats.  As a result, clusters go read-only due to data unavailability.

  [Test Case]
  Non-trivial to reproduce - see original bug report.

  [Regression Potential]
  Medium; the fix for this problem is to reduce the number of operations per chunk to 64000, limiting the chance that an OSD will fail to heartbeat and suicide itself as a result.  This is configurable, so it can be tuned on a per-environment basis.
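  The tunable referenced above could be set in ceph.conf; the option name
  below is an assumption inferred from the patch title ("limit omap data
  in push op") and should be verified against the shipped package:

```
# ceph.conf sketch -- assumed option name, verify against the package
[osd]
osd recovery max omap entries per chunk = 64000
```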

  The patch has been accepted into the Ceph master branch, but is not
  currently targeted as a stable fix for Jewel.

  >> Original Bug Report <<

  We've run into significant issues with RadosGW at scale; we have a
  customer who has ½ billion objects in ~20TB of data, and whenever they
  lost an OSD for whatever reason, even for a very short period of time,
  ceph took hours and hours to recover.  The whole time it was
  recovering, requests to RadosGW were hanging.

  I ended up cherry-picking 3 patches: 2 from 10.2.3 and one from trunk:

    * d/p/fix-pg-temp.patch: cherry pick 56bbcb1aa11a2beb951de396b0de9e3373d91c57 from jewel.
    * d/p/only-update-up_thru-if-newer.patch: 6554d462059b68ab983c0c8355c465e98ca45440 from jewel.
    * d/p/limit-omap-data-in-push-op.patch: 38609de1ec5281602d925d20c392ba4094fdf9d3 from master.

  The 2 from 10.2.3 are because pg_temp was implicated in one of the
  longer outages we had. 

  The last one is what I think actually got us to a point where ceph was
  stable and I found it via the following URL chain:

  http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2016-June/010230.html
  -> http://tracker.ceph.com/issues/16128
  -> https://github.com/ceph/ceph/pull/9894
  -> https://github.com/ceph/ceph/commit/38609de1ec5281602d925d20c392ba4094fdf9d3

  With these 3 patches applied the customer has been stable for 4 days
  now, but I've yet to restart the entire cluster (only the stuck OSDs),
  so it's hard to be completely sure that all our issues are resolved,
  or which of the patches fixed things.

  I've attached the debdiff I used for reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1628750/+subscriptions



More information about the Ubuntu-openstack-bugs mailing list