[Bug 1704106] Re: [SRU] Gathering of thin provisioning stats breaks c-vol
OpenStack Infra
1704106 at bugs.launchpad.net
Mon Mar 25 22:21:26 UTC 2019
Reviewed: https://review.openstack.org/644232
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=b1a0d62f357e431a6b74d38440a8392de972b824
Submitter: Zuul
Branch: stable/ocata
commit b1a0d62f357e431a6b74d38440a8392de972b824
Author: Gorka Eguileor <geguileo at redhat.com>
Date: Tue Feb 6 14:54:57 2018 +0100
RBD: Don't query Ceph on stats for exclusive pools
Collecting stats for provisioned_capacity_gb takes a long time since we
have to query each individual image for the provisioned size. If we are
using the pool just for Cinder and/or are willing to accept a potential
deviation in Cinder stats we could just not retrieve this information
and calculate this based on the DB information for the volumes.
This patch adds configuration option `rbd_exclusive_cinder_pool` that
allows us to disable the size collection and thus improve the stats
reporting speed.
Change-Id: I32c7746fa9149bce6cdec96ee9aa87b303de4271
Closes-Bug: #1704106
(cherry picked from commit f33baccc3544cbda6cd5908328a56096046657ed)
(cherry picked from commit 21821c16580377c4e6443d0b440f41cb7de0ca8d)
(cherry picked from commit 1dca272d8f47bc180cc481e6c6a835eda0bb06a8)
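For illustration, a rough sketch of the idea behind the option
(this is not the actual driver code; the helper names are
placeholders):

    # Sketch only: the gate added by rbd_exclusive_cinder_pool.
    # _provisioned_gb_from_db() and _provisioned_gb_from_ceph() are
    # placeholder names, not real Cinder methods.
    def _get_provisioned_capacity_gb(self):
        if self.configuration.rbd_exclusive_cinder_pool:
            # The pool is only used by Cinder, so the sizes recorded in
            # the Cinder DB are authoritative and Ceph is not queried.
            return self._provisioned_gb_from_db()
        # Shared pool: fall back to querying every RBD image (slow).
        return self._provisioned_gb_from_ceph()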
** Changed in: cloud-archive/ocata
Status: New => Fix Committed
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1704106
Title:
[SRU] Gathering of thin provisioning stats breaks c-vol
Status in Cinder:
Fix Released
Status in Ubuntu Cloud Archive:
New
Status in Ubuntu Cloud Archive ocata series:
Fix Committed
Status in Ubuntu Cloud Archive pike series:
Fix Released
Bug description:
[Impact]
Backport of a config option added in Queens that allows
disabling the collection of stats from all RBD volumes, since
this collection causes numerous non-fatal race conditions and
slows down deletes to the point where the RPC thread pool
fills up, blocking further requests. Our charms do not
configure the pool to be shared by default, and we are not
aware of anyone doing this in the field, so this patch enables
this option by default.
[Test Case]
By default no change in behaviour should occur. To test the
new feature it needs to be enabled explicitly:
* deploy OpenStack Ocata
* set rbd_exclusive_cinder_pool = true in cinder.conf
* create 100 volumes via cinder
* also create 100 volumes in the cinder pool, but using the rbd client directly (see the sketch after this list)
* delete the cinder volumes (via cinder) and delete the non-cinder rbd volumes using the rbd client
* ensure there are no exceptions in cinder-volume.log
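The RBD-side steps can be scripted with the python-rbd bindings,
for example as below (pool name, ceph.conf path and image size are
assumptions and need to match the deployment):

    # Sketch of the non-Cinder part of the test case using python-rbd.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('cinder-ceph')   # assumed pool name
    rbd_inst = rbd.RBD()
    try:
        # Create 100 images directly in the Cinder pool (1 GiB each).
        for i in range(100):
            rbd_inst.create(ioctx, 'non-cinder-%d' % i, 1024 ** 3)
        # ... run the Cinder-side create/delete steps here ...
        # Delete the non-Cinder images again.
        for i in range(100):
            rbd_inst.remove(ioctx, 'non-cinder-%d' % i)
    finally:
        ioctx.close()
        cluster.shutdown()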
[Regression Potential]
The default behaviour is unchanged, so no regression is expected.
==========================================================================
The gathering of the thin provisioning stats is done by looping over
all volumes:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L369
For larger deployments, this loop (done at start-up, upon
volume deletion and as a periodic task) takes too long and
hence breaks the c-vol service.
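Roughly, the per-image gathering follows this pattern (a
simplified illustration, not the exact driver code; the pool
name is an example):

    # Every image in the pool is opened just to read its size.
    import rados
    import rbd

    def provisioned_capacity_gb(pool='cinder-ceph'):
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                total_bytes = 0
                for name in rbd.RBD().list(ioctx):
                    # At least one round trip per image; with thousands
                    # of volumes this is far too slow for a periodic task.
                    with rbd.Image(ioctx, name, read_only=True) as image:
                        total_bytes += image.size()
                return total_bytes / float(1024 ** 3)
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()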
From what I understand, the overall idea of this stats gathering is to
bring the current real fill status of the pool to the admin's
attention in case over-subscription was configured. For this, a fill
status at the pool level (rather than the volume level) should be good
enough.
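For example, pool-level usage is available in a single call via
the rados bindings (a sketch, assuming pool-level numbers are
sufficient and using an example pool name):

    # One call for the whole pool instead of one call per image.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('cinder-ceph')   # example pool name
        try:
            stats = ioctx.get_stats()
            # 'num_bytes' is the space actually consumed by the pool,
            # i.e. the real fill level, not the thin-provisioned total.
            print('pool usage: %.1f GiB'
                  % (stats['num_bytes'] / float(1024 ** 3)))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()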
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1704106/+subscriptions