[Bug 1493931] Related fix merged to charm-cinder (master)
OpenStack Infra
1493931 at bugs.launchpad.net
Mon May 23 18:50:19 UTC 2016
Reviewed: https://review.openstack.org/290066
Committed: https://git.openstack.org/cgit/openstack/charm-cinder/commit/?id=b299cc84be8b6c399415fd5ee4b3ff2dc9db0492
Submitter: Jenkins
Branch: master
commit b299cc84be8b6c399415fd5ee4b3ff2dc9db0492
Author: Jorge Niedbalski <jorge.niedbalski at canonical.com>
Date: Tue Mar 8 15:19:08 2016 -0300
Cleanup action for service-list after deploying HA.
This is a workaround for LP: #1493931 in order to keep the
output of cinder service-list clean after an HA deployment.
The rationale is to expose a way to clean up unused entries
from the services table in the database; those services were
created by cinder before the storage relation is joined
(particularly the stateless ones).
This action also exposes the host option to specify
the host to be removed.
By default, if no host is provided, this action cleans up
all entries other than those specified in the
DEFAULT_SERVICES constant.
An example of execution can be found on the comment
section of this proposal.
Change-Id: I4a5e682e44206f7b77d873cb1fc63e3eae86aad5
Related-Bug: 1493931
Signed-off-by: Jorge Niedbalski <jorge.niedbalski at canonical.com>
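As a sketch, the cleanup action described above could be invoked roughly as
follows with the juju 1.x action CLI. The action name (remove-services) and
the exact host value are assumptions for illustration; check the actions the
charm actually defines before running anything:

```shell
# List the actions the cinder charm actually exposes
# (the action name used below is an assumption, not confirmed by this commit).
juju action defined cinder

# Remove stale services-table entries for one specific host.
juju action do cinder/0 remove-services host=juju-machine-12-lxc-3

# With no host given, the action prunes every entry not listed in the
# charm's DEFAULT_SERVICES constant.
juju action do cinder/0 remove-services
```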
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to cinder in Juju Charms Collection.
Matching subscriptions: charm-bugs
https://bugs.launchpad.net/bugs/1493931
Title:
cinder.conf 'host' not set when using cinder-ceph subordinate
Status in cinder package in Juju Charms Collection:
Fix Released
Status in cinder-ceph package in Juju Charms Collection:
Fix Released
Bug description:
If I deploy 3 nodes of cinder, relate them with cinder-ceph then
relate cinder-ceph with ceph everything works fine and dandy except
that my cinder.conf looks like - http://paste.ubuntu.com/12321902/
The problem being that 'host' is not set, so it falls back to its
default value (the unit hostname), e.g. http://paste.ubuntu.com/12321914/
The consequence of this is that if a volume create goes to cinder/0
and that node subsequently dies, I will not be able to perform actions
(e.g. delete) on that volume until cinder/0 comes back up.
The simple fix is obviously to have cinder set host properly when rbd
backends (and only stateless backends) are related, but it will require
existing volumes to be updated by modifying the 'host' field of each
volume record in the cinder database to match the cinder service name.
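The fix described above amounts to two steps: pin a shared 'host' value in
cinder.conf on every unit, then realign the 'host' field of existing volume
records. A minimal sketch, assuming a MySQL-backed cinder database and the
crudini tool; the shared value 'cinder' is illustrative, not what the charm
actually chooses:

```shell
# Pin a shared host value in cinder.conf on every cinder unit
# (the value 'cinder' is an illustrative assumption).
crudini --set /etc/cinder/cinder.conf DEFAULT host cinder

# Realign existing volume records to the shared value so operations on
# volumes created under a per-unit hostname keep working.
mysql cinder -e "UPDATE volumes SET host = 'cinder' WHERE host != 'cinder';"
```

With a shared host value, any cinder unit can service requests for any
volume, so losing one unit no longer strands the volumes it created.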
To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/cinder/+bug/1493931/+subscriptions
More information about the Ubuntu-openstack-bugs mailing list