[Bug 1493931] Re: cinder.conf 'host' not set when using cinder-ceph subordinate

Edward Hope-Morley edward.hope-morley at canonical.com
Thu Sep 10 09:45:31 UTC 2015


** Description changed:

  If I deploy 3 nodes of cinder, relate them with cinder-ceph then relate
  cinder-ceph with ceph everything works fine and dandy except that my
  cinder.conf looks like - http://paste.ubuntu.com/12321902/
  
  The problem being that 'host' is not set so it will get a default value
  (unit hostname) e.g. http://paste.ubuntu.com/12321914/
  
  The consequence of this is that if a volume create goes to cinder/0 and
  that node subsequently dies, I will not be able to perform actions e.g.
  delete on that volume anymore until cinder/0 comes back up.
  
  The simple fix is obviously to have cinder set host properly when rbd
  backends (and only stateless backends) are related but it will require
- exisitng volumes to be updated by modifying the provider_location field
- in each Volume record in the Cinder database to match the cinder service
- name
+ existing volumes to be updated by modifying the 'host' field in each
+ Volume record in the Cinder database to match the cinder service name

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to cinder in Juju Charms Collection.
Matching subscriptions: charm-bugs
https://bugs.launchpad.net/bugs/1493931

Title:
  cinder.conf 'host' not set when using cinder-ceph subordinate

Status in cinder package in Juju Charms Collection:
  In Progress
Status in cinder-ceph package in Juju Charms Collection:
  In Progress

Bug description:
  If I deploy three units of cinder, relate them with cinder-ceph and
  then relate cinder-ceph with ceph, everything works fine and dandy
  except that my cinder.conf looks like this - http://paste.ubuntu.com/12321902/

  The problem is that 'host' is not set, so it falls back to its
  default value (the unit hostname), e.g. http://paste.ubuntu.com/12321914/
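
  Setting 'host' in the [DEFAULT] section of cinder.conf is what would
  make all units present the same cinder-volume service name. A minimal
  sketch of what the rendered config would need - the value 'cinder'
  here is illustrative, the charm would presumably use the juju service
  name:

      [DEFAULT]
      # Shared across all cinder units so any unit can action volumes
      # created via any other unit (value illustrative)
      host = cinder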

  The consequence of this is that if a volume create request goes to
  cinder/0 and that node subsequently dies, I will not be able to
  perform actions (e.g. delete) on that volume until cinder/0 comes
  back up.
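
  The binding is visible on the volume itself, e.g. something like the
  following (hostname and backend name are illustrative):

      $ cinder show <volume-id> | grep os-vol-host-attr:host
      | os-vol-host-attr:host | juju-machine-1-lxc-2@cinder-ceph |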

  The simple fix is obviously to have cinder set 'host' properly when
  rbd backends (and only stateless backends) are related, but it will
  require existing volumes to be updated by modifying the 'host' field
  in each volume record in the Cinder database to match the cinder
  service name.
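
  For example, assuming a backend section named 'cinder-ceph' and a
  shared service name of 'cinder', existing records could be migrated
  with something along these lines (illustrative only - the exact host
  strings depend on the deployment, and recent releases may also carry
  a '#pool' suffix):

      -- repeat per unit hostname; old/new values are illustrative
      UPDATE volumes SET host = 'cinder@cinder-ceph'
          WHERE host = 'juju-machine-1-lxc-2@cinder-ceph';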

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/cinder/+bug/1493931/+subscriptions


