[Bug 1349888] Re: [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

Edward Hope-Morley edward.hope-morley at canonical.com
Tue Sep 8 15:15:02 UTC 2015


** Description changed:

+ [Impact]
+ 
+  * Ensure that attaching an already-attached volume to a second instance
+    does not interfere with the existing attachment's volume record.
+ 
+ [Test Case]
+ 
+  * Create cinder volume vol1 and two instances vm1 and vm2
+ 
+  * Attach vol1 to vm1 and check that the attach was successful by doing:
+ 
+    - cinder list
+    - nova show <vm1>
+ 
+    e.g. http://paste.ubuntu.com/12314443/
+ 
+  * Attach vol1 to vm2 and check that the attach fails and, crucially, that
+    the first attach is unaffected (as above). You can also check the Nova
+    db as follows:
+ 
+    select * from block_device_mapping where source_type='volume' and \
+        (instance_uuid='<vm1>' or instance_uuid='<vm2>');
+ 
+    from which you would expect output like http://paste.ubuntu.com/12314416/,
+    which shows that vol1 is still attached to vm1 and that the vm2 attach
+    failed.
+ 
+  * Finally, detach vol1 from vm1 and ensure that it succeeds.
+ 
+ [Regression Potential]
+ 
+  * none
+ 
+ ---- ---- ---- ----
+ 
  nova assumes there is only ever one bdm per volume. When an attach is
  initiated, a new bdm is created; if the attach then fails, a bdm for the
  volume is deleted, but it is not necessarily the one that was just
  created. The following steps show how a volume can get stuck detaching
  because of this.
- 
  
  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  
  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 |  1   |     lvm1    |  false   |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+
  
  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)
  
  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  
  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | detaching | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  
- 
- 
  2014-07-29 14:47:13.952 ERROR oslo.messaging.rpc.dispatcher [req-134dfd17-14da-4de0-93fc-5d8d7bbf65a5 admin admin] Exception during message handling: <type 'NoneType'> can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 406, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     payload)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 291, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     pass
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 277, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 319, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 307, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 4363, in detach_volume
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     self._detach_volume(context, instance, bdm)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 4309, in _detach_volume
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     connection_info = jsonutils.loads(bdm.connection_info)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/jsonutils.py", line 176, in loads
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return json.loads(strutils.safe_decode(s, encoding), **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/strutils.py", line 134, in safe_decode
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     raise TypeError("%s can't be decoded" % type(text))
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher TypeError: <type 'NoneType'> can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1349888

Title:
  [SRU] Attempting to attach the same volume multiple times can cause
  bdm record for existing attachment to be deleted.

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  In Progress
Status in nova source package in Trusty:
  In Progress

Bug description:
  [Impact]

   * Ensure that attaching an already-attached volume to a second instance
     does not interfere with the existing attachment's volume record.

  [Test Case]

   * Create cinder volume vol1 and two instances vm1 and vm2

   * Attach vol1 to vm1 and check that the attach was successful by doing:

     - cinder list
     - nova show <vm1>

     e.g. http://paste.ubuntu.com/12314443/

   * Attach vol1 to vm2 and check that the attach fails and, crucially, that
     the first attach is unaffected (as above). You can also check the Nova
     db as follows:

     select * from block_device_mapping where source_type='volume' and \
         (instance_uuid='<vm1>' or instance_uuid='<vm2>');

     from which you would expect output like http://paste.ubuntu.com/12314416/,
     which shows that vol1 is still attached to vm1 and that the vm2 attach
     failed.

   * Finally, detach vol1 from vm1 and ensure that it succeeds (a rough
     scripted version of these steps is sketched below).
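
   A rough scripted version of the steps above, for convenience. This is
   a sketch only: <image>, <flavor> and the <...> uuids are placeholders
   for your environment, and the mysql invocation may need credentials
   added.

     # Create the volume and the two instances.
     cinder create --display-name vol1 1
     nova boot --image <image> --flavor <flavor> vm1
     nova boot --image <image> --flavor <flavor> vm2

     # First attach should succeed.
     nova volume-attach vm1 <vol1-uuid>
     cinder list; nova show vm1    # vol1 should be in-use, attached to vm1

     # Second attach should fail with BadRequest and, crucially, must not
     # disturb the first attachment.
     nova volume-attach vm2 <vol1-uuid>
     cinder list; nova show vm1    # first attach should still be intact
     mysql nova -e "select * from block_device_mapping where \
         source_type='volume' and (instance_uuid='<vm1-uuid>' or \
         instance_uuid='<vm2-uuid>');"

     # Finally, the detach should succeed.
     nova volume-detach vm1 <vol1-uuid>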

  [Regression Potential]

   * none

  ---- ---- ---- ----

  nova assumes there is only ever one bdm per volume. When an attach is
  initiated, a new bdm is created; if the attach then fails, a bdm for the
  volume is deleted, but it is not necessarily the one that was just
  created. The following steps show how a volume can get stuck detaching
  because of this (a minimal code sketch of the failure mode follows the
  trace at the end).

  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 |  1   |     lvm1    |  false   |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+

  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)

  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  |                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | detaching | test10 |  1   |     lvm1    |  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+

  2014-07-29 14:47:13.952 ERROR oslo.messaging.rpc.dispatcher [req-134dfd17-14da-4de0-93fc-5d8d7bbf65a5 admin admin] Exception during message handling: <type 'NoneType'> can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 406, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     payload)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 291, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     pass
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 277, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 319, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 307, in decorated_function
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 4363, in detach_volume
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     self._detach_volume(context, instance, bdm)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 4309, in _detach_volume
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     connection_info = jsonutils.loads(bdm.connection_info)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/jsonutils.py", line 176, in loads
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     return json.loads(strutils.safe_decode(s, encoding), **kwargs)
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/openstack/common/strutils.py", line 134, in safe_decode
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     raise TypeError("%s can't be decoded" % type(text))
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher TypeError: <type 'NoneType'> can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher
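
  To make the failure mode concrete, here is a minimal, self-contained
  Python sketch of the pattern described above. It is illustrative only
  and not nova's actual code: the list and the function names are
  invented stand-ins for the block_device_mapping table and the
  attach/detach paths.

    # Illustrative sketch only -- NOT nova's code.
    import json

    bdms = []  # stand-in for the block_device_mapping table

    def attach(instance, volume, volume_available=True):
        bdm = {'instance': instance, 'volume': volume,
               'connection_info': None}
        bdms.append(bdm)  # a new bdm is created before the attach completes
        if volume_available:
            bdm['connection_info'] = json.dumps({'volume_type': 'iscsi'})
            return
        # Cleanup on failure deletes *a* bdm for the volume -- here the
        # first match, i.e. the healthy record from the earlier successful
        # attach, not the bdm this call just created.
        for b in bdms:
            if b['volume'] == volume:
                bdms.remove(b)
                break
        raise RuntimeError("Invalid volume: status must be 'available'")

    def detach(instance, volume):
        bdm = next(b for b in bdms if b['volume'] == volume)
        # Mirrors _detach_volume() in the trace above: the surviving bdm
        # is the incomplete one, its connection_info is None, and decoding
        # it raises TypeError, leaving the volume stuck in 'detaching'.
        json.loads(bdm['connection_info'])

    attach('vm1', 'vol1')                              # succeeds
    try:
        attach('vm2', 'vol1', volume_available=False)  # wrong bdm deleted
    except RuntimeError:
        pass
    detach('vm1', 'vol1')   # raises TypeError, mirroring the trace above

  In this sketch the obvious remedy is for the failure path to delete the
  exact bdm record it created rather than looking one up by volume id.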

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349888/+subscriptions


