[Bug 1602057] Re: [SRU] (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

JuanJo Ciarlante 1602057 at bugs.launchpad.net
Fri Apr 7 18:04:29 UTC 2017


FYI we're also hitting this on trusty/mitaka with what look
like incompletely deleted instances:

* still running on the hypervisor, i.e.
  virsh dominfo <UUID>   # shows it ok

* marked deleted in both the nova 'instances' and
  'block_device_mapping' tables (see the check below).
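
A quick way to confirm the DB side (column names as used in the
updates below; adjust if your schema differs) is something like,
expecting deleted != 0 and deleted_at set on the affected rows:

mysql> select uuid, vm_state, deleted, deleted_at, host, node
  from instances where uuid='<UUID>';
mysql> select instance_uuid, deleted, deleted_at
  from block_device_mapping where instance_uuid='<UUID>';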

Once we're certain it's still running on the hypervisor,
our workaround is to revive the instance in the nova DB
with something like:

mysql> begin work;
mysql> update instances
  set vm_state='active', deleted=0, deleted_at=NULL
  where uuid='<UUID>';
mysql> update block_device_mapping
  set deleted=0, deleted_at=NULL
  where instance_uuid='<UUID>';
mysql> commit work;

Note that this has also happened to us after failed migrations
(i.e. the instance shown on the 'wrong' host in the nova DB);
we've fixed those by adding the following to the 1st UPDATE:

 host='<service_hostname>', node='<hypervisor_hostname>',

with the hostnames taken from:
- <service_hostname> from nova service-list
- <hypervisor_hostname> from nova hypervisor-list
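
So the full first statement for the failed-migration case ends up
looking roughly like:

mysql> update instances
  set vm_state='active', deleted=0, deleted_at=NULL,
      host='<service_hostname>', node='<hypervisor_hostname>'
  where uuid='<UUID>';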

-- 
You received this bug notification because you are a member of Ubuntu
Sponsors Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1602057

Title:
  [SRU] (libvirt) KeyError updating resources for some node, guest.uuid
  is not in BDM list

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Won't Fix
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in nova package in Ubuntu:
  Confirmed
Status in nova source package in Xenial:
  Incomplete

Bug description:
  [Impact]

  There is currently a race condition whereby the compute
  resource_tracker periodic task polls extant instances and checks
  their BDMs, and this can happen before any mappings have been
  created, e.g. the root disk mapping for a new instance. This patch
  ensures that instances without any BDMs are skipped.
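
  As a rough sketch only (not the actual nova change; Guest,
  disk_over_committed_size_total and _disk_over_committed_for are
  made-up stand-ins, while local_instances/bdms mirror the names in
  the traceback below), the guard amounts to skipping any guest whose
  UUID has no BDM entry before doing the per-instance accounting:

  # Illustrative sketch, not nova's code.
  from typing import Dict, List, NamedTuple

  class Guest(NamedTuple):
      uuid: str

  def disk_over_committed_size_total(guests: List[Guest],
                                     local_instances: Dict[str, object],
                                     bdms: Dict[str, list]) -> int:
      total = 0
      for guest in guests:
          # Skip guests whose instance has no BDMs yet (or was already
          # torn down) instead of raising KeyError on bdms[guest.uuid].
          if guest.uuid not in local_instances or guest.uuid not in bdms:
              continue
          total += _disk_over_committed_for(local_instances[guest.uuid],
                                            bdms[guest.uuid])
      return total

  def _disk_over_committed_for(instance, bdm_list):
      # Stand-in for the real per-instance disk size calculation.
      return 0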

  [Test Case]
    * deploy OpenStack Mitaka with debug logging enabled (not essential but helps)

    * create an instance

    * delete its BDMs - pastebin.ubuntu.com/24287419/

    * watch /var/log/nova/nova-compute.log on the hypervisor hosting
  the instance and wait for the next resource_tracker tick

    * ensure that the exception mentioned in this LP bug does not occur
  (it happens after "Auditing locally available compute resources for
  node")

  [Regression Potential]

  The resource tracker information is used by the scheduler when
  deciding which compute hosts can have instances scheduled to them.
  In this case the resource tracker would be skipping instances that
  would otherwise contribute to disk overcommit ratios. As such it is
  possible that the scheduler will have momentarily skewed information
  about resource consumption on that compute host until the next
  resource_tracker tick. Since the likelihood of this race condition
  occurring is hopefully slim, and provided that users have a
  reasonable frequency for the resource_tracker, the likelihood of
  this becoming a long-term problem is low, since the issue will
  always be corrected by a subsequent tick (although if the compute
  host in question were saturated it would not be fixed until an
  instance was deleted or migrated).

  [Other]
  Note that this patch did not make it into the upstream stable/mitaka
  branch due to the stable cutoff, so the proposal is to carry it in
  the archive (indefinitely).

  --------

  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager [req-d5d5d486-b488-4429-bbb5-24c9f19ff2c0 - - - - -] Error updating resources for node controller.
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager Traceback (most recent call last):
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6726, in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     rt.update_available_resource(context)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 500, in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     resources = self.driver.get_available_resource(self.nodename)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5728, in get_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     disk_over_committed = self._get_disk_over_committed_size_total()
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7397, in _get_disk_over_committed_size_total
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     local_instances[guest.uuid], bdms[guest.uuid])
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager KeyError: '0a5c5743-9555-4dfd-b26e-198449ebeee5'
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions


