[Bug 1923668] Please test proposed package

Chris MacNaughton 1923668 at bugs.launchpad.net
Mon Oct 25 15:08:04 UTC 2021


Hello Michael, or anyone else affected,

Accepted openvswitch into rocky-proposed. The package will build now
and, in a few hours, be available in the -proposed repository of the
Ubuntu Cloud Archive.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:rocky-proposed
  sudo apt-get update
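
Once -proposed is enabled, you can confirm that the rebuilt package is
offered before installing it; a minimal sketch, assuming the
openvswitch-switch binary package is the one you run:

  apt-cache policy openvswitch-switch
  sudo apt-get install openvswitch-switch

The candidate version reported by apt-cache policy should be the one
from rocky-proposed; that is the version to mention in your feedback.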

Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-rocky-needed to verification-rocky-done. If it does
not fix the bug for you, please add a comment stating that, and change
the tag to verification-rocky-failed. In either case, details of your
testing will help us make a better decision.
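
When testing, concrete before-and-after evidence is most useful; a
minimal sketch of checks relevant to this bug, assuming the systemd
unit names used by the Ubuntu openvswitch packaging:

  systemctl status ovs-vswitchd ovsdb-server
  sudo ovs-vsctl show
  journalctl -u ovs-vswitchd -b

Pasting output like this alongside the package version makes the
verification decision much easier.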

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/rocky
       Status: Triaged => Fix Committed

** Tags added: verification-rocky-needed

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1923668

Title:
  Upgrade from Queens to Rocky results in dead ovs-vswitchd services

Status in OpenStack neutron-openvswitch charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Fix Committed
Status in openvswitch package in Ubuntu:
  Fix Released
Status in openvswitch source package in Focal:
  Fix Released

Bug description:
  While upgrading a cloud from Queens to Rocky, I attempted to flush a
  hypervisor using live migration to avoid service disruption on the
  final nova-compute unit. The action queues up in the dashboard;
  however, it completes with the instance remaining on the same host.
  The nova-compute logs for that instance suggest that the target host
  could not create the tap device:

  /var/log/nova/nova-compute.log:

  2021-04-13 21:12:50.464 1286276 WARNING nova.compute.resource_tracker [req-b1cea8db-be1e-4252-9e31-c78d097ad671 - - - - -] [instance: e341e106-5bec-4048-a76e-03ef0c70441c] Instance not resizing, skipping migration.
  2021-04-13 21:12:50.658 1286276 INFO nova.compute.resource_tracker [req-b1cea8db-be1e-4252-9e31-c78d097ad671 - - - - -] Final resource view: name=flagler.playground.solutionsqa phys_ram=32123MB used_ram=18432MB phys_disk=361GB used_disk=20GB total_vcpus=12 used_vcpus=1 pci_stats=[]
  2021-04-13 21:13:02.025 1286276 ERROR nova.virt.libvirt.driver [req-06db27eb-b304-4969-b1e2-cbd0d80094ca d966ea789bfe431fb5863da1e72d6e49 80545c41a5db45d98d6adf7083c4914b - 9580fece017f4adf9b4ff1aa2bf836c8 9580fece017f4adf9b4ff1aa2bf836c8] [instance: e341e106-5bec-4048-a76e-03ef0c70441c] Live Migration failure: internal error: Unable to add port tap9c8d13c9-8a to OVS bridge br-int: libvirtError: internal error: Unable to add port tap9c8d13c9-8a to OVS bridge br-int
  2021-04-13 21:13:02.187 1286276 ERROR nova.virt.libvirt.driver [req-06db27eb-b304-4969-b1e2-cbd0d80094ca d966ea789bfe431fb5863da1e72d6e49 80545c41a5db45d98d6adf7083c4914b - 9580fece017f4adf9b4ff1aa2bf836c8 9580fece017f4adf9b4ff1aa2bf836c8] [instance: e341e106-5bec-4048-a76e-03ef0c70441c] Migration operation has aborted
  2021-04-13 21:13:02.364 1286276 INFO nova.compute.manager [req-06db27eb-b304-4969-b1e2-cbd0d80094ca d966ea789bfe431fb5863da1e72d6e49 80545c41a5db45d98d6adf7083c4914b - 9580fece017f4adf9b4ff1aa2bf836c8 9580fece017f4adf9b4ff1aa2bf836c8] [instance: e341e106-5bec-4048-a76e-03ef0c70441c] Swapping old allocation on 5a94928b-fb98-401f-bdd9-aa2f9f08602c held by migration 44727a6b-3417-4df3-9ca9-5b52e2e0f487 for instance
  2021-04-13 21:13:04.381 1286276 WARNING nova.compute.manager [req-2f77835b-38ab-45b9-8acd-38a98ff3fcfc 6cad752c2b9744d6aac17fb26522004c d1aed1922a5a4a7899cae3e3afb6bc90 - c1a08b45ef134260be7501e96bc9ee3d c1a08b45ef134260be7501e96bc9ee3d] [instance: e341e106-5bec-4048-a76e-03ef0c70441c] Received unexpected event network-vif-unplugged-9c8d13c9-8a96-49e0-834a-3c512f1990cb for instance with vm_state active and task_state None.
  2021-04-13 21:13:05.836 1286276 WARNING nova.compute.manager [req-66d4ddc6-4ac8-4c1a-8007-582d599da366 6cad752c2b9744d6aac17fb26522004c d1aed1922a5a4a7899cae3e3afb6bc90 - c1a08b45ef134260be7501e96bc9ee3d c1a08b45ef134260be7501e96bc9ee3d] [instance: e341e106-5bec-4048-a76e-03ef0c70441c] Received unexpected event network-vif-plugged-9c8d13c9-8a96-49e0-834a-3c512f1990cb for instance with vm_state active and task_state None.

  Looking at the target unit, the ovs-vswitchd service is not even
  running on a number of the units:
  https://pastebin.ubuntu.com/p/YhdTQRRGb4/
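
  (For a quick check on a given host, a sketch assuming the unit names
  used by the Ubuntu packaging:

    systemctl is-active ovs-vswitchd ovsdb-server

  Note that read-only ovs-vsctl commands can still succeed against
  ovsdb-server while ovs-vswitchd is dead; it is port additions, such
  as the tap plug libvirt attempts here, that fail.)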

  Restarting the ovs-vswitchd service on those hosts restores the
  ability to migrate.
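
  (On a juju-managed cloud this can be done across all hypervisors in
  one pass; a sketch, with nova-compute as an assumed application name:

    juju run --application nova-compute 'sudo systemctl restart ovs-vswitchd'

  followed by re-checking systemctl is-active on each unit.)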

  In each attempt the source of the instance was flagler and the
  destination was everitt, which are machines 6 and 3 in the attached
  crashdump, respectively.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1923668/+subscriptions



