[Bug 1697729] Re: port allocator allocates the same SPICE port for multiple guests (race condition)

ChristianEhrhardt 1697729 at bugs.launchpad.net
Mon Jun 19 14:40:04 UTC 2017


FYI: the qemu/libvirt tests now include this and a few other concurrent start/stop tests to check for known races (like this one) and unknown ones - the architecture should be easy to extend with more cases as we want to add them (modify the uvt template + add a function = new test).
Stage 3 is not yet part of the daily runs but will be added once it has reached a certain maturity level.

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1697729

Title:
  port allocator allocates the same SPICE port for multiple guests (race
  condition)

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Zesty:
  Triaged
Status in libvirt source package in Artful:
  Fix Released

Bug description:
  [Impact]

   * VMs sporadically fail to start due to a race around spice port
     allocation

   * The solution is a backport of an upstream fix that avoids a double
     release of the port

  [Test Case]

   * Prepare a set of VMs using spice and start them concurrently.
  $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=xenial
  $ sed 's/vnc/spice/' /usr/share/uvtool/libvirt/template.xml > spice-template.xml
  $ for idx in {1..20}; do uvt-kvm create --template spice-template.xml --password=ubuntu test-${idx} release=xenial arch=amd64 label=daily; done
  $ for idx in {1..20}; do virsh shutdown test-${idx}; done
  # wait until all are gone
  $ for idx in {1..20}; do (virsh start test-${idx} &); done
  $ for idx in {1..20}; do virsh domdisplay test-${idx} ; done | sort

  * expectation - all guests start, ports are assigned one by one
  * current status - some guests fail to initialize:
    error: internal error: process exited while connecting to monitor: ((null):31733): Spice-Warning **: reds.c:2493:reds_init_socket: reds_init_socket: binding socket to 127.0.0.1:5901 failed

  
  [Regression Potential]

   * It is a race after all, so we might miss some corner cases in
     testing, but after reviewing the patch and given the verifications so
     far it should be safe. From the patch the change is roughly:
       Old: Spice-Init -> Cleanup -> Release [...] QemuStop -> Release
                                               ^
                  If a new allocation happened in this window, the port was
                  released unintentionally although another guest held it
       New: Spice-Init -> Fail            [...] QemuStop -> Release
     This eliminates the race while still releasing the port exactly once,
     as intended (see the sketch below).

   * This change only affects users of spice ports.
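
   * To illustrate the double-release pattern above, here is a minimal,
     self-contained C sketch (illustrative only, not libvirt's actual
     allocator code; the names and the bitmap allocator are made up for
     the example): the first release in the cleanup path frees a port that
     another guest has meanwhile acquired, so the second release frees a
     port that is still in use and a third guest gets the same port.

       /* sketch.c - illustrative only, not libvirt code */
       #include <stdbool.h>
       #include <stdio.h>

       #define PORT_MIN   5900
       #define PORT_COUNT 16

       static bool used[PORT_COUNT];          /* true = port allocated */

       static int port_acquire(void)
       {
           for (int i = 0; i < PORT_COUNT; i++) {
               if (!used[i]) {
                   used[i] = true;
                   return PORT_MIN + i;
               }
           }
           return -1;                         /* pool exhausted */
       }

       static void port_release(int port)
       {
           used[port - PORT_MIN] = false;
       }

       int main(void)
       {
           int a = port_acquire();  /* guest A gets 5900, spice init fails */
           port_release(a);         /* old code: release #1 (cleanup path) */

           int b = port_acquire();  /* guest B legitimately gets 5900      */

           port_release(a);         /* old code: release #2 (QemuStop)     */
                                    /* - frees the port B still uses       */

           int c = port_acquire();  /* guest C also gets 5900, so its qemu */
                                    /* would fail to bind as in the log    */

           printf("B=%d C=%d\n", b, c);       /* prints B=5900 C=5900      */
           return 0;
       }

     With the fix, the cleanup path fails without releasing, so only the
     QemuStop path releases the port, exactly once.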

  [Other Info]

   * n/a

  ---

  Using the UCA ocata release of libvirt we sporadically receive this
  error message in nova-compute.log:

  2017-06-12 14:32:54.359 19007 ERROR nova.compute.manager [instance:
  d1af2a13-0a53-4d9c-ada3-683e4973f28a] libvirtError: internal error:
  process exited while connecting to monitor: ((null):63256): Spice-
  Warning **: reds.c:2463:reds_init_socket: reds_init_socket: binding
  socket to 10.141.112.21:5900 failed

  Please backport the fix for the following bug into UCA ocata/pike releases:
  https://bugzilla.redhat.com/show_bug.cgi?id=1397440

  The patch is documented here:
  https://www.spinics.net/linux/fedora/libvir/msg144093.html

  We've tested backporting this same fix using the ocata UCA libvirt
  2.5.0-3ubuntu5~cloud0 source package and it fixes the problem for us.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1697729/+subscriptions


