[Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

Launchpad Bug Tracker 1518430 at bugs.launchpad.net
Fri Jan 13 00:09:12 UTC 2017


This bug was fixed in the package python-oslo.messaging - 4.6.1-2ubuntu2

---------------
python-oslo.messaging (4.6.1-2ubuntu2) xenial; urgency=medium

  * d/p/rabbit-avoid-busy-loop.patch: Cherry pick patch from upstream
    to avoid rabbit driver busy loop on epoll_wait with heartbeat+eventlet
    (LP: #1518430).

 -- Corey Bryant <corey.bryant at canonical.com>  Fri, 02 Dec 2016 12:51:26 -0500
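
For context on the failure mode the patch addresses, here is a minimal
sketch of the two loop shapes (illustrative only, not the actual
oslo.messaging code; FakeConn, drain_events and heartbeat_check are
stand-ins for the real AMQP connection API):

    import time

    class FakeConn(object):
        """Stand-in for the AMQP connection (not the real kombu API)."""
        def drain_events(self, timeout):
            time.sleep(timeout)   # with timeout=0 this returns at once

        def heartbeat_check(self):
            pass                  # pretend the heartbeat succeeded

    def heartbeat_busy(conn, exit_event):
        # Anti-pattern: a zero timeout makes the underlying I/O wait
        # (epoll_wait) return immediately, so the loop spins and an
        # idle process shows up as ~busy in strace and top.
        while not exit_event.is_set():
            conn.drain_events(timeout=0)

    def heartbeat_bounded(conn, exit_event, heartbeat=60.0):
        # Fix shape: block on a threading.Event with a real timeout,
        # so the loop sleeps between heartbeats instead of polling.
        while not exit_event.wait(timeout=heartbeat / 2.0):
            conn.heartbeat_check()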

** Changed in: python-oslo.messaging (Ubuntu Xenial)
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive kilo series:
  Fix Committed
Status in Ubuntu Cloud Archive liberty series:
  Fix Committed
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in oslo.messaging:
  Fix Released
Status in python-oslo.messaging package in Ubuntu:
  Fix Released
Status in python-oslo.messaging source package in Xenial:
  Fix Released
Status in python-oslo.messaging source package in Yakkety:
  Fix Released
Status in python-oslo.messaging source package in Zesty:
  Fix Released

Bug description:
  Context: openstack juju/maas deploy using the 15.10 charms release
  on trusty, with:
    openstack-origin: "cloud:trusty-liberty"
    source: "cloud:trusty-updates/liberty"

  * Several openstack nova- and neutron- services (at least
  nova-compute, neutron-server, nova-conductor,
  neutron-openvswitch-agent and neutron-vpn-agent) show near-busy
  looping on epoll_wait() calls, most frequently with a zero timeout
  set (see the repro sketch after these observations).
  - nova-compute (chosen because it runs as a single process) strace
    and ltrace captures:
    http://paste.ubuntu.com/13371248/ (ltrace, strace)

  For comparison, this is how it looks on a kilo deploy:
  - http://paste.ubuntu.com/13371635/

  * 'top' sample from a nova-cloud-controller unit on this
     completely idle stack:
    http://paste.ubuntu.com/13371809/

  FYI, this behavior is *not* seen on keystone, glance, cinder, or
  ceilometer-api.
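
  The zero-timeout pattern is easy to reproduce outside OpenStack. A
  minimal repro sketch (an assumption about eventlet hub behavior, not
  the actual nova/neutron code path) that produces the same
  epoll_wait(..., 0) spam under strace:

    import os
    import eventlet

    def spinner():
        while True:
            # Yield to the hub while staying immediately runnable, so
            # the hub computes a zero timeout for epoll_wait on every
            # iteration -- a near-busy loop on an otherwise idle process.
            eventlet.sleep(0)

    print('attach with: strace -e epoll_wait -p %d' % os.getpid())
    eventlet.spawn(spinner)
    eventlet.sleep(10)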

  Since this issue is present in several components, it likely comes
  from a common library (oslo.concurrency?); FYI, the bug was filed
  against nova itself as a starting point for debugging.

  Note: The description in the following bug gives a good overview of
  the issue and points to a possible fix for oslo.messaging:
  https://bugs.launchpad.net/mos/+bug/1380220

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1518430/+subscriptions


