[Bug 1518430] Related fix merged to oslo.messaging (master)
OpenStack Infra
1518430 at bugs.launchpad.net
Wed Dec 18 23:58:15 UTC 2019
Reviewed: https://review.opendev.org/558882
Committed: https://git.openstack.org/cgit/openstack/oslo.messaging/commit/?id=d873c0d8f5722d5e2a08b5fc49d58b63dbe061a0
Submitter: Zuul
Branch: master
commit d873c0d8f5722d5e2a08b5fc49d58b63dbe061a0
Author: John Eckersberg <jeckersb at redhat.com>
Date: Wed Apr 4 12:55:44 2018 -0400
Do not use threading.Event
Waiting on a threading.Event with eventlet can cause busy looping via
epoll_wait, see related bug for more details.
Change-Id: I007613058a2d21d1712c02fa6d1602b63705c1ab
Related-bug: #1518430
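For context, the hazard the commit avoids is that under eventlet monkey patching, threading.Event.wait(timeout) can spin on zero-timeout epoll_wait() calls instead of blocking. A minimal, hypothetical sketch of the avoidance pattern (not the actual oslo.messaging change) is to prefer an eventlet-aware event such as oslo_utils.eventletutils.Event, which keeps the stdlib set()/clear()/wait() interface, and fall back to threading.Event when oslo.utils is not installed:

```python
import threading


def make_event():
    """Return an event object suitable for the current concurrency model.

    Under eventlet monkey patching, threading.Event.wait(timeout) can
    degenerate into a busy loop of zero-timeout epoll_wait() calls
    (bug 1518430). oslo_utils.eventletutils.Event is eventlet-aware
    while exposing the same set()/clear()/wait() interface.

    Illustrative helper only; this is not the oslo.messaging fix itself.
    """
    try:
        from oslo_utils import eventletutils
        return eventletutils.Event()
    except ImportError:
        # No oslo.utils available: plain threads, stdlib Event is fine.
        return threading.Event()


# Usage: signal a waiter; wait() returns promptly once the event is set.
evt = make_event()
evt.set()
evt.wait(timeout=0.1)
```

Either object supports set(), clear(), is_set() and wait(), so caller code does not need to know which implementation it received.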
--
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to Ubuntu Cloud Archive.
https://bugs.launchpad.net/bugs/1518430
Title:
liberty: ~busy loop on epoll_wait being called with zero timeout
Status in Ubuntu Cloud Archive:
Fix Committed
Status in Ubuntu Cloud Archive kilo series:
Fix Committed
Status in Ubuntu Cloud Archive liberty series:
Fix Released
Status in Ubuntu Cloud Archive mitaka series:
Fix Released
Status in Ubuntu Cloud Archive newton series:
Fix Released
Status in oslo.messaging:
Fix Released
Status in python-oslo.messaging package in Ubuntu:
Fix Released
Status in python-oslo.messaging source package in Xenial:
Fix Released
Status in python-oslo.messaging source package in Yakkety:
Fix Released
Status in python-oslo.messaging source package in Zesty:
Fix Released
Bug description:
Context: openstack juju/maas deploy using 1510 charms release
on trusty, with:
openstack-origin: "cloud:trusty-liberty"
source: "cloud:trusty-updates/liberty"
* Several OpenStack nova- and neutron- services, at least
nova-compute, neutron-server, nova-conductor,
neutron-openvswitch-agent and neutron-vpn-agent,
show near-busy looping on epoll_wait() calls, most frequently
with a zero timeout.
- nova-compute (chosen because it is single-process) strace and ltrace captures:
http://paste.ubuntu.com/13371248/ (ltrace, strace)
As comparison, this is how it looks on a kilo deploy:
- http://paste.ubuntu.com/13371635/
* 'top' sample from a nova-cloud-controller unit from
this completely idle stack:
http://paste.ubuntu.com/13371809/
FYI, this behavior is *not* seen on keystone, glance, cinder, or
ceilometer-api.
As this issue is present in several components, it likely comes
from a common library (oslo.concurrency?); FYI, the bug was initially
filed against nova itself as a starting point for debugging.
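To make the symptom concrete, the following standalone sketch (Linux-only, purely illustrative, unrelated to the actual service code) shows why a zero timeout turns an event loop into a busy loop: epoll.poll(0) returns immediately, so a loop that keeps passing 0 issues thousands of syscalls per second instead of sleeping in the kernel, which matches the strace captures above.

```python
import select
import time


def count_zero_timeout_polls(duration=0.05):
    """Count how many epoll_wait(..., 0) calls fit in `duration` seconds.

    A zero timeout means epoll.poll() returns immediately, so this loop
    never sleeps in the kernel -- the same shape as the busy loop seen
    in the strace/ltrace captures. (Linux-only; illustrative only.)
    """
    ep = select.epoll()
    calls = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        ep.poll(0)  # zero timeout: returns at once, burning CPU
        calls += 1
    ep.close()
    return calls


print(count_zero_timeout_polls())  # typically tens of thousands of calls
```

A blocking call such as ep.poll(timeout) with a positive timeout would instead park the process in epoll_wait until an event or the deadline, which is the behavior the fix restores.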
Note: The description in the following bug gives a good overview of
the issue and points to a possible fix for oslo.messaging:
https://bugs.launchpad.net/mos/+bug/1380220
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1518430/+subscriptions