[Bug 1341496] Re: corosync hangs inside libqb

Chris J Arges 1341496 at bugs.launchpad.net
Tue Apr 28 17:16:17 UTC 2015


Sponsored for Trusty.

-- 
You received this bug notification because you are a member of Ubuntu
Sponsors Team, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1341496

Title:
  corosync hangs inside libqb

Status in libqb package in Ubuntu:
  Confirmed
Status in libqb source package in Trusty:
  In Progress
Status in libqb source package in Utopic:
  In Progress

Bug description:
  $ lsb_release -rd
  Description:	Ubuntu 14.04 LTS
  Release:	14.04

  $ apt-cache policy libqb0
  libqb0:
    Installed: 0.16.0.real-1ubuntu3
    Candidate: 0.16.0.real-1ubuntu3
    Version table:
   *** 0.16.0.real-1ubuntu3 0
          500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
          100 /var/lib/dpkg/status

  Corosync sometimes hangs inside libqb. I've looked at a hung process with gdb, and I think I've found the problem.
  The problem is the loop here: https://github.com/ClusterLabs/libqb/blob/v0.16.0/lib/ringbuffer.c#L451
  This was fixed in 0.17.0, see: https://github.com/ClusterLabs/libqb/blob/v0.17.0/lib/ringbuffer.c#L451
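
  For illustration, the failure mode looks roughly like the sketch below.
  This is a simplified, hypothetical rendering of the allocation loop, not
  the actual libqb source; the struct and helper names are stand-ins:

  #include <stddef.h>

  struct ringbuffer;                               /* opaque, hypothetical */
  size_t rb_free_space(struct ringbuffer *rb);
  int    rb_reclaim_oldest(struct ringbuffer *rb); /* 0 on success */
  void  *rb_reserve(struct ringbuffer *rb, size_t len);

  /* 0.16.0-style allocation: keep reclaiming until enough space is free.
   * If a reclaim fails to free anything, the loop never makes progress
   * and the calling process (corosync) spins at 100% CPU. */
  void *rb_chunk_alloc(struct ringbuffer *rb, size_t len)
  {
          while (rb_free_space(rb) < len) {
                  rb_reclaim_oldest(rb);           /* return value ignored */
          }
          return rb_reserve(rb, len);
  }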

  I think bumping to 0.17.0 should fix this (at least in backports,
  please?).

  --------------------------------------------------------------------------
  [Impact]

   * libqb does not currently handle ring buffer allocation errors
     properly. As a result, corosync frequently ends up in an infinite
     loop (consuming 100% CPU) as it continuously tries and fails to
     allocate space from the ring buffer, due to erroneous logic when an
     attempt to reclaim space fails. This patch ensures that when the
     reclaim fails, libqb errors out gracefully and allows corosync to
     proceed with execution.

   * This is fixed by cherry-picking the following two commits (the
     corrected behaviour is sketched after this list):
     - https://github.com/ClusterLabs/libqb/commit/00082df49f045053d03bba7713bfff35d2448459
     - https://github.com/ClusterLabs/libqb/commit/47c690dbbc75957ac2354844b8fbf0a9f4791a87
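
   The corrected pattern, again only a hedged sketch with hypothetical
   names rather than the cherry-picked code itself, reports a failed
   reclaim to the caller instead of retrying it forever:

     #include <errno.h>
     #include <stddef.h>

     struct ringbuffer;                               /* opaque, hypothetical */
     size_t rb_free_space(struct ringbuffer *rb);
     int    rb_reclaim_oldest(struct ringbuffer *rb); /* 0 on success */
     void  *rb_reserve(struct ringbuffer *rb, size_t len);

     /* Post-fix behaviour: a failed reclaim is propagated to the caller,
      * which can then drop the log entry or recreate the ring instead of
      * looping forever. */
     int rb_chunk_alloc(struct ringbuffer *rb, size_t len, void **chunk)
     {
             while (rb_free_space(rb) < len) {
                     if (rb_reclaim_oldest(rb) != 0) {
                             return -EAGAIN;        /* give up gracefully */
                     }
             }
             *chunk = rb_reserve(rb, len);
             return *chunk ? 0 : -ENOMEM;
     }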

  [Test Case]

  There is a test case in comment #2.
  The following steps also reliably reproduce the problem for me (I used juju to replicate):

  1. Deploy a 2 node percona-cluster w/ corosync and pacemaker.
  2. Scale the number of units from 2 to 5 nodes.
  3. Observe that one of the corosync instances hits 100% CPU usage (e.g. visible in top) and is stuck.

  e.g.
  juju bootstrap
  # install percona-cluster
  juju deploy -n 2 cs:trusty/percona-cluster
  juju deploy cs:trusty/hacluster

  # configure corosync to use unicast for communication
  juju set hacluster corosync_transport=udpu

  # configure the virtual ip for corosync
  juju set percona-cluster vip=<your-vip>

  # add the relation so juju configures corosync/pacemaker for percona-cluster.
  juju add-relation percona-cluster hacluster

  # wait for juju debug-log to go quiet.
  # then expand the cluster by 3 nodes.
  juju add-unit -n 3 percona-cluster

  
  [Regression Potential]

   * As a result of the changes, this may cause a blackbox log entry to be
     dropped or it may cause a ring to be discarded and a new ring to be
     created.

     - If a log entry is dropped, some information may be missing from the
       blackbox used later for analysis. However, upstream has determined
       that missing a log entry is preferable to hanging the corosync
       process.

     - Rings are discarded as part of the normal corosync communication
       process, and corosync already knows how to properly handle this
       situation, so the risk is small in this area.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libqb/+bug/1341496/+subscriptions


