[Bug 1844195] Re: beegfs-meta lockup with glibc 2.27 on bionic

Ekrem SEREN 1844195 at bugs.launchpad.net
Sat Sep 21 05:13:45 UTC 2019


Hi, we have the same issue. I can confirm that after attaching gdb to
the beegfs-meta process, it resumes normal operation.

On our system the issue seems to recur roughly every 24 hours.

Release: Ubuntu 18.04.1 bionic
Kernel: 4.15.0-39-generic
libc6: 2.27-3ubuntu1
beegfs: 7.1.1

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to glibc in Ubuntu.
https://bugs.launchpad.net/bugs/1844195

Title:
  beegfs-meta lockup with glibc 2.27 on bionic

Status in glibc package in Ubuntu:
  New

Bug description:
  Bug report: Lock up of beegfs-meta with glibc 2.27

  Affected system:

  Release: Ubuntu 18.04.3 bionic
  Kernel: 4.15.0-62-generic
  libc6: 2.27-3ubuntu1
  beegfs: 7.1.3

  We have discovered an issue we believe to be a bug in the version of glibc in
  Ubuntu 18.04 that causes a beegfs-meta service to lock up and become
  unresponsive. (https://www.beegfs.io/)

  The issue has also been observed in three other installations, all running
  Ubuntu 18.04, and does not occur on Ubuntu 16.04 or RHEL/CentOS 6 or 7.

  The affected processes resume normal operation almost immediately after a
  debugger like strace or gdb is attached to the process and then continue to run
  normally for some time until they get stuck again. In the short period between
  attaching strace and the process resuming normal operation we see messages like

  38371 futex(0x5597341d9ca8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 282, NULL, 0xffffffff) = -1 EAGAIN (Resource temporarily unavailable)
  38371 futex(0x5597341d9ca8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 282, NULL, 0xffffffff) = -1 EAGAIN (Resource temporarily unavailable)
  38371 futex(0x5597341d9ca8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 282, NULL, 0xffffffff) = -1 EAGAIN (Resource temporarily unavailable)
  38371 futex(0x5597341d9ca8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 282, NULL, 0xffffffff) = -1 EAGAIN (Resource temporarily unavailable)

  and a CPU load of 100% on one core, and after the process gets unstuck

  38371 futex(0x5597341d9ca8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 282, NULL, 0xffffffff) = -1 EAGAIN (Resource temporarily unavailable)
  38371 futex(0x5597341d9ca8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 282, NULL, 0xffffffff) = -1 EAGAIN (Resource temporarily unavailable)
  38371 futex(0x5597341d9cb0, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 3, NULL, 0xffffffff <unfinished ...>
  38231 futex(0x5597341d9cb0, FUTEX_WAKE_PRIVATE, 2147483647) = 2
  38371 <... futex resumed> )             = 0
  38371 futex(0x5597341d9cb0, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 3, NULL, 0xffffffff <unfinished ...>

  We found this patch [1] to glibc that might be related to the issue and
  rebuilt the official Ubuntu glibc package with only the following diff
  applied. All other changes in the upstream patch only touch tests, the
  Makefile rules that build those tests, and the changelog, so we skipped
  them so that the patch would apply cleanly to the Ubuntu glibc.

  index 5dd5342..85fc1bc 100644
  --- a/nptl/pthread_rwlock_common.c
  +++ b/nptl/pthread_rwlock_common.c
  @@ -314,7 +314,7 @@ __pthread_rwlock_rdlock_full (pthread_rwlock_t *rwlock,
                   harmless because the flag is just about the state of
                   __readers, and all threads set the flag under the same
                   conditions.  */
  -             while ((atomic_load_relaxed (&rwlock->__data.__readers)
  +             while (((r = atomic_load_relaxed (&rwlock->__data.__readers))
                        & PTHREAD_RWLOCK_RWAITING) != 0)
                  {
                    int private = __pthread_rwlock_get_private (rwlock);

  Unfortunately the lockups did not stop after we installed the patched package
  versions and restarted our services. The only thing we noticed was that during
  the lockups, we could not observe high CPU load any more.

  We were able to record backtraces of all of the threads in our stuck processes
  before and after applying the patch. The traces are attached to this report.

  Additionally, to rule out other causes, we examined the internal mutexes
  and condition variables for deadlocks or livelocks produced at the
  application level (BeeGFS routines). We could not find any.

  If you need additional information or testing, we would be happy to
  provide whatever we can to help resolve this issue.

  [1]
  https://sourceware.org/git/?p=glibc.git;a=commit;h=f21e8f8ca466320fed38bdb71526c574dae98026

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1844195/+subscriptions
