[Bug 1731819] [NEW] rbd snap_unprotect deadlock

zhengxiang 1731819 at bugs.launchpad.net
Mon Nov 13 01:33:35 UTC 2017


Public bug reported:

Hello everyone,

I'm using OpenStack Mitaka with Ceph Jewel 10.2.10 to perform snapshot
operations, and the deadlock condition shown below sometimes occurs.

To debug, I located the stuck cinder-volume process and attached gdb to
it (<pid> stands for the actual process ID):

ps -ef | grep cinder-volume
gdb -q python-dbg -p <pid>
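
For reference, here is a minimal sketch of the kind of call that hangs,
going through the rbd Python bindings as in the backtrace below. The
pool and image names are placeholders, not the real ones from my
deployment:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')        # pool name is a placeholder
try:
    image = rbd.Image(ioctx, 'volume-foo')   # image name is a placeholder
    try:
        # This call occasionally never returns; thread 1 in the
        # backtrace below is blocked inside rbd_snap_unprotect()
        # underneath this Python call.
        image.unprotect_snap('snapshot-90259d85-3edc-40e7-b306-cfff1b855cd6')
    finally:
        image.close()
finally:
    ioctx.close()
    cluster.shutdown()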

Inspecting the threads, I found two of them contending for the same lock:

Thread 14 (Thread 0x7f510784c700 (LWP 759193)):
#0  0x00007f513272603e in pthread_rwlock_wrlock () from /lib64/libpthread.so.0
#1  0x00007f5112a4a83c in RWLock::get_write (this=0x5db1258, lockdep=<optimized out>) at ./common/RWLock.h:123
#2  0x00007f5112ad77c5 in WLocker (lock=..., this=<synthetic pointer>) at ./common/RWLock.h:183
#3  librbd::image::RefreshRequest<librbd::ImageCtx>::apply (this=this@entry=0x7f507c02bf10) at librbd/image/RefreshRequest.cc:855
#4  0x00007f5112ad87f8 in librbd::image::RefreshRequest<librbd::ImageCtx>::handle_v2_apply (this=0x7f507c02bf10, result=result@entry=0x7f510784bb2c) at librbd/image/RefreshRequest.cc:655
#5  0x00007f5112ad89ab in librbd::util::detail::C_StateCallbackAdapter<librbd::image::RefreshRequest<librbd::ImageCtx>, &librbd::image::RefreshRequest<librbd::ImageCtx>::handle_v2_apply, true>::complete (this=0x7f507c31e7b0, r=0) at ./librbd/Utils.h:66
#6  0x00007f5112a3eb54 in ContextWQ::process (this=0x6e96b20, ctx=0x7f507c31e7b0) at ./common/WorkQueue.h:611
#7  0x00007f5112c37a7e in ThreadPool::worker (this=0x7c222b0, wt=0x60fe290) at common/WorkQueue.cc:128
#8  0x00007f5112c38950 in ThreadPool::WorkThread::entry (this=<optimized out>) at common/WorkQueue.h:448
#9  0x00007f5132722dc5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f5131d4873d in clone () from /lib64/libc.so.6


Thread 1 (Thread 0x7f5132f10740 (LWP 2617826)):
#0  0x00007f51327266d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f5112a14b60 in Wait (mutex=..., this=0x7ffd0e2ba8f0) at ./common/Cond.h:56
#2  C_SaferCond::wait (this=this@entry=0x7ffd0e2ba890) at ./common/Cond.h:202
#3  0x00007f5112ab5b8e in librbd::Operations<librbd::ImageCtx>::snap_unprotect (this=0x562bdb0, snap_name=snap_name@entry=0x58fa894 "snapshot-90259d85-3edc-40e7-b306-cfff1b855cd6")
    at librbd/Operations.cc:1079
#4  0x00007f51129fb0d4 in rbd_snap_unprotect (image=0x5db10c0, snap_name=snap_name@entry=0x58fa894 "snapshot-90259d85-3edc-40e7-b306-cfff1b855cd6") at librbd/librbd.cc:2385
#5  0x00007f511c32f427 in __pyx_pf_3rbd_5Image_50unprotect_snap (__pyx_v_self=0x7017260, __pyx_v_self=0x7017260, __pyx_v_name=0x58fa870) at rbd.c:12928
#6  __pyx_pw_3rbd_5Image_51unprotect_snap (__pyx_v_self=0x7017260, __pyx_v_name=<optimized out>) at rbd.c:12843
#7  0x00007f5132a1ba62 in PyEval_EvalFrameEx () from /lib64/libpython2.7.so.1.0
......
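
If I read the traces right, thread 14 is blocked in RWLock::get_write()
on the image lock, while thread 1 waits in C_SaferCond::wait() inside
snap_unprotect() for the refresh to finish, apparently while still
holding that same lock. Here is a minimal Python sketch of that
suspected pattern (just an illustration of the lock ordering I think is
happening, not the actual librbd code; running it hangs, which is the
point):

import threading

image_lock = threading.Lock()        # stands in for the image RWLock
refresh_done = threading.Condition()
finished = False

def refresh_worker():
    # Like thread 14: the refresh must take the (write) lock before it
    # can apply its result and signal completion.
    global finished
    with image_lock:                 # blocks forever: the caller holds it
        pass                         # RefreshRequest::apply() would run here
    with refresh_done:
        finished = True
        refresh_done.notify()

# Like thread 1: snap_unprotect() waits for the refresh to finish while
# still holding the lock the refresh needs, so neither side proceeds.
with image_lock:
    threading.Thread(target=refresh_worker).start()
    with refresh_done:
        while not finished:
            refresh_done.wait()      # C_SaferCond::wait(); never wakes up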


The full backtrace is in the attachment.

Thanks a lot if anyone can give advice. ^_^

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: deadlock rbd

** Attachment added: "deadlock backtrace"
   https://bugs.launchpad.net/bugs/1731819/+attachment/5008007/+files/rbd_unprotect_deadlock.txt

-- 
You received this bug notification because you are a member of Ubuntu
OpenStack, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1731819
