[Bug 2078906] Re: Prevent race condition when printing Inode in ll_sync_inode
Dan Bungert
2078906@bugs.launchpad.net
Tue Sep 24 21:50:33 UTC 2024
Untagging sponsors, please resubscribe when ready.
--
You received this bug notification because you are a member of Ubuntu
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/2078906
Title:
Prevent race condition when printing Inode in ll_sync_inode
Status in ceph package in Ubuntu:
Incomplete
Status in ceph source package in Focal:
Won't Fix
Status in ceph source package in Jammy:
Incomplete
Status in ceph source package in Noble:
Incomplete
Status in ceph source package in Oracular:
Incomplete
Bug description:
[Impact]
In the ll_sync_inode function, the entire Inode structure is printed without holding the client lock, which can trigger an assertion failure with the following backtrace:
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140705682900544) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140705682900544) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140705682900544, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffa92094476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffa9207a7f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffa910783c3 in ceph::__ceph_assert_fail (assertion=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>) at ./src/common/assert.cc:75
#6 0x00007ffa91078525 in ceph::__ceph_assert_fail (ctx=...) at ./src/common/assert.cc:80
#7 0x00007ffa7049f602 in xlist<ObjectCacher::Object*>::size (this=0x7ffa20734638, this=0x7ffa20734638) at ./src/include/xlist.h:87
#8 operator<< (os=..., out=warning: RTTI symbol not found for class 'StackStringStream<4096ul>'
...) at ./src/osdc/ObjectCacher.h:760
#9 operator<< (out=warning: RTTI symbol not found for class 'StackStringStream<4096ul>'
..., in=...) at ./src/client/Inode.cc:80
#10 0x00007ffa7045545f in Client::ll_sync_inode (this=0x55958b8a5c60, in=in@entry=0x7ffa20734270, syncdataonly=syncdataonly@entry=false) at ./src/client/Client.cc:14717
#11 0x00007ffa703d0f75 in ceph_ll_sync_inode (cmount=cmount@entry=0x55958b0bd0d0, in=in@entry=0x7ffa20734270, syncdataonly=syncdataonly@entry=0) at ./src/libcephfs.cc:1865
#12 0x00007ffa9050ddc5 in fsal_ceph_ll_setattr (creds=<optimized out>, mask=<optimized out>, stx=0x7ff8983f25a0, i=<optimized out>, cmount=<optimized out>)
at ./src/FSAL/FSAL_CEPH/statx_compat.h:209
#13 ceph_fsal_setattr2 (obj_hdl=0x7fecc8fefbe0, bypass=<optimized out>, state=<optimized out>, attrib_set=0x7ff8983f2830) at ./src/FSAL/FSAL_CEPH/handle.c:2410
#14 0x00007ffa92371da0 in mdcache_setattr2 (obj_hdl=0x7fecc9e98778, bypass=<optimized out>, state=0x7fef0d64c9b0, attrs=0x7ff8983f2830)
at ../FSAL/Stackable_FSALs/FSAL_MDCACHE/./src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1012
#15 0x00007ffa922b2bbc in fsal_setattr (obj=0x7fecc9e98778, bypass=<optimized out>, state=0x7fef0d64c9b0, attr=0x7ff8983f2830) at ./src/FSAL/fsal_helper.c:573
#16 0x00007ffa9234c7bd in nfs4_op_setattr (op=0x7fecad7ac510, data=0x7fecac314a10, resp=0x7fecad1be200) at ../Protocols/NFS/./src/Protocols/NFS/nfs4_op_setattr.c:212
#17 0x00007ffa9232e413 in process_one_op (data=data@entry=0x7fecac314a10, status=status@entry=0x7ff8983f2a2c) at ../Protocols/NFS/./src/Protocols/NFS/nfs4_Compound.c:920
#18 0x00007ffa9232f9e0 in nfs4_Compound (arg=<optimized out>, req=0x7fecad491620, res=0x7fecac054580) at ../Protocols/NFS/./src/Protocols/NFS/nfs4_Compound.c:1327
#19 0x00007ffa922cb0ff in nfs_rpc_process_request (reqdata=0x7fecad491620) at ./src/MainNFSD/nfs_worker_thread.c:1508
#20 0x00007ffa92029be7 in svc_request (xprt=0x7fed640504d0, xdrs=<optimized out>) at ./src/svc_rqst.c:1202
#21 0x00007ffa9202df9a in svc_rqst_xprt_task_recv (wpe=<optimized out>) at ./src/svc_rqst.c:1183
#22 0x00007ffa9203344d in svc_rqst_epoll_loop (wpe=0x559594308e60) at ./src/svc_rqst.c:1564
#23 0x00007ffa920389e1 in work_pool_thread (arg=0x7feeb802ea10) at ./src/work_pool.c:184
#24 0x00007ffa920e6b43 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#25 0x00007ffa92178a00 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
Further analysis of the call trace with GDB shows that both the _front and _back members of the xlist<ObjectCacher::Object*> are zero, yet the assertion (bool)_front == (bool)_size was still triggered.
(gdb) frame 7
#7 0x00007ffa7049f602 in xlist<ObjectCacher::Object*>::size (this=0x7ffa20734638, this=0x7ffa20734638) at ./src/include/xlist.h:87
87 ./src/include/xlist.h: No such file or directory.
(gdb) p *this
$1 = {_front = 0x0, _back = 0x0, _size = 0}
(gdb) frame 6
#6 0x00007ffa91078525 in ceph::__ceph_assert_fail (ctx=...) at ./src/common/assert.cc:80
80 ./src/common/assert.cc: No such file or directory.
(gdb) p ctx
$2 = (const ceph::assert_data &) @0x7ffa70587900: {assertion = 0x7ffa70530598 "(bool)_front == (bool)_size", file = 0x7ffa705305b4 "./src/include/xlist.h", line = 87,
function = 0x7ffa7053b410 "size_t xlist<T>::size() const [with T = ObjectCacher::Object*; size_t = long unsigned int]"}
A race condition occurred: the assertion's operands were read while another thread was
concurrently modifying the list, so the check evaluated an inconsistent snapshot and failed spuriously.
[Fix]
It may not be necessary to print the entire Inode structure; simply printing the inode number should be sufficient.
There is an upstream commit that fixes this issue:
commit 2b78a5b3147d4e97be332ca88d286aec0ce44dc3
Author: Chengen Du <chengen.du@canonical.com>
Date: Mon Aug 12 18:17:37 2024 +0800
client: Prevent race condition when printing Inode in ll_sync_inode
In the ll_sync_inode function, the entire Inode structure is printed without
holding a lock. This can lead to a race condition when evaluating the assertion
in xlist<ObjectCacher::Object*>::size(), resulting in abnormal behavior.
Fixes: https://tracker.ceph.com/issues/67491
Co-authored-by: dongdong tao <tdd21151186@gmail.com>
Signed-off-by: Chengen Du <chengen.du@canonical.com>
[Test Plan]
The race condition might be challenging to reproduce, but we can test to ensure that the normal call path functions correctly.
1. Create a Manila share and mount it locally
openstack share type create nfs_share_type False --description "NFS share type"
openstack share create --share-type nfs_share_type --name my_share NFS 1
openstack share list
openstack share access create my_share ip XX.XX.XX.XX/XX
openstack share show my_share
sudo mount -t nfs <export_location_path> <mountpoint>
2. Create a file and change its permissions, ensuring that all functions work correctly without any errors
touch test
chmod 755 test
[Where problems could occur]
The patch only changes a log statement to avoid the race.
However, if the patch were faulty, it could disrupt Ceph's ll_sync_inode functionality, which is exercised when setting attributes on a Manila share via the NFS protocol.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2078906/+subscriptions