[UBUNTU OEM-6.0 3/5] io_uring: cmpxchg for poll arm refs release
Thadeu Lima de Souza Cascardo
cascardo at canonical.com
Wed Apr 5 00:08:25 UTC 2023
From: Pavel Begunkov <asml.silence at gmail.com>
Replace atomically substracting the ownership reference at the end of
arming a poll with a cmpxchg. We try to release ownership by setting 0
assuming that poll_refs didn't change while we were arming. If it did
change, we keep the ownership and use it to queue a tw, which is fully
capable to process all events and (even tolerates spurious wake ups).
It's a bit more elegant as we reduce races b/w setting the cancellation
flag and getting refs with this release, and with that we don't have to
worry about any kinds of underflows. It's not the fastest path for
polling. The performance difference b/w cmpxchg and atomic dec is
usually negligible and it's not the fastest path.
Cc: stable at vger.kernel.org
Fixes: aa43477b04025 ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov <asml.silence at gmail.com>
Link: https://lore.kernel.org/r/0c95251624397ea6def568ff040cad2d7926fd51.1668963050.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe at kernel.dk>
(cherry picked from commit 2f3893437a4ebf2e892ca172e9e122841319d675)
CVE-2023-0468
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo at canonical.com>
---
io_uring/poll.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 92e3fdd3caa1..9c8064f32aef 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -509,7 +509,6 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 				 unsigned issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	int v;
 
 	INIT_HLIST_NODE(&req->hash_node);
 	req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
@@ -577,11 +576,10 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 
 	if (ipt->owning) {
 		/*
-		 * Release ownership. If someone tried to queue a tw while it was
-		 * locked, kick it off for them.
+		 * Try to release ownership. If we see a change of state, e.g.
+		 * poll was waken up, queue up a tw, it'll deal with it.
 		 */
-		v = atomic_dec_return(&req->poll_refs);
-		if (unlikely(v & IO_POLL_REF_MASK))
+		if (atomic_cmpxchg(&req->poll_refs, 1, 0) != 1)
 			__io_poll_execute(req, 0);
 	}
 	return 0;
--
2.34.1