[SRU Lunar, OEM-6.1, OEM-6.0] io_uring/poll: serialize poll linked timer start with poll removal
Thadeu Lima de Souza Cascardo
cascardo at canonical.com
Tue Jul 4 23:51:49 UTC 2023
From: Jens Axboe <axboe at kernel.dk>
Commit ef7dfac51d8ed961b742218f526bd589f3900a59 upstream.
We selectively grab the ctx->uring_lock for poll update/removal, but
we really should grab it from the start to fully synchronize with
linked timeouts. Normally this is indeed the case, but if requests
are forced async by the application, we don't fully cover removal
and timer disarm within the uring_lock.
Make this simpler by having consistent locking state for poll removal.
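For illustration, the request shape this is about looks roughly like the
following hypothetical liburing sketch (not the reporter's test case; the
fd, user_data values and timeout are made up, and error handling is
omitted): a poll update forced async with IOSQE_ASYNC and chained to a
linked timeout, so that before this change the timer arm/disarm and the
poll removal were not both covered by uring_lock.

	#include <liburing.h>
	#include <poll.h>

	/* Hypothetical sketch only: queue a poll, then a poll update that is
	 * forced async and linked to a timeout. */
	static void queue_poll_update_with_linked_timeout(struct io_uring *ring, int fd)
	{
		struct __kernel_timespec ts = { .tv_nsec = 1000000 };
		struct io_uring_sqe *sqe;

		/* the poll request being updated, tagged with user_data 1 */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_poll_add(sqe, fd, POLLIN);
		io_uring_sqe_set_data64(sqe, 1);

		/* poll update forced async (IOSQE_ASYNC) and linked to a timeout */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_poll_update(sqe, 1, 1, POLLIN, IORING_POLL_UPDATE_EVENTS);
		sqe->flags |= IOSQE_ASYNC | IOSQE_IO_LINK;
		io_uring_sqe_set_data64(sqe, 2);

		/* the linked timeout whose start must be serialized with removal */
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_link_timeout(sqe, &ts, 0);
		io_uring_sqe_set_data64(sqe, 3);

		io_uring_submit(ring);
	}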
Cc: stable at vger.kernel.org # 6.1+
Reported-by: Querijn Voet <querijnqyn at gmail.com>
Signed-off-by: Jens Axboe <axboe at kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
(cherry picked from commit ecc72019f13da7e2217a0cf0ee805785ab5fa374 linux-6.3.y)
CVE-2023-3389
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo at canonical.com>
---
io_uring/poll.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 666666ab2e73..98722021742f 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -975,8 +975,9 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_hash_bucket *bucket;
 	struct io_kiocb *preq;
 	int ret2, ret = 0;
-	bool locked;
+	bool locked = true;
 
+	io_ring_submit_lock(ctx, issue_flags);
 	preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table, &bucket);
 	ret2 = io_poll_disarm(preq);
 	if (bucket)
@@ -988,12 +989,10 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 		goto out;
 	}
 
-	io_ring_submit_lock(ctx, issue_flags);
 	preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table_locked, &bucket);
 	ret2 = io_poll_disarm(preq);
 	if (bucket)
 		spin_unlock(&bucket->lock);
-	io_ring_submit_unlock(ctx, issue_flags);
 	if (ret2) {
 		ret = ret2;
 		goto out;
@@ -1017,7 +1016,7 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 	if (poll_update->update_user_data)
 		preq->cqe.user_data = poll_update->new_user_data;
 
-	ret2 = io_poll_add(preq, issue_flags);
+	ret2 = io_poll_add(preq, issue_flags & ~IO_URING_F_UNLOCKED);
 	/* successfully updated, don't complete poll request */
 	if (!ret2 || ret2 == -EIOCBQUEUED)
 		goto out;
@@ -1025,9 +1024,9 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 	req_set_fail(preq);
 	io_req_set_res(preq, -ECANCELED, 0);
 
-	locked = !(issue_flags & IO_URING_F_UNLOCKED);
 	io_req_task_complete(preq, &locked);
 out:
+	io_ring_submit_unlock(ctx, issue_flags);
 	if (ret < 0) {
 		req_set_fail(req);
 		return ret;
--
2.34.1
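After this change, the resulting shape of io_poll_remove() is roughly the
following (a simplified sketch derived from the hunks above, with
unrelated code elided):

	int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
	{
		...
		bool locked = true;

		io_ring_submit_lock(ctx, issue_flags);	/* taken up front */

		/* both hash-table lookups and the disarm now run under uring_lock */
		...

		/* re-arm for updates without dropping the lock we already hold */
		ret2 = io_poll_add(preq, issue_flags & ~IO_URING_F_UNLOCKED);
		...

		io_req_task_complete(preq, &locked);	/* locked is always true */
	out:
		io_ring_submit_unlock(ctx, issue_flags);	/* single unlock point */
		...
	}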