[PATCH AUTOSEL 6.4 03/15] io_uring: annotate offset timeout races

From: Sasha Levin
Date: Sun Jul 02 2023 - 15:41:28 EST


From: Pavel Begunkov <asml.silence@xxxxxxxxx>

[ Upstream commit 5498bf28d8f2bd63a46ad40f4427518615fb793f ]

It's racy to read ->cached_cq_tail without taking proper measures
(usually grabbing ->completion_lock), which is what timeout requests
with CQE offsets do; however, those requests have never had
well-defined semantics for when they start counting. Annotate the
racy reads with data_race().
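For context only, here is a minimal standalone userspace sketch (not
kernel code; the real data_race() macro lives in include/linux/compiler.h
and cooperates with KCSAN) of what the annotation expresses at a call
site: the load stays a plain access, the macro merely documents that the
race is intentional so KCSAN stops reporting it. The struct and the
simplified macro below are illustrative stand-ins, not the kernel types.

	/* Standalone sketch: a no-op stand-in for the kernel's data_race().
	 * In the kernel the macro additionally tells KCSAN to ignore the
	 * access; the generated code for the read itself is unchanged.
	 */
	#include <stdio.h>

	#define data_race(expr)	(expr)	/* simplified; real macro also silences KCSAN */

	struct ring {
		unsigned int cached_cq_tail;	/* updated under a lock elsewhere */
		unsigned int cq_timeouts;	/* atomic_t in the kernel proper */
	};

	int main(void)
	{
		struct ring ctx = { .cached_cq_tail = 42, .cq_timeouts = 2 };

		/* Lockless snapshot, mirroring the patched line in io_timeout(). */
		unsigned int tail = data_race(ctx.cached_cq_tail) - ctx.cq_timeouts;

		printf("tail = %u\n", tail);
		return 0;
	}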

Reported-by: syzbot+cb265db2f3f3468ef436@xxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
Link: https://lore.kernel.org/r/4de3685e185832a92a572df2be2c735d2e21a83d.1684506056.git.asml.silence@xxxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
io_uring/timeout.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index fc950177e2e1d..350eb830b4855 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -594,7 +594,7 @@ int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
 		goto add;
 	}
 
-	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
+	tail = data_race(ctx->cached_cq_tail) - atomic_read(&ctx->cq_timeouts);
 	timeout->target_seq = tail + off;
 
 	/* Update the last seq here in case io_flush_timeouts() hasn't.
--
2.39.2