Re: [PATCH v1] io_uring: Use io_schedule* in cqring wait

From: Jens Axboe
Date: Fri Jul 07 2023 - 13:12:00 EST


On 7/7/23 10:20 AM, Andres Freund wrote:
> I observed poor performance of io_uring compared to synchronous IO. That
> turns out to be caused by deeper CPU idle states entered with io_uring,
> due to io_uring using plain schedule(), whereas synchronous IO uses
> io_schedule().
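
For background: io_schedule() is just schedule() bracketed by iowait
accounting, roughly this (sketch of kernel/sched/core.c):

	/* sketch: schedule() plus in_iowait accounting */
	void __sched io_schedule(void)
	{
		int token;

		token = io_schedule_prepare();	/* set current->in_iowait, flush block plug */
		schedule();
		io_schedule_finish(token);	/* restore previous in_iowait */
	}

The in_iowait marking is what the cpufreq/cpuidle heuristics look at,
which is why plain schedule() lets the CPU drift into deeper idle states
while the task waits on IO.
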
>
> The losses due to this are substantial. On my Cascade Lake workstation,
> t/io_uring from the fio repository yields regressions between 20% and
> 40%, e.g. with the following command:
> ./t/io_uring -r 5 -X0 -d 1 -s 1 -c 1 -p 0 -S$use_sync -R 0 /mnt/t2/fio/write.0.0
>
> This is reproducible across different filesystems, when using raw block
> devices, and across different block devices.
>
> Use io_schedule_prepare() / io_schedule_finish() in
> io_cqring_wait_schedule() to address the difference.
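
Concretely, the schedule()/schedule_hrtimeout() call in
io_cqring_wait_schedule() ends up bracketed like this (a sketch of the
pattern, not the exact diff):

	int token, ret;

	token = io_schedule_prepare();	/* mark the task as waiting on IO */
	ret = 0;
	if (iowq->timeout == KTIME_MAX)
		schedule();
	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
		ret = -ETIMEDOUT;
	io_schedule_finish(token);	/* undo the iowait marking */
	return ret;
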
>
> After that, io_uring is on par with or surpasses synchronous IO (using
> registered files etc. makes it reliably win, but that is arguably a less
> fair comparison).
>
> There are other calls to schedule() in io_uring/, but none immediately
> jumps out as similarly situated, so I did not touch them. Similarly,
> it's possible that mutex_lock_io() should be used, but it's not clear
> whether there are cases where that matters.
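
For reference, mutex_lock_io() is the same prepare/finish pair wrapped
around mutex_lock() (roughly, from kernel/locking/mutex.c):

	void __sched mutex_lock_io(struct mutex *lock)
	{
		int token;

		token = io_schedule_prepare();
		mutex_lock(lock);
		io_schedule_finish(token);
	}

so it would only matter for mutexes that are commonly contended as part
of waiting for IO.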

This looks good to me, and I also separately tested a similar patch; it
showed good results for me even with a heavily performance-oriented
setup:

        pread2    io_uring    io_uring w/io_sched
QD1     185K      170K        186K
QD2     NA        304K        327K
QD4     NA        630K        640K
QD8     NA        891K        892K

I'll add this, with just one minor cosmetic edit:

> @@ -2575,6 +2575,9 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
>  static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
>  					  struct io_wait_queue *iowq)
>  {
> +	int ret;
> +	int token;

Should just be a single line.
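
That is, combined into one declaration:

	int token, ret;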

And I'll mark this for stable as well. Thanks!

--
Jens Axboe