Re: [RFC PATCH 0/3] cgroup: fsio throttle controller

From: Andrea Righi
Date: Tue Jan 29 2019 - 13:39:46 EST


On Mon, Jan 28, 2019 at 02:26:20PM -0500, Vivek Goyal wrote:
> On Mon, Jan 28, 2019 at 06:41:29PM +0100, Andrea Righi wrote:
> > Hi Vivek,
> >
> > sorry for the late reply.
> >
> > On Mon, Jan 21, 2019 at 04:47:15PM -0500, Vivek Goyal wrote:
> > > On Sat, Jan 19, 2019 at 11:08:27AM +0100, Andrea Righi wrote:
> > >
> > > [..]
> > > > Alright, let's skip the root cgroup for now. I think the point here is
> > > > if we want to provide sync() isolation among cgroups or not.
> > > >
> > > > According to the manpage:
> > > >
> > > > sync() causes all pending modifications to filesystem metadata and cached file data to be
> > > > written to the underlying filesystems.
> > > >
> > > > And:
> > > > According to the standard specification (e.g., POSIX.1-2001), sync() schedules the writes, but
> > > > may return before the actual writing is done. However Linux waits for I/O completions, and
> > > > thus sync() or syncfs() provide the same guarantees as fsync called on every file in the
> > > > system or filesystem respectively.
> > > >
> > > > Excluding the root cgroup, do you think a sync() issued inside a
> > > > specific cgroup should wait for I/O completions only for the writes that
> > > > have been generated by that cgroup?
> > >
> > > Can we account I/O towards the cgroup which issued "sync" only if the write
> > > rate of the sync cgroup is higher than that of the cgroup the page belongs
> > > to? Would that solve the problem, assuming it's doable?
> >
> > Maybe this would mitigate the problem, in part, but it doesn't solve it.
> >
> > The thing is, if a dirty page belongs to a slow cgroup and a fast cgroup
> > issues "sync", the fast cgroup needs to wait a lot, because writeback is
> > happening at the speed of the slow cgroup.
>
> Hi Andrea,
>
> But that's true only for I/O which has already been submitted to the block
> layer, right? Any new I/O yet to be submitted could still be attributed
> to the faster cgroup requesting the sync.

Right. If we could charge the new I/O yet to be submitted to the faster
cgroup issuing the sync, I think we could effectively prevent the
priority inversion problem (the I/O already under writeback should be
negligible).
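(A quick way to see how those two components compare in practice is to
look at the Dirty and Writeback counters in /proc/meminfo, which report
respectively the dirty data yet to be submitted and the data already
under writeback:

  $ grep -E '^(Dirty|Writeback):' /proc/meminfo

so during the test below it's easy to check how much of the pending data
is actually in flight at any given time.)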

>
> Unless the cgroup limits are absurdly low, it should not take very
> long for the already submitted I/O to finish. If that's the case, then in
> practice it might not be a big problem?

I was actually doing my tests with a very low limit (1MB/s for both rbps
and wbps), but I think this shows the problem very well.

Here's what I'm doing:

[ slow cgroup (1MB/s read/write) ]

$ cat /sys/fs/cgroup/unified/cg1/io.max
259:0 rbps=1048576 wbps=1048576 riops=max wiops=max
$ cat /proc/self/cgroup
0::/cg1

$ fio --rw=write --bs=1M --size=32M --numjobs=16 --name=writer --time_based --runtime=30

[ fast cgroup (root cgroup, no limitation) ]

# cat /proc/self/cgroup
0::/

# time sync
real 9m32,618s
user 0m0,000s
sys 0m0,018s
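
For completeness, a rough script that reproduces the same setup from
scratch (a sketch assuming cgroup v2 mounted at /sys/fs/cgroup/unified
and the same 259:0 device as above; run as root):

  # enable the io controller for the root's children (if not done already)
  echo +io > /sys/fs/cgroup/unified/cgroup.subtree_control

  # create the slow cgroup and cap it at 1MB/s read/write on 259:0
  mkdir -p /sys/fs/cgroup/unified/cg1
  echo "259:0 rbps=1048576 wbps=1048576 riops=max wiops=max" \
      > /sys/fs/cgroup/unified/cg1/io.max

  # move this shell into cg1 and dirty pages from there
  echo $$ > /sys/fs/cgroup/unified/cg1/cgroup.procs
  fio --rw=write --bs=1M --size=32M --numjobs=16 --name=writer \
      --time_based --runtime=30

  # meanwhile, from another shell still in the root cgroup:
  #   time sync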

With this simple test I can easily trigger hung task timeout warnings
and make the whole system completely sluggish (even for the processes
running in the root cgroup).
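
The hung task warnings end up in the kernel log, so they can be spotted
for example with something like:

  # dmesg | grep "blocked for more than"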

When fio ends, writeback still takes forever to complete, as you can
see from the insane amount of time that sync takes.

-Andrea